Generative AI gets a lot of press, from image-generating tools like Midjourney and Runway to OpenAI’s ChatGPT. But businesses aren’t convinced of the tech’s potential to positively affect their bottom lines; at least, that’s what surveys (and my colleague Ron Miller’s reporting) suggest.
In a Boston Consulting Group (BCG) poll of more than 1,400 C-suite executives this month, 66% said that they were ambivalent about — or outright dissatisfied with — their organization’s progress on GenAI so far, citing a shortage of talent and skills, unclear roadmaps and an absence of strategy around deploying GenAI responsibly.
To be clear, the execs — who hail from industries including manufacturing, transportation and industrial goods — still see GenAI as a priority. Eighty-nine percent of those responding to the BCG poll ranked the tech as a “top-three” IT initiative for their companies in 2024. But only about half of the poll’s respondents expect GenAI to bring substantial productivity gains (i.e., in the area of 10% or more) to the workforces that they oversee.
The results, taken in tandem with responses to a BCG survey late last year, put into sharp relief the high degree of enterprise skepticism surrounding AI-powered generative tools of any kind. In that earlier survey, which canvassed a group of 2,000 executive decision-makers, more than 50% said that they were “discouraging” GenAI adoption over worries it would encourage bad or illegal decision-making and compromise their employer’s data security.
“Bad or illegal decision-making” touches on copyright violations — a hot-button topic in GenAI.
GenAI models “learn” from examples (e.g., illustrations, photos, ebooks, movies) to craft essays and code, create artwork, compose music and more, but the vendors building the models aren’t necessarily compensating — or informing — the creators of those examples. The legality of training models on copyrighted material sans permission is being hashed out in countless court cases. What might land GenAI users themselves in trouble, however, is regurgitation: when a generative model spits out a near-exact copy of a training example.
In a piece published this week in IEEE Spectrum, noted AI critic Gary Marcus and Reid Southen, a visual effects artist, show how AI systems, including OpenAI’s DALL-E 3, regurgitate data even when not specifically prompted to do so. “[There’s] no publicly available tool or database that users could consult to determine possible infringement, nor any instruction to users as [to] how they might possibly do so,” they write.
Perhaps it’s no surprise, then, that in a poll of Fortune 500 companies by Acrolinx, a content governance startup, nearly a third said that intellectual property was their biggest concern about the use of generative AI.
What might alleviate IP concerns for some corporate decision-makers are pledges of legal protection from GenAI vendors. A growing number of vendors — including IBM, Microsoft, Amazon, Anthropic and OpenAI — have pledged to defend, financially and otherwise, customers using their GenAI tools who end up on the wrong side of copyright litigation.
These policies aren’t the be-all and end-all; most, if not all, lack clarity about how far they actually reach, Reworked’s David Barry notes. (For example, if a user writes prompts that make infringement likely, it’s unclear whether a company such as OpenAI would indemnify them.) But they’re certainly better than nothing, which not so long ago was the norm.
As for execs’ GenAI data security concerns, those may be harder to allay.
Worried that confidential data could end up in the hands of GenAI vendors, companies like Apple, Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo, JPMorgan, Walmart and Verizon have barred their staff from using public GenAI tools like ChatGPT. In response, vendors like OpenAI have updated their data-collection policies to make clear that they don’t train models on corporate data — at least not in all circumstances. Whether that’ll convince potential enterprise customers remains to be seen.
Because of these challenges — and others — 65% of execs answering the January BCG poll believe that it’ll take at least two years before GenAI moves beyond the hype. These execs say that, to take full (but responsible) advantage of GenAI, a significant percentage of their workforce will need upskilling, and AI regulations will have to be settled in each of the countries where their companies operate.
Outside of Europe, regulations aren’t likely to arrive anytime soon and may change as GenAI tech rapidly advances. On a hopeful note, however, the January BCG survey highlights execs who’ve readily embraced GenAI despite the uncertainties.
Among the companies planning to invest more than $50 million in GenAI in 2024, 21% have successfully trained over a quarter of their workforce on GenAI tools, according to the survey. Seventy-two percent of GenAI big spenders are already preparing for AI regulations, while 68% have guardrails in place for using GenAI at work.
“This is the year to turn gen AI’s promise into tangible business success,” BCG CEO Christoph Schweizer said in an emailed statement. “Almost every CEO, myself included, has experienced a steep learning curve with gen AI. When technology is changing so quickly, it can be tempting to wait and see where things land. But with gen AI, the early winners are experimenting, learning, and building at scale.”