Goldman Sachs Report Casts Doubt on GenAI
- By Paul Mah
- July 17, 2024
Will GenAI ever pay off? A new research paper from investment giant Goldman Sachs questions the economic viability of generative AI, pointing out that the huge spending has had “little to show for it” so far. And as tech giants continue to pour in tens of billions of dollars, the question arises whether the outlandish spending is justified.
The paper, titled “Gen AI: Too Much Spend, Too Little Benefit?”, is based on a series of interviews with Goldman Sachs economists and researchers, an infrastructure expert, and an MIT professor.
Not so fast
According to estimates by MIT professor Daron Acemoglu, only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years. Specifically, he forecasts that AI will increase US productivity by only 0.5% and GDP growth by only 0.9% cumulatively over the next decade.
“Generative AI has the potential to fundamentally change the process of scientific discovery, research and development, innovation, new product and material testing, etc., as well as create new products and platforms,” said Acemoglu. However, the focus and architecture of generative AI technology means that “truly transformative” changes won’t happen quickly – and are unlikely to happen over the next 10 years.
“[For now] AI technology will instead primarily increase the efficiency of existing production processes by automating certain tasks or by making workers who perform these tasks more productive.”
Gains to be made
Goldman Sachs senior global economist Joseph Briggs is more optimistic. He estimates that GenAI will ultimately automate 25% of all work tasks and raise US productivity by 9% and GDP growth by 6.1% cumulatively over the next decade.
While Briggs acknowledges that automating many AI-exposed tasks isn’t cost-effective today, he argues that the large potential for cost savings and the likelihood that costs will decline over the long run should eventually lead to more AI automation.
He says his forecast takes into account productivity gains from labor reallocation and the creation of new tasks, on top of the primary cost savings from workers completing existing tasks more efficiently.
“Our productivity estimates incorporate both worker reallocation – via displacement and subsequent reemployment in new occupations made possible by AI-related technological advancement – and new task creation that expands non-displaced workers’ production potential,” he said.
Questions to ask
Ultimately, there remains great uncertainty around GenAI. Will the next few models continue to bring jaw-dropping improvements, or are we already close to the plateau of what is possible? And even if there are more gains to be made, will achieving them prove too costly?
Already, the newest generation of models costs as much as USD 1 billion to train, powered by enormously powerful 100,000-GPU clusters. As more GPUs and more quality data are needed, this could conceivably balloon to tens of billions or even a hundred billion dollars.
Finally, is a transformer-based model a dead end for intelligence, given that it generates answers probabilistically based on training data? Could GenAI do anything more than generate paragraphs of occasionally accurate text?
The report can be accessed here (pdf).
Paul Mah
Paul Mah is the editor of DSAITrends, where he reports on the latest developments in data science and AI. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose.