Microsoft, OpenAI Planning USD100B AI Supercomputer
- By Paul Mah
- April 03, 2024
Microsoft and OpenAI are planning to build a USD100 billion supercomputer packed with millions of specialized chips to power the next generation of AI.
According to a report by The Information, the US-based supercomputer will be known as “Stargate” and could launch as soon as 2028.
Plans have already been drawn up for the massive data center project, though it is understood that Microsoft’s involvement is contingent on OpenAI fulfilling its promise to boost the capabilities of its AI.
OpenAI currently uses Microsoft data centers to power ChatGPT. In return, Microsoft has exclusive rights to resell OpenAI's technology to its customers.
Stargate will be built in five phases and, when completed, could far exceed the computing power currently supplied to OpenAI.
It would require several gigawatts of power to run, roughly the total amount consumed by all the data centers of the top three countries in the Asia Pacific combined.
"We are always planning for the next generation of infrastructure innovations needed to continue pushing the frontier of AI capability,” said Microsoft in response to a query from Business Insider.
Tech giants like Meta are racing to build ever-larger supercomputers with hundreds of thousands of GPUs, putting cutting-edge AI research increasingly out of reach for everyone else, including academia and most governments.
For instance, Meta announced in a blog post last month a major new investment in AI: two GPU clusters that will together add 48,000 of Nvidia’s top-of-the-line H100 GPUs to its capacity.
“By the end of 2024, we’re aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100 GPUs as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s,” said Meta.
GPUs are also growing increasingly powerful. Last month, Nvidia unveiled its next-generation “Blackwell” GPUs, which it says can train trillion-parameter AI models such as GPT-4 much faster, with fewer GPUs and a much lower power budget.
Image credit: iStock/Andrey Semenov
Paul Mah
Paul Mah is the editor of DSAITrends, where he reports on the latest developments in data science and AI. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose.