DeepMind CEO Urges Caution With AI
- By Paul Mah
- January 18, 2023
With all the interest generated by ChatGPT and DALL-E 2 of late, one might be forgiven for thinking that OpenAI is the only organization in town working on AI. Certainly, there is no question that the breathtaking images and responses from generative AI models have captured our collective imagination.
Demis Hassabis, CEO of AI firm DeepMind, is urging caution, however. In a rare interview, the man who arguably brought AI into the mainstream has called on the tech industry to slow down and more fully consider the potential impacts of AI technology.
For the uninitiated, it was under Hassabis that DeepMind developed AlphaGo, the program that beat Go champion Lee Sedol in 2016 – a full decade ahead of expert projections. It achieved this feat using reinforcement learning, a technique in which the AI plays millions of games against itself to fine-tune the weights that determine its choices.
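To make the concept concrete, here is a minimal, purely illustrative self-play sketch in Python – a tabular value learner for tic-tac-toe rather than DeepMind’s actual system, which paired deep neural networks with Monte Carlo tree search. The constants and helper names here are my own invention.

```python
import random

# The eight winning lines of a 3x3 board, indexed 0-8.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != "." and board[i] == board[j] == board[k]:
            return board[i]
    return None

values = {}                    # board string -> estimated outcome for X
EPSILON, ALPHA = 0.1, 0.05     # exploration rate, learning rate

def pick_move(board, player):
    moves = [i for i, c in enumerate(board) if c == "."]
    if random.random() < EPSILON:          # explore occasionally
        return random.choice(moves)
    best = max if player == "X" else min   # X maximizes, O minimizes
    return best(moves, key=lambda m: values.get(
        board[:m] + player + board[m+1:], 0.0))

def self_play_game():
    board, player, visited = "." * 9, "X", []
    while winner(board) is None and "." in board:
        m = pick_move(board, player)
        board = board[:m] + player + board[m+1:]
        visited.append(board)
        player = "O" if player == "X" else "X"
    reward = {"X": 1.0, "O": -1.0, None: 0.0}[winner(board)]
    for state in visited:                  # pull each visited state's
        v = values.get(state, 0.0)         # value toward the final outcome
        values[state] = v + ALPHA * (reward - v)

for _ in range(50_000):                    # real systems play millions
    self_play_game()
print(f"learned values for {len(values)} board states")
```

The core loop is the same idea at toy scale: the agent generates its own training data by playing itself, and each game’s outcome nudges the value estimates that guide its future move choices.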
Why we need to slow down
“I would advocate not moving fast and breaking things,” Hassabis said in an interview with Time Magazine published last week. He was referring to an old Facebook motto that encouraged engineers to release new capabilities as quickly as possible, even to the extent of breaking systems that worked.
According to Hassabis, AI tools are close to the point where they have the potential to be deeply damaging to human civilization if misused.
“When it comes to very powerful technologies – and obviously AI is going to be one of the most powerful ever – we need to be careful. Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material,” he explained.
And getting it right matters because we are the guinea pigs here. As I wrote last year, AI can potentially be misused to generate deadly pathogens or novel chemical weapons – and this could be achieved simply by reversing ML models designed to weed out toxicity in drugs.
Indeed, Hassabis attributed his 2014 decision to sell DeepMind to Google – turning down a bigger offer from Facebook – to Google sharing his concerns around the misuse of AI. For now, DeepMind has published “red lines” against unethical uses of its technology, including surveillance and weaponry. It also has an internal ethics board, with representatives from all areas of the company, that runs a separate review process.
Breakthroughs at DeepMind
To be clear, Hassabis isn’t asking his rivals to slow down because DeepMind couldn’t keep up. Last year, DeepMind unveiled Gato, a multi-modal AI system that can perform a wide variety of tasks, from playing video games and stacking blocks with a robotic arm to serving as a chatbot. The Gato agent is pre-trained on 604 distinct tasks and can outperform humans in many of them.
In April last year, it also published Chinchilla, a compute-optimal language model that proved model size is not the only consideration when it comes to improving machine learning. With a mere 70 billion parameters, the Chinchilla model was shown to outperform OpenAI’s GPT-3 (175B parameters) and DeepMind’s own Gopher (280B).
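The intuition behind that result can be checked with the widely cited rule of thumb that training compute scales as roughly 6 × parameters × training tokens. In the back-of-the-envelope sketch below, the parameter and token counts are the publicly reported figures for the two models; the 6ND constant is an approximation, not DeepMind’s exact accounting.

```python
# Back-of-the-envelope comparison using the common approximation that
# training compute C ~= 6 * N * D FLOPs, where N is the parameter
# count and D is the number of training tokens.
def train_flops(params, tokens):
    return 6 * params * tokens

gopher = train_flops(280e9, 300e9)        # 280B params, 300B tokens
chinchilla = train_flops(70e9, 1.4e12)    # 70B params, 1.4T tokens

print(f"Gopher:     {gopher:.2e} FLOPs")      # ~5.0e23
print(f"Chinchilla: {chinchilla:.2e} FLOPs")  # ~5.9e23
```

In other words, the two models consumed comparable compute – Chinchilla simply spent its budget on 4x fewer parameters and over 4x more data, and came out ahead.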
And DeepMind is currently weighing the release of its chatbot, Sparrow, as a private beta later this year. According to the Time report, the delay gives DeepMind time to work on reinforcement learning-based features that ChatGPT lacks, such as citing sources.
The road to AGI
Ultimately, Hassabis has a vision of achieving AGI, or Artificial General Intelligence, where AI systems gain a human-like ability to reason and learn new skills. Such systems could help create a world of radical abundance in which inequality is eliminated.
But as organizations race toward AGI, Hassabis thinks the next phase of AI development might look very different from the last decade, when the AI industry published its findings openly. “We’re getting into an era where we have to start thinking about the freeloaders, or people who are reading but not contributing to that information base.”
He isn’t the only one with this opinion. While many major AI tools today are open source, the intensifying race to develop more capable AI may see organizations holding back, according to Toby Walsh, a professor of AI at UNSW Sydney.
Moreover, early players such as OpenAI could well benefit from a vast torrent of user feedback that is prohibitively expensive to acquire any other way, giving them an insurmountable lead over other research organizations.
In a nutshell, progress in AI could well slow down.
Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].
Image credit: iStockphoto/maconline99