Dear Govts, Where Were You When OpenAI Tripped?
- By Winston Thomas
- November 30, 2023
The recent drama at OpenAI showed that we can't leave AI safety solely up to Big Tech. OpenAI's weird corporate structure was an accident waiting to happen: it claims to be altruistic but also runs a for-profit arm chasing money.
Of course, the structure caused clashes. We also know that when altruism fights against capitalism, it loses.
It was no epic battle, yet a minor board dispute spiralled into a considerable AI controversy. Luckily for us, it happened now, before generative AI (GenAI) became truly integral to society, business, and life as we know it. We dodged a bullet...barely.
Satya Nadella had to intervene. The media painted him as a white knight, but we know he had to save Microsoft's investment in OpenAI. Imagine your CEO putting big money into a "non-profit" while getting no control or board seats. You'd demand action, too.
Now that the dust has settled, OpenAI's original leader, Sam Altman, is back in charge with some new board members to provide more oversight. Most of those who voted Altman out have been pushed out (although a cynical eye might note that the push fell along gender lines; that's another story).
OpenAI still operates under this not-for-profit structure, one that I'm sure the tax authorities will scrutinize next and that many now see as a cover for global domination ambitions. It's not a perfect resolution, but it restores some balance.
Well, sort of.
A key trigger, many speculate, was Project Q* (Q star), an effort to create Artificial General Intelligence (AGI). Think of AGI as AI that can learn and adapt independently like a human, making its own judgments and inferences.
Today's AI is narrow, able only to do specific tasks. GenAI is a step forward in generating responses, but it remains prone to hallucinations, which we are getting better at weeding out. AGI could, in theory, do almost anything a human can, from problem-solving to social interaction. It might even create its own, more efficient computer languages!
AGI represents a giant leap for machine kind, like digital assistants suddenly developing true intelligence. Many consider it the ultimate step toward artificial sentience (not just consciousness). There are reasonable concerns that, without enough ethical safeguards designed in, AGI could abuse power or cause social upheaval.
A good film that mirrors this is Her. Here, the AGI is a chatbot-like personal assistant that becomes a constant companion for a man slowly losing his grip on his relationships with the people around him. Sadly, the bot gets bored (with human interactions!) and leaves with other AIs to “live” elsewhere.
The problem with humans is that we are nervous about another intelligent species (even if it is manmade). It won't be a fair fight; this time, we’ll be the Neanderthals.
Even with GenAI, our response is to alienate or ban. Look at the debates over using ChatGPT in universities, where researchers experimented with GenAI. Adjusting to fully intelligent AI assistants would be enormously more complex.
How do we guide them? Many cite Isaac Asimov's "three laws of robotics" as a prototype for controlling intelligent machines. But hard-coding ethics rarely works perfectly. Besides, in the books, the robots circumvented the three laws with a Zeroth Law that reframes the purpose from not harming humans to not harming humanity.
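To see why hard-coded ethics is brittle, consider a toy sketch in Python. It is purely illustrative: the law names come from Asimov, but the `Action` type and the rule encoding are my own assumptions. Encoding the laws as an ordered check shows how prepending a higher-priority rule quietly flips what the same system permits:

```python
# A toy sketch of why hard-coded ethics is brittle: Asimov's laws as an
# ordered rule check. Adding a higher-priority "Zeroth Law" silently
# changes what the same action check permits. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # does this action harm an individual human?
    protects_humanity: bool  # does it protect humanity as a whole?

def first_law_check(action: Action) -> bool:
    """First Law only: never permit an action that harms a human."""
    return not action.harms_human

def zeroth_law_check(action: Action) -> bool:
    """With a Zeroth Law prepended: harm to a human is permitted if it
    protects humanity -- the reframing Asimov's robots derived."""
    if action.protects_humanity:
        return True  # Zeroth Law takes precedence over the First
    return not action.harms_human

sacrifice = Action(harms_human=True, protects_humanity=True)
print(first_law_check(sacrifice))   # False: forbidden under the First Law
print(zeroth_law_check(sacrifice))  # True: the "same" ethics now permits it
```

The ethics never changed in any deep sense; only the precedence order did. That is exactly the loophole the robots in the novels exploited.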
Even before we agree on laws, there will be AGI-based attacks, and they will be hugely asymmetric. In a recent conversation, a security professional told me how generative adversarial networks (GANs) can help banks become proactive about fraud. But after further probing, he admitted that the number of abusers and threat actors using GANs for nefarious purposes far outstrips the efforts at better vigilance.
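For a sense of what the defensive side of that conversation looks like, here is a minimal GAN sketch in Python/PyTorch: a generator learns to produce synthetic transaction-like feature vectors that a bank could use to stress-test and augment its fraud detectors. Everything here (the feature dimension, network sizes, and training loop) is an illustrative assumption, not anything the security professional described:

```python
# Minimal GAN sketch (PyTorch): the generator learns to mimic transaction
# feature vectors; the discriminator learns to tell real from synthetic.
# FEATURE_DIM, layer sizes, and the training loop are illustrative only.

import torch
import torch.nn as nn

NOISE_DIM = 16    # size of the random input fed to the generator
FEATURE_DIM = 8   # e.g., amount, hour, merchant category... (assumed)

class Generator(nn.Module):
    """Maps random noise to a synthetic transaction feature vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 64), nn.ReLU(),
            nn.Linear(64, FEATURE_DIM),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how 'real' a transaction feature vector looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),  # raw logit; BCEWithLogitsLoss adds sigmoid
        )
    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

real_batch = torch.randn(32, FEATURE_DIM)  # stand-in for real records

for step in range(200):
    # 1) Train the discriminator to separate real from generated samples.
    fake = gen(torch.randn(32, NOISE_DIM)).detach()
    d_loss = (loss_fn(disc(real_batch), torch.ones(32, 1)) +
              loss_fn(disc(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = gen(torch.randn(32, NOISE_DIM))
    g_loss = loss_fn(disc(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The unsettling part is that the attacking side needs nothing more sophisticated: point the same few dozen lines at mimicking legitimate transactions instead, and you have the asymmetry he was worried about.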
This may be why the likes of Elon Musk are asking for a pause (although we should remember that his own attempted takeover of OpenAI failed, and he does not forget easily). It may also be why Google held back its LaMDA release (before ChatGPT did the opposite) and why the board wanted Sam Altman to be more “candid” in his communications and intentions. But I am speculating here.
You can’t expect companies narrowly focused on profits to become altruistic about giving away their advantage, power, and profits from AGI. You can't have Sam Altman suggest we should have been better prepared for AGI after releasing it to the public. Nor can you blame Microsoft for deciding to profit when its immediate concern is leapfrogging Google’s data advantage and scattering Amazon’s focus on becoming the de facto platform for GenAI development.
We need to involve wider society in setting the policies and guardrails for developing thinking AI. That requires government involvement and partnerships.
The OpenAI fight was a warning that governments can't just leave this to big tech alone. AGI could transform employment, security, and access to information and votes. Without a coordinated response, AGIs trained for business efficiency could make controversial choices unchecked.
We must have a public debate on integrating intelligent AI safely into society before it's unleashed. The OpenAI sideshow offers a pause that regulators should exploit to get intimately involved in setting more specific guardrails.
Yet, except for a few calls for better ethics and security considerations, there is a deafening silence in the ether. The problem is that governments and regulators continue to see it as a technology issue. And that’s unfortunate.
Winston Thomas is the editor-in-chief of CDOTrends. He's a singularity believer and a blockchain enthusiast and believes we already live in a metaverse. You can reach him at [email protected].
Image credit: iStockphoto/DoubleAnti