AI Legal Minefield Awaits Unprepared Companies
- By Lachlan Colquhoun
- July 31, 2023
It's difficult to know how to respond to last week's news that four of the most influential companies in the artificial intelligence industry have created a collective body to guide AI's ethical development.
Should we be reassured by the words of Brad Smith, the Microsoft president, when he says that “companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control”?
Or should it be a concern that the four companies, comprising Microsoft, ChatGPT developer OpenAI, Anthropic, and Google, which owns U.K.-based DeepMind, are pursuing a model of self-regulation that has been found wanting in the past?
The Frontier Model Forum, formed by the four companies, says its focus is the "safe and responsible" development of frontier AI models. Still, this approach is often characterized as putting “Dracula in charge of the blood bank.”
Another alarming analogy came in response to Meta’s recent announcement that it was releasing an AI model to the public, with one expert—Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton—describing it as “a bit like giving people a template to build a nuclear bomb.”
At face value, though, it all sounds reassuring enough. The companies have pledged to ensure their products meet safety requirements before releasing them to the public.
They say they will allow independent experts to test and evaluate their systems, and that they will develop mechanisms to inform users when they are looking at AI-generated content, likely through some kind of watermarking system.
In all this, however, perhaps the most alarming issue is that we will need time to understand the true impact of AI, yet it is being implemented at scale right now.
Many of these decisions and projects will have an impact we will only begin to understand in the future. By that point, it may be too late for any remediation.
In the meantime, we are supposed to trust that the AI vendors are doing the right thing. We should ask if that is good enough.
Legal checks
More encouraging is research from Gartner, which surveyed AI users and found a significant commitment to legal and ethical due diligence as part of AI use cases.
The research was conducted in late 2022 among 622 organizations that have deployed AI in more than five use cases across business units and production processes for over three years.
It found that the most significant differentiator identified among AI-mature organizations was the involvement of legal counsel at the ideation stage of AI use cases.
AI-mature organizations were 3.8 times more likely than their less experienced peers to involve legal experts at the ideation phase of an AI project’s life cycle.
“There is uncertainty around the ethics and legality of various AI tactics, as well as a fear of violating privacy regulations,” said Erick Brethenoux, a vice president analyst at Gartner.
“Organizations that are more experienced with AI do not want to be told they’ve crossed a line once they are further along in the process of developing an AI use case.”
Risk factors
The Gartner study also shows how AI is becoming ubiquitous and going mainstream.
It found that 55% of organizations that have previously deployed AI always consider AI for every new use case that they are evaluating. More than half of organizations—52%—report that risk factors are critical when evaluating new AI use cases.
Gartner’s Erick Brethenoux says this AI-first strategy is a “hallmark of AI maturing and driver of increased return on investment.”
“While AI-mature organizations are more likely to consider AI for every possible use case, they are also more likely to weigh risk as a critical factor when determining whether to move forward,” he said.
All of this plays into the debate on whether appropriate legislation and regulation are the best approach to address the challenges of AI.
If organizations implementing AI are to consider the legal implications at the earliest point in their planning, then the legal frameworks they rely on need to be clear, comprehensive, and rigorous.
Could this deliver more certainty than relying on self-regulation from an industry group that does not even represent all of the key players?
So far, the European Parliament has responded, approving AI legislation in June.
The AI Act aims to promote "human-centric and trustworthy AI," introducing "obligations for providers and those deploying AI systems" and proposing bans on any intrusive and discriminatory use of the technology.
It might be a good start, but other jurisdictions—notably the U.S.—are taking a different approach, which risks a global misalignment that could add to the current confusion.
The world does not need a piecemeal legislative approach at this tipping point in AI adoption.
Mature users know the risks and want clarity about them before investing.
Legislation that protects the public can also give organizations more certainty around their investment and play a positive role in creating a vibrant—and also safe—AI future.
Lachlan Colquhoun is the Australia and New Zealand correspondent for CDOTrends and the NextGenConnectivity editor. He remains fascinated with how businesses reinvent themselves through digital technology to solve existing issues and change their entire business models. You can reach him at [email protected].