The Legal Jeopardy Hiding In Your AI Model
- By Winston Thomas
- March 29, 2023
ChatGPT ushered in a new wave of enthusiasm for AI. Many developers who were laid off have founded startups to join the generative AI gold rush.
Yet, will this unabated fervor lead them into legal jeopardy as governments shore up their AI laws? According to lawyers, it’s a significant concern that today’s AI and data teams need to pay close attention to.
E.U. throws down the AI gauntlet
A major legal framework to watch out for is the E.U. AI Act.
Ling Ho, the litigation and dispute resolution partner at Clifford Chance, calls it a “game changer” for how AI will be regulated in the E.U. and beyond. It is a risk-based framework for the governance of the entire AI supply chain, with extra-territorial reach and significant sanctions.
It’s still in the works. “The E.U.'s AI Liability Directive is being negotiated in parallel to introducing harmonizing measures on civil liability and compensation for damage caused by AI,” says Ho.
“Important concepts under both, including how tightly AI is defined, remain to be confirmed through negotiations,” he adds.
However, you can be sure that other countries are watching this framework closely. Like GDPR, Ho feels this Act will inspire other regions and countries to create their own frameworks.
“Financial regulators will focus on systems and controls and how well financial institutions are managing the practical implications of AI use. Crucial topics to consider include AI systems' fitness for purpose, accurate marketing, testing and accuracy. Firms should focus on consumer impact to avoid censure,” Ho advises.
Enforcement has already begun
Governments are not waiting for the regulations. In some jurisdictions, enforcement action has already begun. This means that the work of data teams building AI models now directly affects their organization's reputation.
“Reputational concerns are becoming increasingly important in the face of growing enforcement activity. There will also be an increased antitrust enforcement focus on businesses using AI and data, and we will see more AI dispute cases heard in local courts, which are critical to building out the limited existing body of case law,” says Ho.
For example, the OAIC, the data regulator in Australia, is already taking quick action to investigate last year’s Medlab and Medibank data breaches.
“We have also seen the Australian anti-trust regulator, the ACCC, recently prevail in its legal proceedings against a workplace relations advisory business for using Google Ads, which gave the impression that the business had affiliations with the Australian government,” says Robert Tang, counsel at Clifford Chance.
Tang sees these examples as pointing to a growing trend in APAC where regulators are turning their attention to legal risks posed by AI adoption. They are also laying down the “steps to mitigate potential harm to the public,” he explains.
Will APAC take a different route?
The E.U. is no stranger to creating an encompassing Act. You just need to look at GDPR, which impacts any business dealing with E.U. persons and imposes fines based on annual revenues.
The Asia Pacific region is not the E.U. and may be taking a slightly different route. Many countries and territories are regulating AI through existing legal frameworks, sometimes combined with sector-specific regulations.
Ho does not see an intent in APAC jurisdictions to create a similar AI Act — at least not now. Instead, they are looking to build on existing frameworks, such as data protection and security laws, to guide them on responsible AI deployment.
“This principle-based, technology-neutral approach to regulating artificial intelligence can be seen in Hong Kong and Singapore,” says Ho.
Janice Goh, partner at Cavenagh Law LLP, cited the example of Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 02. (Cavenagh Law LLP and Clifford Chance Pte Ltd are registered in a Formal Law Alliance under the name Clifford Chance Asia.)
The Singapore Court had to decide whether certain failings in algorithmic trading software amounted to a mistake that entitled the party relying on the software to cancel the trades.
“The Singapore Court applied conventional legal principles to arrive at its decision that there was no such mistake which rendered the contract void, on the basis that it was necessary to have regard to the mindset of the programmer when the relevant programs were written, not at the later time when contracts were entered into,” Goh explains.
“The decision, including the views of the dissenting judges, highlights difficulties faced by the courts in adapting traditional legal principles to novel contexts,” she adds.
China, however, remains an essential outlier in the region. It is planning to create its own regulatory framework for AI by 2030, and the country is already laying down the foundations for it.
“Over the years, it has been active in introducing AI standards and ethics — for example, by amending civil law to enhance data security and rights. In 2021, the Personal Information Protection Law was implemented alongside the cybersecurity law. Various regulations and policy papers are routinely rolled out, not only on a nationwide basis but also on a local governmental basis,” Ho notes.
Can you protect your AI from litigation?
With generative AI, it is clear that today’s algorithms can create content, albeit based on previously ingested data. But does this output qualify for copyright protection?
“Yes, especially if the AI by-product is generated due to a process that includes tangible human intervention,” Ho answers.
“For example, some AI software is only able to draw an abstract picture as a result of having been trained with pictures inputted by a human. Here the selection of input pictures is key; therefore, the AI-generated content is also the result of human creativity, which is key to obtaining copyright protection in most jurisdictions,” he explains.
Not all AI-generated works are protected. The International Association for the Protection of Intellectual Property (AIPPI) concluded in 2019 that “one may reasonably exclude AI-generated works with no human intervention from enjoying copyright protection because they lack a human component of creativity,” Ho points out.
“This is the position taken by the U.S. Copyright Office, which denied a request to register artwork created by AI. The situation is similar in France, Germany, and the U.K., where courts have been reluctant to recognize any person other than a natural person as the author of a copyrighted work,” he adds.
This means the IP framework for AI-generated content will depend on where it was created or copyrighted. That’s because local IP laws differ.
“In the landmark Tencent judgment, the Nanshan District People’s Court held that an article written by AI software was entitled to copyright protection; however, the copyright vested in the developers of the software as opposed to the AI system itself,” says Ho.
However, there are so-called related rights as well. These are rights in a work that are not connected with the work's actual author, and they may provide some form of protection, “but these may be weaker,” says Ho.
“A potential collateral effect of this may be the incentive to claim that AI had human intervention — where that wasn't the case — in order to obtain the stronger protection afforded by copyright,” he adds.
Patents face similar problems.
In Australia, for example, the Full Court ruled in April 2022 that only natural persons — human individuals, not businesses or AI systems — can be an inventor under Australian patent law. The patent applicant's request for leave to appeal the decision was dismissed, so for the time being, the Federal Court's decision is final.
“The 2022 decision aligns Australia with the approach adopted by other jurisdictions, including the U.K., the European Patent Office (EPO), and the U.S. Patents and Trademark Office (USPTO). South Africa is the only remaining jurisdiction to have named DABUS (the AI system) as an inventor on a patent, and it remains to be seen whether South Africa’s courts will review this decision in the future,” Ho observes. DABUS stands for Device for the Autonomous Bootstrapping of Unified Sentience.
Patent offices worldwide, including those of New Zealand, the EPO, and the U.S., have issued rulings refusing to allow AI to be listed as an inventor, at least as a sole inventor.
“It is generally recognized that reforms in relevant legislation will be needed to provide courts with a legal basis to rule otherwise. Singapore, in some ways, brought its copyright law more up to date with current technology, particularly how content is created, distributed and used,” says Ho.
Start getting your AI house in order first
AI and data teams don’t have the luxury of waiting for the legal frameworks for AI-generated data or algorithms to be updated or introduced. It may be time to conduct a legal assessment or work closely with their legal teams.
Existing IP and related laws are indeed struggling to keep pace with the age of machine learning. But when they do, the deployment of AI “will inevitably lead to further litigation as a result of AI's reliance on works of individual creators,” says Ho.
Winston Thomas is the editor-in-chief of CDOTrends and DigitalWorkforceTrends. He’s a singularity believer, a blockchain enthusiast, and believes we already live in a metaverse. You can reach him at [email protected].
Image credit: iStockphoto/ARMMY PICCA