Human Error 404: Stop Blaming "The Robots" for Purcell Photo Debacle
- By Lachlan Colquhoun
- February 05, 2024
AI is rarely out of the news, and last week's clickbait in Australia was about doctoring a photograph of Victorian politician Georgie Purcell.
In a nightly news broadcast by Channel 9 for a story on duck hunting, a photograph of the young Animal Justice Party MP was edited to enlarge her breasts and alter her clothing, putting her in a midriff top that exposed her stomach.
Amid the outrage that followed, the bosses at Channel 9 blamed the AI robots and denied responsibility.
"Our graphics department sourced an online image of Georgie to use in our story on duck hunting," said Nine's news director, Hugh Nailon.
"As is common practice, the image was resized to fit our specs. During that process, the automation by Photoshop created an image that was not consistent with the original. This did not meet the high editorial standards we have, and for that, we apologize to Ms. Purcell unreservedly."
No humans were involved in changing the image, according to Nine, but this was hotly denied by software provider Adobe.
“Any changes to this image would have required human intervention and approval,” an Adobe spokesperson said, opening up a war of words with the network.
The comments from Adobe are consistent with the message coming from the industry.
Microsoft's AI design assistant is called 'Copilot' because the company says that without the human as the pilot, the copilot—the AI—can’t do its job.
In the aftermath of the affair, Nine's claim that the robots were responsible was not widely believed.
It would be strange for an AI program to embellish a photograph this way unless a human had directed it to do so.
Contentious applications
Along with AI-created deepfakes of people like Taylor Swift and Donald Trump, the media, entertainment, and fashion industries are wrestling with the ethical supply chain surrounding the creation of images.
AI-powered image generation software from Canva is becoming ubiquitous, and its applications can be contentious.
For example, up-and-coming fashion labels are creating images of their designs with AI software because they lack the funds for photo shoots with real human models.
While this allows the labels to scale up and get their product out through media channels, there are grumbling accusations from the modeling industry, which sees itself as an early casualty of AI: a classic case of human jobs being replaced.
Over-regulation
While all the finger-pointing was going on about doctoring the Georgie Purcell image, Australia's Productivity Commission had its own—and possibly more meaningful—contribution to the debate.
While many public sector regulators worldwide are calling for restrictions on AI, the Productivity Commission's view is that AI should not be overly regulated because to do so would limit its applications and the benefits it can deliver.
Urging the Government not to implement "unnecessary and confusing" new regulations, the Commission says that, where possible, existing regulatory frameworks should be sufficient to deal with the advent of AI.
Commissioner Stephen King said that new rules should only be introduced "if the current rules and regulations are clearly not fit for purpose."
“Nothing is worse than passing technology-specific regulation and finding it’s obsolete within five years,” Mr King said.
“We’ve got this discourse that AI is big and scary, and the robots are coming. No, they are not. It’s not big and scary. It’s already here.”
Ethical transgression
At first glance, this might not seem to sit well with the policies of the current Australian Government, which is considering what it calls “mandatory safeguards” for AI systems in high-risk areas.
Delve deeper, though, and this connects to what the Productivity Commission identifies as a significant impediment to AI adoption: a lack of public trust.
This brings us back to the doctored image in the Nine News broadcast. Regardless of who was responsible, the outcome was a clear ethical transgression that can only undermine trust in AI, and whatever the news director said, the buck stops with the humans.
Another pertinent point in the Productivity Commission report concerned data. While advocating "technology neutral" regulation of AI, the report argues that the Government is responsible for establishing "clear and functional mechanisms for data collection, curation, sharing, and use."
Productivity gains, it said, were dependent on raising data quality without “undermining the incentives of data holders or increasing risks to individuals”.
Through all the noise about AI regulation, the Productivity Commission does suggest a way forward that doesn't hobble AI and bog it down in new regulatory frameworks.
If we get the data piece right, heavy new regulation of AI applications won't be as necessary.
The technology can then be applied and evolve without being tied to rules that might soon be outdated and unsuitable.
And while we’re at it, let’s stop blaming the robots.
We've created them, so we have to take responsibility for them. Shirking that responsibility is a sure way to undermine trust and stall the productivity gains that AI can deliver, if only humans can get it right.
Image credit: iStockphoto/BrianAJackson
Lachlan Colquhoun
Lachlan Colquhoun is the Australia and New Zealand correspondent for CDOTrends and the NextGenConnectivity editor. He remains fascinated with how businesses reinvent themselves through digital technology to solve existing issues and change their business models.