ChatGPT: Please Handle With Care
- By Lachlan Colquhoun
- March 28, 2023
While organizing speeches for a friend's upcoming birthday, I was intrigued to learn that one of the speakers planned to recite a sonnet created by the artificial intelligence app ChatGPT.
The speaker intended to write down a few key sentences and instruct ChatGPT to rearrange them as an Elizabethan sonnet, which he would read at the event.
In this case, there was no intention of hiding that ChatGPT had been used. The speaker planned to be upfront about it, adding an entertaining dimension to the occasion.
Such transparency is not always the case. Reports from Australia describe a wave of scams using artificial intelligence to impersonate the voices of friends and family.
In one case, a woman lost AUD 11,000 in three transactions after following voicemail instructions from someone she thought was her daughter, who was traveling overseas. The message asked for an urgent transfer of funds, and the mother responded immediately, only to send the money to fraudsters who had fed audio samples into an AI program to create the fake messages.
Scary capabilities
The release of ChatGPT has been one of the phenomena of 2023 so far, taking AI into the consumer realm. Real estate agents, students, bloggers and even journalists have had their work transformed, and many organizations on the other side of the equation – such as schools and universities – are still struggling to respond.
However, among all the wonders of ChatGPT, there are dangers acknowledged even by OpenAI, the company that created the technology.
OpenAI chief executive Sam Altman made an unusual confession as the company released the latest version of the app last week.
“We’ve got to be careful here,” Altman told ABC News in the U.S.
“I think people should be happy that we are a little bit scared of this. I’m particularly worried that these models could be used for large-scale disinformation. Now that they’re getting better at writing computer code, they could be used for offensive cyber attacks.”
The new version of ChatGPT, GPT-4, is quite scary in its capabilities, given that the technology is still at a relatively early stage of development. It scored in the 90th percentile on the U.S. bar exam, achieved near-perfect marks on high school math exams, and can write program code in most languages.
“It waits for someone to give it an input,” Altman said. “This is a tool that is very much in human control.”
But he was concerned about which humans have control over that input, noting that some would flout established “safety limits.” “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it,” Altman added.
Pushing boundaries
Altman seems to want it both ways. He says that ChatGPT, and AI in general, could be “the greatest technology humanity has yet developed.” At the same time, he insists that regulators and wider society need to be involved in setting parameters to guard against potentially harmful consequences.
This tension is also present in the commercial world, where businesses see particular applications in deriving production and marketing insights and in driving process improvements and efficiencies. That is as it has always been with new technology.
Also last week, Japanese electric car developer Turing unveiled a driverless car with an AI that makes driving decisions on the road. Onboard cameras, sensors and dynamic maps gather information, which the AI processes to operate the steering wheel and drive the car on public roads.
Meanwhile, AI is being deployed at an unprecedented pace on the battlefield in Ukraine, pushing the boundaries as each side looks for an advantage. In this dangerous, escalating contest, with both sides desperate to win, ethical considerations are secondary.
Even so, the Responsible AI in the Military Domain (REAIM) summit – the first event of its kind – recently concluded in The Hague.
The two-day event highlighted opportunities and risks in military applications of AI and culminated with a Call to Action endorsed by 57 States and a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.
It sounds promising, but we should remember that the Nazis disregarded the Geneva Convention when it suited them during the invasion of the Soviet Union.
The REAIM initiative is the military equivalent of what OpenAI’s Sam Altman was asking for in terms of regulation. Having created the weapon, he wants others to determine how it should be used and limited.
This might seem somewhat disingenuous, given the consternation in academic circles about the danger of plagiarism from using ChatGPT and what that might mean for the quality of education.
People are already abdicating their responsibility to ChatGPT, and we are told that GPT-4 is an improvement on its predecessor. Human beings, it can be argued, are inherently lazy: if they can get an AI to do something for them, they will, and their moral ownership of what they produce will wither away, along with their ability to create for themselves.
GPT-4 still “hallucinates facts,” and users are warned that “great care should be taken when using language model outputs, particularly in high-stakes contexts.”
That warning will likely be taken as seriously as the terms and conditions we click through when signing up for software.
OpenAI’s Sam Altman is right to warn us about misuse of the product he has created. Deliberate misuse is one thing, but beyond that, we need to be careful that embracing artificial intelligence doesn’t make the bulk of humanity stupid.
Lachlan Colquhoun is the Australia and New Zealand correspondent for CDOTrends and the NextGenConnectivity editor. He remains fascinated with how businesses reinvent themselves through digital technology to solve existing issues and change their entire business models. You can reach him at [email protected].
Image credit: iStockphoto/May Lim