AI Goes on Trial
- By Lachlan Colquhoun
- February 06, 2023
There were humans on the witness stand, but it was the use of artificial intelligence in government that was on trial at Australia’s Royal Commission into the country’s robodebt scandal.
The inquiry is examining how the former conservative Coalition government, voted out in May last year, implemented an algorithmic program applied to welfare payments in an attempt to clamp down on welfare fraud.
Implemented in 2015 and running for four years, the scheme raised AUD 1.7 billion for the Government, all of which was repaid in compensation after the scheme was ultimately found to be flawed in its calculations and illegal.
Around 400,000 people were accused of misreporting their income, and in most cases, the accusations were wrong.
The program took annual income data from the Australian Taxation Office and averaged it evenly across the year’s fortnightly reporting periods, comparing the result with the income recipients had actually reported each fortnight. Any shortfall was treated as evidence that a recipient had misreported their annual income and claimed too much in benefits, a method that breaks down for anyone whose earnings varied across the year.
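To see why averaging produces false debts, here is a minimal sketch of that logic in Python. The function name, the flat benefit_reduction_rate and all the figures are illustrative assumptions for this sketch, not the scheme’s actual code.

```python
# A minimal sketch of the income-averaging flaw at the heart of robodebt.
# All names, rates and figures here are illustrative assumptions, not the
# actual government implementation.

FORTNIGHTS_PER_YEAR = 26

def naive_debt(annual_ato_income: float,
               reported_fortnightly: list[float],
               benefit_reduction_rate: float = 0.5) -> float:
    """Smear annual tax-office income evenly across 26 fortnights, then
    raise a 'debt' wherever that average exceeds what the recipient
    actually reported for a given fortnight."""
    averaged = annual_ato_income / FORTNIGHTS_PER_YEAR
    shortfall = sum(max(averaged - actual, 0.0)
                    for actual in reported_fortnightly)
    return shortfall * benefit_reduction_rate

# A recipient who earned AUD 2,000 a fortnight for half the year and
# nothing while on benefits for the other half, reporting honestly:
reported = [2000.0] * 13 + [0.0] * 13

# The AUD 1,000-per-fortnight average wrongly implies income during the
# 13 fortnights spent on benefits, producing a spurious "debt".
print(naive_debt(26_000.0, reported))  # 6500.0
```

The recipient in this sketch reported honestly, yet the averaged figure smears half a year’s wages across fortnights in which nothing was earned, so the comparison manufactures a debt out of thin air.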
At a time when governments around the world are implementing AI and algorithms in their data management and welfare systems, the robodebt scandal is a cautionary example to the government sector of what can go wrong when AI is used and humans avoid taking responsibility.
This is a global issue. In 2019, the U.N. warned against the emergence of the “digital welfare state” as governments sought to slash their welfare bills through increased technology-based surveillance.
The Dutch Government was forced to abandon its welfare fraud detection scheme in 2020 after a judge ruled that it violated human rights, targeted low-income families, and disproportionately singled out ethnic minorities. The algorithm’s bias, in effect, amounted to racial discrimination.
The Dutch verdict resonated in the U.K., where the government also uses automated systems in the welfare system. The chair of the House of Commons Work and Pensions Committee, Stephen Timms, warned that it demonstrated that “parliaments ought to look very closely at the ways in which governments use technology in the social security system.”
Not responsible
The Royal Commission has seen a procession of senior bureaucrats and former government ministers all point the finger of responsibility at someone else and admit they had no real knowledge of how the algorithm worked.
It was technology, someone else had approved it, and it wasn’t up to any human to question its accuracy, ethics or even legality once it was implemented.
This was despite several legal opinions warning of problems, and a sneaking suspicion among some in the bureaucracy that the program’s calculations were, in fact, wrong.
One former government lawyer told the Commission that it was a “known issue” that the calculations could be wrong. She accepted this was a “mathematical reality,” but that administrative decisions were “typically made on incomplete information.”
Critics of the Commission claim it is political payback by the new government. Still, the litany of excuses from public servants and government ministers has lent support to claims from current Government Services Minister Bill Shorten that it was a “shameful chapter in the history of public administration” and a “massive failure of policy and law.”
Beyond robodebt
Welfare is not the only area where governments are implementing AI. According to the Brookings Institution, 19 European countries had launched national AI strategies by the end of 2021.
Advocates say that AI can help improve the efficiency and quality of public service delivery in areas spanning education, health care and social protection.
The government of Togo, for example, has experimented with a pilot project that uses mobile phone metadata and satellite imagery to identify the households most in need.
In the U.S., public officials are using AI to identify children at risk, but even this work has its critics.
They are using a program called the Allegheny Family Screening Tool, designed to help overloaded social workers better understand which families should be investigated.
The Associated Press has run a series of articles questioning the program, saying it has the potential to widen racial disparities because Black families are more often singled out for mandatory investigation, building bias into the system.
Research from Carnegie Mellon University found that social workers disagreed with the AI-generated risk scores in around 30% of cases. While they could override the tool, many did not.
The same dynamic played out in Australia, and it is why the scandal was able to happen.
Once the system was installed, public servants considered the AI infallible and beyond challenge, so they persisted in implementing its decisions even though many knew they were wrong.
The result was catastrophic and even tragic for hundreds of thousands of Australia’s most vulnerable people.
Lives were upended, mental health deteriorated, and there are claims that some people took their own lives under the stress of receiving payment demands that ultimately proved incorrect.
The Royal Commission will judge what happened in the past, but it will also make recommendations for the future.
Policymakers in Australia and around the world should pay close attention. There is no doubt that AI is coming at scale to public administration, and governments must strike the right balance between automation and human accountability; if they don’t, the tragedy of robodebt is bound to be repeated elsewhere.
Lachlan Colquhoun is the Australia and New Zealand correspondent for CDOTrends and the NextGenConnectivity editor. He remains fascinated with how businesses reinvent themselves through digital technology to solve existing issues and change their entire business models. You can reach him at [email protected].
Image credit: iStockphoto/BCFC