The Robodebt Scandal: When Data-Driven Decisions Go Wrong
- By Lachlan Colquhoun
- July 10, 2023
Last week, a Royal Commission in Australia strongly criticized government ministers and public servants over the country’s so-called “robodebt” scandal. But the final report also lifted the lid on the shocking data governance of government departments and raised questions on the use of automation.
To recap, robodebt is the term for an Australian Government scheme implemented in 2015, which raised AUD1.7 billion for the Government.
Designed to claw money back from people suspected of abusing Australia’s social security system, the program took annual income data from the Australian Taxation Office.
It then averaged that annual income across fortnightly reporting periods to identify recipients who may have misreported their income and therefore claimed too much in benefits.
Ultimately, income averaging was proved to be not only inaccurate but illegal, and the Government was forced to pay back the money it raised from around 400,000 people, with some compensation.
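The arithmetic flaw in income averaging is easy to demonstrate. The sketch below is a hypothetical illustration, with invented figures, of how spreading an annual income evenly over 26 fortnights manufactures a discrepancy for someone who worked for part of the year and truthfully reported zero income while on benefits:

```python
# Hypothetical illustration of the income-averaging flaw.
# All figures are invented; the real scheme compared ATO annual
# income, averaged across fortnights, against the fortnightly
# earnings recipients reported to the DHS.

FORTNIGHTS = 26

# Someone who worked the first half of the year, then earned
# nothing while receiving benefits in the second half.
actual_fortnightly_income = [2000] * 13 + [0] * 13
ato_annual_income = sum(actual_fortnightly_income)  # 26000

# Robodebt-style averaging spreads the annual figure evenly.
averaged = ato_annual_income / FORTNIGHTS  # 1000.0 per fortnight

# During the benefit fortnights the person truthfully reported
# zero income, yet the averaged figure implies under-reporting of
# 1000.0 in every one of them: a spurious "discrepancy".
spurious = [averaged - reported
            for reported in actual_fortnightly_income[13:]]
assert spurious == [1000.0] * 13
```

Although this person reported every dollar correctly, the comparison flags 13 fortnights of apparent under-reporting, which is exactly the kind of false debt the scheme raised.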
The scheme created significant stress for many wrongly targeted people and, in several cases, has been blamed for pushing people to suicide.
Dodgy data matching
The Royal Commission’s final report has a long chapter on the automation and data matching process, which was at the heart of the scheme, operated by the Australian Taxation Office (ATO) and the Department of Human Services (DHS).
The basic principle was to take data from the ATO, where employers report annual income, and from earnings that welfare recipients had reported to the DHS.
The ATO matched DHS recipients against its payroll and tax data. After verifying the identities, the matched records were returned to the DHS, which looked for discrepancies in income.
The original database was huge and involved a one-off transfer of five years of ATO data covering 2011-2015.
That might sound clean enough, but problems soon emerged. The matching was not always accurate; changes the DHS made to its database were not reflected in the ATO system; and, as the Royal Commission found, the DHS was not always compliant with the data handling protocols established for the Commonwealth Government.
“In late 2016, following a spike in media stories and complaints about the Scheme, the ATO started asking for information about how DHS was handling the data that the ATO had disclosed to it under the Scheme,” the Royal Commission report says.
The tax authorities asked for confirmation that the data was being handled appropriately, but that confirmation was never provided.
The Commission commented that “open and transparent communication between Commonwealth entities engaging in data-matching programs is necessary to ensure that each participating entity understands, and undertakes proper scrutiny and evaluation of, the legal and administrative framework.”
This did not occur, and proper governance, controls, and risk management measures were lacking. The DHS also did not comply with protocols for destroying irrelevant or used data.
Despite this, the former Australian Government hailed the scheme as a technological triumph.
Alan Tudge, the Minister for Human Services, issued a media release on 23 November 2016 titled “New technology helps raise [AUD]4.5 million in welfare debts a day.”
The release praised a “new online system” that “is now initiating 20,000 compliance interventions a week—a jump from 20,000 a year… this is a great example of the Government using technology to strengthen our compliance activities with faster and more effective review systems.”
Rigid automation
Under the hood, however, not all was well. Whistleblowers’ concerns about the legality of the scheme were being ignored, and the Government conducted a campaign with friendly media to discredit complaints.
The Government had also hoped to use artificial intelligence to analyze the data and called in Data61, the data science arm of Australia’s national science agency, the CSIRO. Data61’s advice, however, was that “there are some risks that such a methodology may not be possible given the data quality.”
After ruling out AI, the scheme turned to automation. The model involved a system of business rules with “no ability to move outside of specific and defined action on the basis of the data received.”
“It was extremely rigid,” the report says.
“Once the rules had been coded and set in place, the system itself would stay in place until the rules were changed by way of human intervention.”
Having created the process, the DHS and the Government refused to consider that it could be making errors. Welfare recipients were simply informed that they owed a certain amount of money and were given no insight into how the amounts were calculated.
“This meant that debts being raised on incorrect or incorrectly applied data were issued with no review,” the report said.
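The rigidity the report describes can be sketched in a few lines. The code below is a hypothetical reconstruction, not the scheme’s actual rules: the first function acts on every positive discrepancy with no review path, while the second shows the kind of human-in-the-loop safeguard the report found missing.

```python
# Hypothetical sketch of a rigid business rule: once coded, every
# positive discrepancy automatically triggers the same action.

def raise_debt_rigid(averaged_income: float, reported_income: float) -> dict:
    """Auto-raise a debt for any positive discrepancy, with no review."""
    discrepancy = averaged_income - reported_income
    if discrepancy > 0:
        return {"action": "raise_debt", "amount": discrepancy}
    return {"action": "no_action", "amount": 0.0}

def raise_debt_reviewed(averaged_income: float, reported_income: float) -> dict:
    """Route any discrepancy to a human reviewer instead of acting on it."""
    discrepancy = averaged_income - reported_income
    if discrepancy > 0:
        return {"action": "flag_for_review", "amount": discrepancy}
    return {"action": "no_action", "amount": 0.0}
```

The rigid rule converts every data error directly into a debt notice; the reviewed variant at least inserts the human check whose absence the Royal Commission identified as a key source of harm.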
Public policy failure
These data handling and processing issues, combined with political and bureaucratic decisions, created what has been called the biggest failure of public policy in Australian history, with data at its core.
“The automation used in the Scheme at its outset, removing the human element, was a key factor in the harm it did,” the report says.
“The Scheme serves as an example of what can go wrong when adequate care and skill are not employed in the design of a project, where frameworks for design are missing or not followed, where concerns are suppressed,” it added.
“People should know how decisions are made, periodic independent audits should supplement the accountability of decision making, and safeguards ought to be entrenched in the architecture of decision making. The use of algorithms needs to be consistent with these principles and the rule of law.”
The report does say that it is not all “doom and gloom.”
“When done well, AI and automation can enable government to provide services in a way that is ‘faster, cheaper, quicker and more accessible,’” it says.
“Automated systems can provide improved consistency, accuracy and transparency of administrative decision-making. The concept of ‘when done well’ is what government must grapple with as increasingly powerful technology becomes more ubiquitous.”
Perhaps the most fitting footnote is a quote that the Royal Commission itself uses as a warning on the use of digital technology in government.
In late 2019, Philip Alston, the UN’s Special Rapporteur on Extreme Poverty and Human Rights, made a comment which applies not just to the robodebt scandal but to the broader digitization of government.
“The digital welfare state is either already a reality or emerging in many countries across the globe. In these states, systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish,” Alston said.
“There are irresistible attractions for Governments to move in this direction, which must be balanced against the grave risk of stumbling, zombie-like, into a digital welfare dystopia.”
Contemplating that quote alongside the Royal Commission’s report of more than 600 pages, we can’t say we weren’t warned.
Lachlan Colquhoun is the Australia and New Zealand correspondent for CDOTrends and the NextGenConnectivity editor. He remains fascinated with how businesses reinvent themselves through digital technology to solve existing issues and change their entire business models. You can reach him at [email protected].
Image credit: iStockphoto/Nazan Akpolat