Under Prime Minister Jacinda Ardern, New Zealand has become a world leader in several areas, from fighting the Covid-19 pandemic to moving away from traditional measures of growth to understand the health of the economy.
In recent weeks, NZ has taken a step toward digital leadership by launching an Algorithm Charter for use by Government agencies, an ethics-based governance framework which aims to give people confidence that their data is being used safely and effectively across Government.
This is a country which has both a government chief data steward and a national chief digital officer, and where academics at Auckland University are creating robots with the sensitivity of a human hand (so they can do things like pick fruit without bruising it).
Not immune to bias
Ethics and the removal of bias are among the hottest topics in the world of artificial intelligence. Everything from recruitment decisions to predictions of whether prison inmates will re-offend has been found to carry inherent biases that skew results, further entrenching discrimination and inequality.
New Zealand, for all of its brand purity, has not been immune to this. A 2019 review of algorithm use in the public sector found a “huge variability as to the extent of the use and how they were being used,” according to James Shaw, the statistics minister who announced the new code.
The most public example of this was in the immigration area, where Immigration NZ piloted an algorithm which sought to determine which overstayers should be fast-tracked for deportation.
This was halted amid controversy over how ethnicity or ‘country of origin’ was used as an input.
In 2017, the NZ accident compensation scheme was also criticized for the way it used algorithms to detect fraud in insurance claims.
This is similar to other examples around the world. In Florida, for example, correctional authorities used an algorithm designed to help identify likely re-offenders, and it turned out to be twice as likely to label black prisoners as future re-offenders simply on the basis of race.
In the Netherlands this year, a court ruled that an automated surveillance system used to detect welfare fraud was unlawful.
In New Zealand, the controversies kicked off a review of algorithm use in the public sector, which reported to Parliament and led to the Charter just announced.
Understanding the Charter
The Charter is not without its critics, but it is an admirable first step which could — and perhaps should — be followed by other governments throughout the world.
At the outset, it puts humans at the top of the chain of responsibility, which might seem obvious but is essential.
Each organization — and 21 Government agencies have already signed up — is required to have a designated human point of contact for enquiries from the public about algorithms. That is a radical improvement on the situation in neighboring Australia, where the Government introduced its “robodebt” system for social security and then effectively abdicated responsibility for it.
Positive features include a commitment to the peer review of algorithms to audit for unintended consequences, and a commitment to “maintain transparency by clearly explaining how decisions are informed.”
Because algorithms are complex and differ in each application, the Charter's parameters are inevitably vague. It does, however, acknowledge that every algorithm has limitations and biases. In other words, the machine is not a god; it is a human tool.
Not all algorithms are subject to the Charter — only those identified through a risk assessment as potentially containing bias.
“Where algorithms are being employed by Government agencies in a way that can significantly impact on the wellbeing of people, or there is a high likelihood many people will suffer an unintended adverse impact, it is appropriate to apply the Charter,” the Charter says.
That is a lovely motherhood statement which says that the NZ Government doesn't want to discriminate. But the implication is that the onus falls on people to complain if they feel they are being discriminated against, rather than on the algorithms themselves being built to a rigorous set of parameters.
Even so, this is a positive first step. If other Governments follow suit, it could impact the vendor community, for which Government is a major client.
Operationalizing the Charter
If a vendor has to get its algorithms through a (hopefully more rigorous) ethics-based assessment in order to sell to Government, it is likely that some of that approach will find its way into products, and will also raise expectations among consumers and corporates.
It is perhaps unsurprising that Microsoft has weighed in, congratulating NZ on the move while also seeking to insert itself into the process.
In a blog post, Microsoft's government affairs lead for NZ urged the development of practical implementation guidelines and offered the company's support and expertise.
Microsoft made submissions on earlier drafts of the Charter, and suggests that its own Responsible AI Resource Centre can help to “operationalize” the Charter.
“As a next step, we’d encourage the NZ Government to consider developing practical implementation guidelines, including sharing examples of Government projects that have piloted the principles of the Charter,” wrote Microsoft’s Maciej Surowiec.
The comments show how much more needs to be done to embed ethical AI, but they also show how much further NZ has come than most countries.
Just as NZ has been held up as a model for controlling the COVID-19 pandemic, so it might be seen as a model for responsible data management, with Government once again taking the lead.
Photo credit: iStockphoto/Mi-chi Huang