“It was very, very hard to find this person,” Widjaja told the Gartner Analytics Conference in Sydney in late February.
“I went to the legal school, and then to the computing school, and then I did find her—and yes a female—who was an engineer by training. But it wasn’t easy. So we talk about Industry 4.0 jobs, well here is one.”
Just as the business world made way for the chief digital officer, so in 2020, with the rise of AI and growing qualms about its use, the AI ethicist could be one of the hottest new jobs.
And as Widjaja explained, as companies roll out AI in all of its forms, ethics and bias are increasingly hard to ignore.
A question of trust
The autonomous nature of AI systems brings issues of ethics and values into focus. Since they are topics which do not traditionally sit comfortably with technology and commercial teams, these issues require new paradigms of leadership to navigate them.
Widjaja quoted a new global study on consumers' attitudes toward AI, which revealed that over 60% of participants are uncomfortable with businesses using AI applications to interact with them.
Over half of these respondents felt that AI makes biased decisions and that these biases often originate from the technology designers.
It is not only consumers and other external stakeholders who demand ethical AI; it is often a business imperative too.
A 2018 Deloitte survey found that 32% of AI-aware executives ranked the ethical risks of AI as one of their top three concerns with the technology.
The stakes are high: 88% of companies surveyed in the same research planned to increase spending on cognitive technologies, and 54% said that increase would be by 10% or more.
The bias conundrum
One ready example of the problems of bias is in recruitment. Many organizations have been failed by their recruitment processes because of bias, such as the "old school tie" leading them to hire well-connected but unsuitable people.
Eliminating this bias reduces the risk of hiring a dud; someone from a more diverse background could do a much better job and make a greater positive contribution.
AI ethicists need to be on guard against this, and much more, as they make their way into organizations.
According to Widjaja, many of the issues AI ethicists will face will lie not with the technology but with the culture they work in.
“The average age of an AI ethicist is in their early 30s, and they will present to vice presidents of their organizations who are over 50 years old,” he said.
“They will have to say ‘that is wrong, and this is wrong.’ You can imagine the conversations. So the bottleneck is process and technology, but also culture: are we ready for these types of conversations?”
This illustrates that AI ethics cannot be bought “off the shelf.”
“It comes from you, as an expression of your organization's values and leadership,” said Widjaja.
The new muscle
The Singapore-based data analyst also talked about removing bias from news feeds, and countering fake news and “echo chambers.” He presented an example of a feed in which issues were presented from multiple viewpoints so that no one idea was dominant.
“We had the technology to do this five years ago, and natural language processing can do this very easily today,” he said.
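The balanced feed Widjaja described could be sketched roughly as follows. This is a minimal illustration, not his implementation: it assumes each article has already been labeled with a stance (in practice an NLP stance classifier would supply these labels), and it simply interleaves articles round-robin across stances so no single viewpoint dominates the top of the feed. The `balance_feed` function and the article fields are hypothetical names for illustration.

```python
from collections import defaultdict
from itertools import chain, zip_longest

def balance_feed(articles):
    """Interleave articles across stances so no viewpoint dominates.

    `articles` is a list of dicts with hypothetical keys "title" and
    "stance". Real systems would derive the stance label with an NLP
    classifier rather than receive it pre-labeled.
    """
    # Group articles by their stance label, preserving arrival order.
    by_stance = defaultdict(list)
    for article in articles:
        by_stance[article["stance"]].append(article)

    # Round-robin merge: take one article from each stance in turn,
    # padding shorter groups with None, then drop the padding.
    interleaved = zip_longest(*by_stance.values())
    return [a for a in chain.from_iterable(interleaved) if a is not None]

feed = balance_feed([
    {"title": "A1", "stance": "pro"},
    {"title": "A2", "stance": "pro"},
    {"title": "B1", "stance": "con"},
    {"title": "C1", "stance": "neutral"},
])
print([a["title"] for a in feed])  # → ['A1', 'B1', 'C1', 'A2']
```

The round-robin merge is the simplest possible balancing policy; a production feed would also weigh recency, source quality, and classifier confidence.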
Another issue for the new role is that many startups are concerned they would “get destroyed” by onerous compliance costs around ethics.
It is important that these costs do not hobble startups at birth; open-source solutions are available that are relevant for smaller enterprises.
All enterprises would need to factor in ethical approaches to AI as the technology rolls out.
“We are talking about value being created by data-driven decisions,” said Widjaja.
“Embedding AI ethics is very much a new muscle for many of us, but it is a muscle which will become more and more important.”
Photo credit: iStockphoto/marekuliasz