Organizations today have access to more data than ever before and are harnessing it for business insights through next-generation cloud-based tools and data warehouses. Armed with copious data and cutting-edge ML frameworks such as PyTorch and TensorFlow, more businesses than ever are also turning to AI.
This blurring of lines between data analytics and AI gives leading organizations the data-centric means to pull ahead of less savvy competitors, but also raises a moral dilemma in the form of AI ethics.
Increasingly, businesses are finding that AI is not a question of whether they have the know-how, or whether the data needed to train a better ML model is available – but whether and how they should use that data.
Why AI ethics matter
There is no question that AI has revolutionized our world. With its ability to parse through vast volumes of information, AI can find abnormalities and identify patterns far more quickly and consistently than humans ever could. The result is actionable insights that can improve efficiency and increase profitability.
Unfortunately, the output of even the best ML model is predicated entirely on its inputs. And depending on the quality of this data, AI brings along with it a set of risks around legal and ethical quandaries, bias, and unintended consequences. Chances are that most organizations only ever consider the legal aspects of AI, and only at a superficial level.
It is also worth pointing out that algorithms are subject to manipulation, just like human workers. As noted in “Tackling AI’s unintended consequences” by Bain & Company, where a worker is typically observed by management and makes relatively few decisions in the course of his or her day, an algorithm will make many decisions – often unseen and based on opaque considerations.
Businesses must hence tread carefully. Yet those tasked with developing and deploying AI systems might not always understand their potential to shift power and perpetuate existing inequalities. And to be fair, data scientists are hired primarily for their ability to utilize data, find insights, and implement AI – not to tackle ethical dilemmas.
We are failing at responsible AI
But is bias in AI decisions truly a concern? To underscore how gender bias in AI has already crept into our lives, a UNESCO blog on this topic suggests a couple of experiments. Type “greatest leader of all time” in your favorite search engine and observe the gender disparity there.
Next, perform an image search for “school girl” and “school boy” on your favorite search engine, and note the women and girls in all sorts of sexualized costumes appearing in the former, compared to ordinary young school boys in the latter.
This is only for a subset of results in a niche area; how else has bias crept into our algorithms and ML models?
As noted by Anja Kaspersen, the former head of geopolitics and international security at the World Economic Forum, we are failing at AI. In an article earlier this month, she observed how the ethics and governance of AI systems remain unclear despite a surge of attention to responsibly develop and use AI.
She recommended broadening the existing dialogues around the ethics and rules for AI to include the entire life cycle of AI systems, and not just during the initial development and deployment stages. In the same vein, she also called for “a much more inclusive cast of experts and stakeholders” to be brought on board to address the potential downstream consequences and limitations of AI.
What businesses can do
Workable ideas to mobilize governments and industries are certainly needed. However, is there anything individual businesses can do today? In a contributed opinion piece for InformationWeek, Jack Berkowitz, the chief data officer at ADP, offers a suggestion: create an AI and ethics board.
An AI and data ethics board is one way to ensure these principles are woven into product development and uses of internal data, according to Berkowitz.
Such a board can be established by bringing together an interdisciplinary team with input from IT, legal and compliance, security, privacy, and product – including members from outside the organization. A diverse and knowledgeable team can ensure more effective discussions around the implications of various use cases.
The board would ideally meet regularly to review projects at a fundamental level: Is the project in line with the organization’s values? Could it result in harm (or risk of harm) to users? And should the data even be shared or used in this way?
As AI ethics becomes an increasingly important issue for organizations of all sizes, it is high time businesses made these considerations a core part of their company culture.
Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].
Image credit: iStockphoto/StudioM1