Digital India Act Becomes Critical for Addressing AI Bias
- By K Yatish Rajawat and Dev Chandrasekhar
- December 04, 2023
On November 1, 2023, the first day of the ‘AI Safety Summit 2023’ in the U.K., Rajeev Chandrasekhar, the Indian Union Minister of State for Skill Development & Entrepreneurship and Electronics & IT, addressed the inaugural plenary session.
He emphasized that India's digital transformation has ushered in tremendous opportunities and will continue to do so through AI. He also reiterated India's commitment to AI, strongly focusing on safety, trust, and accountability.
Machine bias is the prejudice encoded in algorithms and software. AI systems are trained on massive amounts of data, which can include historical data that reflects the biases of its time.
Machine bias occurs when the data used to train AI systems is biased, or when the algorithms themselves are designed in ways that perpetuate existing biases. It imposes costs on society and on individual lives, for example by denying people loans, jobs, and even bail. In some cases, machine bias has even led to deaths.
Machine bias "learns" to uphold the status quo and replicate oppressive systems—they very often fail to design for disability and foster inclusive cultures; as medical diagnostic tools, they could lead to inaccurate diagnoses and inappropriate treatments.
As with any complex problem, AI bias has no simple solution, and "self"-regulation will not work. The Digital India Act is now the only legislation that can in some way assert control over AI. The Act has been in the works for many years; it should clearly define the role and usage of responsible AI through a multi-pronged approach.
One proposed way to do this is to get organizations, beginning with large private corporations and PSUs, to embed into their business processes the review of algorithms before deployment and regular monitoring for bias once they are in use.
Algorithmic bias should be disclosed wherever AI is used in decision-making. If, for example, an organization's HR department uses AI to select candidates, legislation needs to insist on third-party-certified disclosure and transparency about the AI algorithm, the training data it was built on, and its statistical error rates.
Debiasing algorithms, developed with help from the Indian Statistical Institute, should be in place to find and eliminate bias from AI systems. One kind of debiasing method, for instance, modifies the weights of various features in an AI model to lessen bias against particular groups.
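As an illustration of what such a mandate could look like in practice, the sketch below uses sample reweighing, a standard pre-processing technique related to the weight-adjustment methods mentioned above; the column names, the hiring scenario, and the model choice are assumptions for illustration, not anything prescribed by the Act.

```python
# A minimal reweighing sketch, assuming a pandas DataFrame `df` with a binary
# protected attribute column "group", a binary outcome column "hired", and
# numeric feature columns (all hypothetical names).
import pandas as pd
from sklearn.linear_model import LogisticRegression

def reweighing_weights(df, protected="group", label="hired"):
    """Weight each (group, outcome) cell so that the protected attribute and
    the outcome become statistically independent in the reweighted data."""
    n = len(df)
    weights = pd.Series(1.0, index=df.index)
    for g in df[protected].unique():
        for y in df[label].unique():
            cell = (df[protected] == g) & (df[label] == y)
            observed = cell.sum() / n                           # P(group=g, label=y) as observed
            expected = (df[protected] == g).mean() * (df[label] == y).mean()
            if observed > 0:
                weights[cell] = expected / observed             # up- or down-weight the cell
    return weights

def train_with_reweighing(df, feature_cols, protected="group", label="hired"):
    """Fit a classifier using the debiasing sample weights."""
    weights = reweighing_weights(df, protected, label)
    model = LogisticRegression(max_iter=1000)
    model.fit(df[feature_cols], df[label], sample_weight=weights)
    return model
```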
Organizations using AI systems should be mandated to conduct routine bias audits of them, and these audits should be part of regulatory disclosure processes to ensure they happen. Testing the system with a range of inputs and looking for patterns in its outputs are two ways to accomplish this.
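As a sketch of what such an audit could compute, the snippet below measures a model's positive-outcome rate for each group on audit data and the ratio between the lowest and highest rates; the column names and the four-fifths threshold in the usage note are illustrative assumptions, not requirements drawn from the Act.

```python
# A minimal bias-audit sketch, assuming a fitted classifier `model` and an
# audit DataFrame `audit_df` containing the model's feature columns plus a
# protected attribute column "group" (all hypothetical names).
import pandas as pd

def audit_selection_rates(model, audit_df, feature_cols, protected="group"):
    """Return the positive-prediction rate per group and the ratio between the
    lowest and highest rates (the disparate-impact ratio)."""
    preds = pd.Series(model.predict(audit_df[feature_cols]), index=audit_df.index)
    rates = preds.groupby(audit_df[protected]).mean()
    ratio = rates.min() / rates.max()
    return rates, ratio

# Usage: flag the system for human review if the ratio falls below the
# commonly cited four-fifths (0.8) threshold.
# rates, ratio = audit_selection_rates(model, audit_df, ["age", "experience"])
# if ratio < 0.8:
#     print("Potential adverse impact; per-group selection rates:", rates.to_dict())
```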
Protocols should be mandated so that leadership, management, and employees know how to spot and steer clear of bias in AI systems. Training should cover both the technical and the moral implications of AI bias. If bias is detected, or if people are affected by a decision taken by an AI-based system, protocols should be in place to correct it and to complain against it. This will again need a legislative mandate for appeal and correction measures. There are other ways to address it as well: accounting practices, for instance, can insist that if AI has been used in the preparation of accounts, it should be disclosed.
Good governance practices should incorporate transparency in AI disclosures; not everything can be achieved through legislation alone. Good practices are as important for responsible AI as anything else.
Companies and governments should be held accountable for the algorithms they use and should ensure that these algorithms are fair and unbiased. Accountability also means creating a culture of transparency around the development and use of algorithms. That is the way to ensure that, as Chandrasekhar said, AI is "… utilized only for the good, only for the progress and prosperity of all our citizens across all countries."
This end goal envisaged by the minister will happen only if good governance practices are followed by all corporates and a strong nudge is provided in the Digital India Act.
K Yatish Rajawat and Dev Chandrasekhar
K Yatish Rajawat and Dev Chandrasekhar are researchers with the Center for Innovation in Public Policy in India.