Digital India Act should address AI bias and enforce algorithmic accountability
Dev Chandrasekhar
Sanas, a tech startup based in the United States, has developed a product that can modulate an individual’s accent. As per the demo on its website, the software is specifically designed to cater to speakers from India and the Philippines. When I tried the feature with my Indian accent, I noted that while my speech had a slightly synthetic sound, it carried a distinct resonance of a white American dialect.
The goal of Sanas is to have call center representatives globally sound primarily white and American, citing improved customer service and increased accessibility to American clientele as its justification.
However, an inherent bias is apparent in Sanas’ software. It ostensibly reinforces the superiority of a white American accent over other accents.
Machine bias is the prejudice encoded in algorithms and software. AI systems are often trained on massive amounts of data, which frequently includes historical data that reflects the biases of its time. When the data used to train AI systems is biased, or when the algorithms themselves are designed in ways that perpetuate existing biases, the results impose costs on society and on lives: by denying people loans, jobs, and even bail, for example. In some cases, machine bias has even led to deaths.
Biased systems “learn” to uphold the status quo and replicate oppressive structures: they very often fail to design for disability or foster inclusive cultures, and as medical diagnostic tools, they could lead to inaccurate diagnoses and inappropriate treatments.
In early November, Rajeev Chandrasekhar, Union Minister of State for Skill Development & Entrepreneurship and Electronics & IT, addressed the inaugural plenary session on the first day of the ‘AI Safety Summit 2023’ in the UK. Among other points, he emphasized that India’s digital transformation has ushered in tremendous opportunities and will continue to do so through AI. He also reiterated India’s commitment to AI with a strong focus on safety, trust, and accountability.
As with any complex problem, AI bias has no simple solution, and “self”-regulation will not work. The Digital India Act is now the only legislation that can assert some control over AI. The Act has been in the works for many years; it should clearly define the role and usage of responsible AI using a multi-pronged approach.
One proposed way to do this is to get organizations (to begin with, large private corporations and PSUs) to embed into their business processes the review of algorithms before they are deployed, followed by regular monitoring for bias once they are in use.
Algorithmic bias should be disclosed wherever AI is being used in decision making. If, for example, AI is being used by the HR department of an organization for selecting candidates, legislation needs to insist on third-party-certified disclosure and transparency about the AI algorithm, about how the “training” data was created, and about the statistical errors involved.
Debiasing algorithms, developed with help from the likes of the Indian Statistical Institute, should be in place to find and eliminate bias from AI systems. One kind of debiasing method, for instance, modifies the weights of various features in an AI model to lessen bias against particular groups. Mandated bias audits of AI systems should be conducted routinely. Finding patterns in the system’s outputs and testing it with a range of inputs are two ways to accomplish this.
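To make the audit idea concrete, here is a minimal sketch of one such check: probing a system’s decisions and comparing selection rates across groups. This is purely illustrative; the function names, the sample data, and the 0.2 threshold are my own hypothetical choices, not anything mandated by any regulator.

```python
# Illustrative bias audit sketch: compare an AI system's positive-decision
# rates across groups (a "demographic parity" style check). All names,
# data, and thresholds here are hypothetical assumptions.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'shortlisted') decisions in a list of 0/1s."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups.

    decisions_by_group maps a group label to the list of 0/1 decisions
    the AI system produced for members of that group.
    """
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical outcomes from an HR screening model, keyed by group.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 selected -> rate 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2 of 8 selected -> rate 0.25
}

gap = demographic_parity_gap(outcomes)
print(f"demographic parity gap: {gap:.2f}")  # prints 0.50
if gap > 0.2:  # illustrative threshold; a real audit would have to justify it
    print("flag for review: selection rates differ sharply across groups")
```

A real audit would use many more inputs, several fairness metrics rather than one, and statistically grounded thresholds, but the basic mechanism (feed the system a range of inputs, look for patterns in its outputs) is the one described above.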
Protocols should be mandated so that leadership, management, and employees know how to spot and steer clear of bias in AI systems. The technical as well as moral implications of AI bias should be included in this training. If bias is detected or if people are affected by bias due to a decision that has been taken by an AI-based syst…