Can AI Ever Be Truly Free From Bias?

This article was first published in The Hindu Business Line on December 01, 2022. The views are those of the individual authors.

In June, the Ministry of Electronics and Information Technology (MeitY) published the Draft National Data Governance Framework Policy, which aims to enhance the access, quality and use of non-personal data in line with ‘the current emerging technology needs of the decade.’ This is another step in a worldwide push to bring machine learning and AI models into the sphere of governance.

While India is still weighing the legislative and regulatory safeguards needed to adopt AI systems in governance, many countries have already begun deploying them. In January 2021, the Dutch government resigned en masse in response to a child welfare fraud scandal involving the misuse of benefit schemes. An algorithm used to detect fraud mistakenly flagged applications, leading the Dutch tax authorities to accuse almost 26,000 parents of fraudulently claiming child allowance over several years. Families were forced to repay tens of thousands of euros and given no means of redress. Tax authorities also admitted that many people were subjected to extra scrutiny because of their dual nationality, which disproportionately targeted ethnic minorities in the country.

The Dutch tax authorities used a ‘self-learning’ algorithm to assess benefit claims and classify them according to their risk of fraud. Applications the algorithm deemed higher risk were forwarded to an official for manual scrutiny, with no explanation of why they had been flagged.

AI Bias

What complicates the situation further is that it is difficult to pin down a single factor that caused the ‘self-learning’ algorithm to arrive at its biased output, owing to the ‘black box’ effect: the lack of transparency about how an AI system makes its decisions. The skewed output delivered by the system is an example of AI bias.

AI bias occurs when there is a systematic anomaly in the output produced by a machine learning algorithm. It may be caused by prejudiced assumptions made during the algorithm’s development or by prejudices in the training data. The process of creating a machine learning algorithm rests on the concept of ‘training’: the system is exposed to vast amounts of data, which it uses as a sample from which to learn how to make judgments or predictions. AI systems are heavily dependent on accurate, clean and well-labelled training data to produce accurate and functional results. Mistakes in the input, such as poor-quality data or biases introduced by the labellers themselves, can therefore give rise to bias in an AI system.
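To make this concrete, consider a minimal sketch in Python, using synthetic data and the scikit-learn library. The variable names are invented for illustration and do not describe the Dutch system or any real one; the point is only to show how labels inherited from prejudiced human decisions teach a model the same prejudice:

```python
# A hypothetical sketch (not the Dutch system) of how biased training
# labels carry through into a model's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applications: one legitimate risk signal and one protected trait.
risk_signal = rng.normal(size=n)               # genuine fraud indicator
dual_nationality = rng.integers(0, 2, size=n)  # protected attribute (invented)

# True fraud depends only on the risk signal...
true_fraud = (risk_signal + rng.normal(scale=0.5, size=n)) > 1.5

# ...but historical labellers over-flagged dual nationals, so the
# *training labels* are biased even though the underlying reality is not.
labels = true_fraud | ((dual_nationality == 1) & (rng.random(n) < 0.10))

X = np.column_stack([risk_signal, dual_nationality])
model = LogisticRegression().fit(X, labels)

# The model learns the labellers' prejudice: a positive weight on the
# protected attribute, i.e. dual nationals score as higher fraud risk.
print(dict(zip(["risk_signal", "dual_nationality"], model.coef_[0])))
```

Nothing in the code instructs the model to discriminate; the bias enters entirely through the labels it learns from.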

Some of the more common approaches to this problem centre on inclusivity, both in data collection and in the design of the system. There have also been calls for greater transparency and explainability, which would allow people to understand how AI systems make their decisions.

One possible mechanism to address the problem of bias is the ‘blind taste test’. It checks whether the results produced by an AI system depend on a specific variable such as sex, race, economic status or sexual orientation: the algorithm is run twice, first with the variable, such as race, and then without it, and the two sets of outputs are compared (a rough sketch follows below). Despite such efforts, it may be impossible to fully eradicate bias in AI systems, because the biases of developers and engineers get reflected in the systems they build. The effects of these biases can be devastating, depending on the context and the scale at which the systems are deployed.
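Here is a hedged illustration of that test, again with synthetic data and invented names; it is a sketch of the general idea, not of any deployed system. The same classifier is trained twice, once with the protected attribute and once without it, and the flagged cases are compared:

```python
# A rough sketch of the 'blind taste test': train the same classifier
# twice, with and without a protected attribute, and compare the flags.
# All data and variable names here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
risk_signal = rng.normal(size=n)          # legitimate predictor
protected = rng.integers(0, 2, size=n)    # e.g. race, sex, nationality
# Biased historical labels: the protected group was over-flagged.
labels = (risk_signal > 1.2) | ((protected == 1) & (rng.random(n) < 0.10))

X_full = np.column_stack([risk_signal, protected])
X_blind = risk_signal.reshape(-1, 1)      # protected attribute withheld

flags_full = LogisticRegression().fit(X_full, labels).predict(X_full)
flags_blind = LogisticRegression().fit(X_blind, labels).predict(X_blind)

# Cases flagged only when the model can 'see' the protected attribute
# reveal that its output depends on that variable.
divergent = flags_full & ~flags_blind
print(f"{divergent.sum()} cases flagged only with the protected attribute, "
      f"{(divergent & (protected == 1)).sum()} of them in the protected group")
```

If the two versions flag materially different sets of people, the system’s output depends on the protected variable, which is precisely what the test is designed to reveal.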

In the interim, regulators and states must step up to carefully scrutinise, regulate or, in some cases, halt the use of AI systems that provide essential services to people. As countries including India continue to develop their regulatory and governance frameworks for AI, they may have to consider whether certain applications of AI should be banned entirely, including their use in delivering welfare schemes.