We can intuitively recognise whether an action is ethical or not. Let us look at the theoretical basis of understanding ethics with an example. A cigarette company wants to decide on launching a new product, whose primary feature is reduced tar. It plans to tell customers that the lower tar content is a ‘healthier’ option. This is only half true. In reality, a smoker may have to inhale more frequently from a cigarette with lower tar to get the flavour of a regular cigarette.
Let us analyse this from three dominant ethical perspectives:
First, the egoistic perspective states that we should take actions that result in the greatest good for ourselves. The cigarette company is likely to sell more cigarettes, assuming that the new product wins over more new customers. Hence, from an egoistic perspective, the company should launch the new cigarette. Second, the utilitarian perspective states that we should take actions that result in the greatest good for all. Launching the new cigarette is good for the company. The new brand of cigarette also provides a ‘healthier’ choice for smokers, and more choice is good for customers. Hence, the company should launch the product.
The egoistic and utilitarian perspectives together form the ‘teleological perspective’, where the focus is on the results that achieve the greatest good.
Third, the ‘deontological perspective’, on the other hand, focusses more on the intention of the decision-maker than on the results. The company deceives the customer when it says that the new cigarette is ‘healthier’. Knowingly endangering the health of humans is not an ethical intention. So, the company should not launch this cigarette.
In the context of Artificial Intelligence (AI), my hypothesis is that most commercially available AI systems are optimised using the teleological perspectives and not the deontological perspective. Let us analyse a facial recognition system, a showcase for AI’s success. An AI system introduced in 2015 with much fanfare in the U.S. failed to recognise faces of African Americans with the same accuracy as those of Caucasian Americans. Google, the creator of this AI system, quickly took remedial action. However, from a teleological perspective, this flawed AI system gets the go-ahead. According to the 2010 census, Caucasian Americans constitute 72.4% of the country’s population. So an AI system that identifies Caucasian American faces better is useful to a majority of Internet users in the U.S., and to Google.
However, from a deontological perspective, the system should have been rejected as its intention probably was not to identify people from all races, which would have been the most ethical aim to have. In fact, the question that comes to mind is — shouldn’t digital platform companies, whose markets span many countries, aim to identify faces of all races with an equal accuracy?
Social media is not the only context where AI facial recognition systems are used today. These systems are increasingly being used for law enforcement. Imagine the implications of being labelled a threat to public safety just because the data used to train the AI system under-represented people of one’s skin colour. Americans are taking note. Recent news reports suggest that San Francisco has banned the use of facial recognition by law enforcement.
The ethical basis of AI, for the most part, rests outside the algorithm. The bias is in the data used to train the algorithm. It stems from our own flawed historical and cultural perspectives — sometimes unconscious — which contaminate the data. It is also in the way we frame the social and economic problems that the AI algorithm tries to solve.
With the proliferation of AI, it is important for us to know the ethical basis of every AI system that we use or is used on us. An ethical basis resting on both teleological and deontological perspectives gives us more faith in a system. Sometimes, even an inclusive intention may need careful scrutiny. For instance, Polaroid’s ID-2 camera, introduced in the 1960s, provided quality photographs of people with darker skin. However, later, reports emerged that the company developed this for use in dompas, an identification document black South Africans were forced to carry during apartheid.
Understanding and discussing the ethical basis of AI is important for India. Reports suggest that the NITI Aayog is ready with a ₹7,500 crore plan to invest in building a national capability and infrastructure. The transformative capability of AI in India is huge, and must be rooted in an egalitarian ethical basis. Any institutional framework for AI should have a multidisciplinary and multi-stakeholder approach, and have an explicit focus on the ethical basis.
N. Dayasindhu is the co-founder and CEO of Itihaasa Research and Digital. Views are personal.