How prejudiced Artificial Intelligence can get



Artificial Intelligence today touches every aspect of our lives. From our smartphones to our shopping habits to recruitment, there is a little bit of AI in everything.

Since the very backbone of AI is code, which is human-made, there is a high probability that biases slip into the system and make AI unfair in some sense. As the technology becomes increasingly popular in fields like medicine, engineering, law, and the sciences, the scope of Artificial Intelligence has grown beyond imagination, and today a lot of global industrial practices are AI-dependent.

It is still a grey area exactly what kinds of biases AI imbibes, but the systems have been shown to incorporate sexist and racial biases from human discrimination. AI learns and grows on its own, based on its code and inputs, and as it does, there is a tendency for the biases it has absorbed to be amplified.

AI tends to pick up prejudices, discrimination, and biases from various sources. Since it rapidly ingests books, social media, and other online content, underlying discrimination and other negative patterns can also be picked up and incorporated into the system.
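A minimal sketch of how this happens: many systems learn associations by counting how often words appear together in text. The corpus below is invented for illustration, but if the text a system reads pairs a profession with one gendered pronoun more often than the other, that skew becomes part of what the system "knows".

```python
from collections import Counter

# Invented toy corpus: "doctor" appears with "he" more often than "she".
corpus = [
    "he is a doctor", "he is a doctor", "she is a nurse",
    "she is a nurse", "he is an engineer", "she is a doctor",
]

# Count pronoun-profession co-occurrences across the corpus.
cooc = Counter()
for sentence in corpus:
    words = sentence.split()
    pronoun, role = words[0], words[-1]
    cooc[(pronoun, role)] += 1

# The learned association mirrors the skew in the text it was fed.
print(cooc[("he", "doctor")], cooc[("she", "doctor")])  # prints: 2 1
```

Real systems use far richer statistics than raw counts, but the principle is the same: the association was never programmed in, it was read in.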

It can be said that the bias is automated. The system does not 'realise' that it is discriminating, but it discriminates anyway. We have always looked at Artificial Intelligence as neutral and unbiased because its decisions are non-human, but it is imperative to understand that AI can in fact be biased through the data it is fed, and the assumption of neutrality is a dangerous one to make.

For example, an AI system used for recruitment learns from an organisation's historical hiring patterns. If the organisation has favoured hiring male employees over women, the system picks up on that pattern and can downgrade applications from women. Because AI systems are trained to find patterns and consistency in previous human decisions, the biases people hold almost always reflect in AI, sometimes at a much greater magnitude.
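The recruitment example above can be sketched in a few lines. This is a deliberately naive toy scorer on hypothetical CV data, not any real hiring system's method: it learns word weights from past hire/reject decisions, and because the historical decisions were skewed, a gender-correlated word ends up penalised even though gender itself is never an input.

```python
from collections import defaultdict

# Hypothetical historical decisions (1 = hired, 0 = rejected),
# skewed against CVs containing the word "womens".
historical = [
    ("experienced engineer captain chess club", 1),
    ("engineer lead project", 1),
    ("experienced engineer womens chess club", 0),
    ("womens college engineer", 0),
]

def train(examples):
    """Give each word +1 weight per hired CV it appears in, -1 per rejected."""
    weights = defaultdict(float)
    for text, hired in examples:
        for word in text.split():
            weights[word] += 1.0 if hired else -1.0
    return weights

def score(weights, text):
    """Score a new CV as the sum of its word weights."""
    return sum(weights[w] for w in text.split())

w = train(historical)

# Two otherwise identical CVs: the one mentioning "womens" scores lower,
# because the system reproduced the skew in its training data.
print(score(w, "experienced engineer chess club"))         # prints: 0.0
print(score(w, "experienced engineer womens chess club"))  # prints: -2.0
```

Real recruitment models are far more complex, but the failure mode is the same: the model never sees gender as a feature, yet it learns a proxy for it from the historical outcomes.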

Artificial Intelligence does not only carry biases baked in through its code; it can also learn new biases through its ability to read and absorb large volumes of data. The 'fact' that AI provides an objective and impartial view has been disproven many times.

AI systems are rarely built with gender and race in mind, which leaves them without social context. A problem that could be addressed at an early stage is often dragged out until the system starts failing and producing biased outcomes.

Reducing and removing bias in AI will be a long journey. Tech giants such as Google have been investing resources in making their AI less discriminatory, testing it with various derogatory terms to ensure the system does not respond in ways it should not.

Clearly, the bias in AI is still high. With voice assistants defaulting to female voices and servile personas, we still have a long way to go before technology is truly equitable.