Artificial Intelligence is still in its infancy, and it is difficult to predict which way the technology will go, or how quickly. It is developing fast and is one of the most promising areas of technology to date. What distinguishes AI from other technologies, however, is its human face. It is being designed to mimic human consciousness, and may one day reach human-level intelligence. The question remains: will AI be as biased as humans in perceiving the data around it, or can it teach us something about inclusivity?
For now, we know that AI systems can exhibit at least some level of bias, whether based on race, gender, age, or other attributes. For example, an early version of Google's image-recognition system infamously labeled photos of Black people as gorillas. Likewise, voice-recognition systems have struggled to understand female voices while working far better for male ones.
Another controversy emerged during the 2016 presidential election, when Facebook's algorithms amplified fear-mongering stories and pushed them toward vulnerable sections of its audience. Nor is that all: AI has also been shown to reinforce gender stereotypes in strikingly sexist ways. Many advanced systems built to tackle big problems are given male voices, whereas assistants that handle routine, secretarial tasks are given female ones. A commonly cited example of this contrast is IBM's powerful Watson system alongside Apple's everyday Siri assistant.
So why does technology pick up human habits, especially when it comes to stereotyping and sexism? The answer lies with the creators of these systems. No technology, however advanced, has the kind of intelligence that would make it prefer one group of people over another on its own. That happens only when the biases of its creators seep into the systems they build. If developers hold negative associations with a certain gender or race, their AI will develop the same associations.
This tendency could be very harmful when AI is deployed at scale. For example, if an AI is fed data suggesting that people of a certain skin color are more likely to end up in jail, the program will reinforce that same stereotype. This becomes dangerous when members of that racial group apply for admission to schools and colleges or seek treatment in hospitals. If a racial group is already underprivileged, it may end up facing more discrimination at the hands of AI than at the hands of other people.
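The mechanism is easy to see in miniature. The following sketch uses entirely hypothetical, made-up numbers: a naive model that simply learns the most frequent historical outcome for each group will faithfully reproduce whatever skew exists in its training data, even if that skew reflects biased record-keeping rather than real behaviour.

```python
# Minimal illustration with hypothetical data: the groups, outcomes, and
# counts below are invented for demonstration only.
from collections import Counter

# Imagine historical records in which group "A" is over-represented among
# "jailed" outcomes -- e.g. due to biased policing, not actual behaviour.
records = ([("A", "jailed")] * 70 + [("A", "free")] * 30 +
           [("B", "jailed")] * 30 + [("B", "free")] * 70)

def train(records):
    """'Learn' the most frequent historical outcome for each group."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(records)
print(model)  # {'A': 'jailed', 'B': 'free'} -- the data's skew becomes the prediction
```

Real machine-learning models are far more sophisticated, but the core failure mode is the same: the model has no notion of fairness, only of the patterns in its training data.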
Is it really possible to make an AI that is fully free of bias? A complete solution may prove elusive. However, as AI matures, the 'human' problems of these systems are being identified at an early stage. That should help us build better systems for the future and, in the process, push us to confront and shed our own biases.