We've seen the films where machines take over the world and mortals are obliterated. While they're entertaining fare, the consensus is that this scenario is, thankfully, far-fetched. There is, however, a more realistic issue with which we should be concerned - algorithmic bias.
"Algorithmic bias" is when seemingly harmless programming takes on the prejudices of its creators or the data it is fed. The results are problems such as, for example, warped Google searches, qualified candidates barred from medical school, and a chatbot that posts racist and sexist messages on Twitter.
One of the trickiest things about algorithmic bias is that the engineers doing the programming don't have to be actively racist, sexist, or ageist for these issues to rear their heads. By its very nature, Artificial Intelligence (AI) is designed to learn on its own, and sometimes it simply makes mistakes. Of course, we can make adjustments after the fact, but the preferred solution is to prevent bias from arising in the first place. So how do we keep bias out of AI?
Ironically, one of the most exciting possibilities of AI is a world free of human biases. When it comes to hiring, for example, an algorithm could evaluate men and women equally when they apply for the same job, or help prevent racial prejudice in policing.
Consciously or not, though, the machines we create do reflect how people see the world, and thus can adopt similar stereotypes and world views. As AI becomes increasingly ingrained in our lives, it's something we do need to be mindful of.
When it comes to AI, there's the added challenge that bias doesn't come in just one form. Recognized types include interaction bias, latent bias, selection bias, data-driven bias, and confirmation bias.
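To make data-driven bias concrete, here is a minimal sketch in Python. The "hiring model" and the dataset are entirely hypothetical: the model simply learns each group's historical hire rate, so any skew in the records flows straight into its recommendations.

```python
# A minimal sketch of data-driven bias: a toy "hiring model" that learns
# each group's historical hire rate. The dataset below is hypothetical;
# the skew in it is the point, not a real-world statistic.

from collections import defaultdict

# Hypothetical historical hiring records: (group, hired)
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learn_hire_rates(records):
    """Return each group's observed hire rate from the records."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

rates = learn_hire_rates(history)

def recommend(group):
    """Naively recommend candidates whose group's historical rate
    exceeds 50% -- faithfully reproducing the skew in the data."""
    return rates[group] > 0.5

print(rates)                            # {'A': 0.75, 'B': 0.25}
print(recommend("A"), recommend("B"))   # True False
```

No engineer here wrote anything prejudiced; the model inherits bias purely from the records it was trained on, which is exactly why fixing the data matters as much as fixing the code.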