Bias in AI algorithms: How do we keep it out?

December 07, 2017 // By Francisco Socal
Francisco Socal, vision and AI product manager with Imagination Technologies Ltd., discusses the various sources of bias that can enter into artificial intelligence and machine learning applications.

"Interaction bias" is where the user biases an algorithm based on the way they interact with it. When machines are taught to learn from those around them, they can't decide what data to keep or discard, or what is right or wrong. Instead, they simply consume everything they are given -- the good, the bad and the ugly -- and base their decision-making on it. Tay, the chatbot mentioned above, was an example of this type of bias. It was influenced by a community that taught it to be racist.  

"Latent bias" is where the algorithm incorrectly correlates ideas with things such as race, gender, or sexuality. For example, when searching for an image of a doctor, an AI might present a male doctor before a female one, or vice versa when searching for a nurse. 

"Selection bias" sees the data used to train the algorithm over-represent one population or group, making it work in their favour and at the cost of others. With the example of hiring, if AI is trained to recognise CVs only from men, then female candidates won't be successful in the application process.

"Data-driven bias" is where the original data used to train the algorithm is already biased. Machines are like children: they don't question the data they are given, but simply look for patterns within it. If the data are skewed at the outset, the output will reflect this.

The final example, similar to data-driven bias, is "confirmation bias," which involves favouring information that confirms pre-existing beliefs. It affects both how people gather information and how they interpret it. For example, if you believe that people born in August are more creative than those born at any other time of year, there's a tendency to look for data that reinforces this thinking.

Reading these examples of how bias can infiltrate AI can be concerning. But it's important to take stock and remember that the world itself is a biased place, so in some instances the results we receive from AI aren't surprising. That doesn't make them right, and it highlights the need for a process to test and validate AI algorithms and systems so that biases are caught early – ideally during development and before deployment.
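As one concrete example of what such a test might look like, the sketch below compares a model's selection rates across groups and computes a disparate-impact ratio. The four-fifths threshold in the comment is a rule of thumb drawn from US employment-selection guidance; the prediction and group arrays are placeholders for a real model's output.

```python
# A minimal sketch of one pre-deployment bias check: comparing
# selection rates across groups. The arrays are placeholders.
import numpy as np

def disparate_impact(predictions, groups):
    """Ratio of the lowest group selection rate to the highest."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return min(rates) / max(rates)

# Hypothetical model decisions (1 = selected) and group labels.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
grps  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact(preds, grps)
print(f"disparate impact ratio: {ratio:.2f}")  # flag if below ~0.8
```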
