Bias in AI algorithms: How do we keep it out?: Page 3 of 4

December 07, 2017 | By Francisco Socal
Francisco Socal, vision and AI product manager with Imagination Technologies Ltd., discusses the various sources of bias that can enter into artificial intelligence and machine learning applications.

Unlike humans, algorithms can't lie, so if the results are biased there must be a reason: the data they have been given. A human can lie about why they didn't hire someone; an AI can't. With algorithms, we can potentially detect when they're biased and tweak them so that, in the future, they overcome that bias.

AI learns as it goes, so mistakes will be made. Often it's not until an algorithm is used in the real world, where its effects are amplified, that any built-in biases are discovered. Rather than seeing algorithms as a threat, we can treat them as a unique opportunity to expose bias and correct it where necessary.

We can build systems to detect biased decision-making and act on it. Compared to humans, AI is particularly well suited to applying Bayesian methods to determine the probability of a given hypothesis, without all the potential for human bias. It's complicated, but possible, and when you consider how important AI already is (and how much more important it will become in the years ahead), it's a responsibility we shouldn't shirk.
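To make the Bayesian idea concrete, here is a minimal sketch of auditing a hiring outcome with Bayes' rule. All figures and group names are illustrative assumptions, not data from the article: the check compares the hire rate implied for one applicant group against the overall rate, where a large gap would flag possible bias for human review.

```python
# A minimal sketch of a Bayesian bias check on hypothetical hiring data.
# All numbers below are illustrative assumptions.

def posterior(p_evidence_given_h, p_h, p_evidence):
    """Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)."""
    return p_evidence_given_h * p_h / p_evidence

# Hypothesis H: "a candidate is hired".
p_hired = 0.30                 # overall hire rate (prior)
p_group_a = 0.50               # share of applicants in group A
p_group_a_given_hired = 0.35   # share of hires drawn from group A

# P(hired | group A) via Bayes' rule.
p_hired_given_a = posterior(p_group_a_given_hired, p_hired, p_group_a)

# A large gap between the group rate and the overall rate is a signal
# worth investigating, not proof of bias on its own.
print(f"P(hired | group A) = {p_hired_given_a:.2f}, overall = {p_hired:.2f}")
```

In this hypothetical example the group-conditional hire rate (0.21) falls well below the overall rate (0.30), which is exactly the kind of gap such a system would surface for scrutiny.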

As AI systems are built and deployed, it's vital that we understand them well enough to design them with awareness of potential bias issues. We forget that AI, despite all its rapid advances, is still very much in its infancy. There is still much to learn and much to improve. This tweaking will go on for some time, but in the meantime AI will get smarter, and we'll identify more and more ways to overcome issues such as bias.

It is vital that the technology industry constantly question how and why machines do what they do. While most AI operates in a black box, with the decision-making process hidden, transparency in AI is key to building trust and dispelling myths.

There is a great deal of research under way to help identify biases, such as the work being done by the Fraunhofer Heinrich Hertz Institute. Its researchers are looking into identifying different types of bias, including those mentioned earlier but also more "low-level" biases and issues that can arise during the training and development of AI.
