Another consideration is unsupervised training. At the moment, most AI models are produced through supervised training: learning from datasets with labels provided by humans. With unsupervised training, no labels are given, and the algorithm has to classify, identify, and cluster the data by itself. While this approach is typically far slower than supervised learning, it limits human involvement and can therefore reduce the conscious or unconscious human biases that labeling introduces into the data.
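To make the distinction concrete, here is a minimal sketch of unsupervised clustering on hypothetical one-dimensional data: no human provides labels, and the algorithm discovers the groups on its own. This is a toy k-means implementation for illustration, not a production algorithm.

```python
# Toy k-means sketch (hypothetical data): the algorithm groups points
# by itself, with no human-provided labels.

def kmeans_1d(points, k, iterations=20):
    """Cluster 1-D points into k groups without any labels."""
    # Initialize centroids by spreading them across the sorted data.
    pts = sorted(points)
    centroids = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious groups; the algorithm finds them without being told.
data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.3]
centroids, clusters = kmeans_1d(data, k=2)
print(sorted(round(c, 1) for c in centroids))  # → [1.0, 10.1]
```

The point of the sketch is that the grouping emerges purely from the structure of the data, which is why the approach sidesteps biases introduced during human labeling (though any bias already baked into the data itself remains).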
There are also things that can be done at the grassroots level. Technology companies need to involve a diverse range of people when creating a new product, site, or feature. Diversity means algorithms are less likely to be fed data that is unintentionally biased. There is also a greater chance that any bias will be spotted when a number of people are analyzing the output.
Algorithmic auditing also has a role to play. In 2016, a Carnegie Mellon research team discovered algorithmic bias in online job advertisements. When they simulated users searching for jobs online, Google's ad system showed listings for high-income jobs to men nearly six times more often than to women. The team concluded that carrying out internal auditing would have helped to reduce this type of bias.
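An internal audit of this kind can start very simply: compare how often a given ad is shown to each demographic group and flag large gaps for human review. The sketch below uses hypothetical impression counts and a made-up threshold, loosely echoing the size of the disparity the Carnegie Mellon team reported.

```python
# Toy audit sketch (hypothetical data and threshold): compare per-group
# ad impression rates and flag the gap if it exceeds an allowed ratio.

def audit_ad_impressions(impressions_by_group, max_ratio=1.25):
    """Return (disparity_ratio, flagged) for the widest rate gap.

    impressions_by_group maps a group name to a tuple of
    (times_shown, total_eligible_users).
    """
    rates = {group: shown / total
             for group, (shown, total) in impressions_by_group.items()}
    ratio = max(rates.values()) / min(rates.values())
    return ratio, ratio > max_ratio

# Hypothetical numbers illustrating a sixfold gap.
data = {"men": (1800, 10000), "women": (300, 10000)}
ratio, flagged = audit_ad_impressions(data)
print(f"disparity ratio: {ratio:.1f}, flagged: {flagged}")  # → 6.0, True
```

A real audit would control for confounders (job category, region, bidding behavior) before attributing a gap to the algorithm, but even a crude check like this one surfaces disparities early enough to investigate them.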
Simply put, machine bias is human bias. Bias in AI can develop in a multitude of ways, but the reality is that it comes from one source: us.
The key to dealing with the issue depends on technology companies, engineers, and developers all taking visible steps to safeguard against accidentally creating an algorithm that discriminates. By carrying out algorithmic auditing and maintaining transparency at all times, we can be confident of keeping bias out of our AI algorithms.