Are AIs biased because they are built by white males? No and yes and no.

If the people in charge of designing and training AI systems are biased (along the lines of race, gender, religion, etc.), do their biases show up in the finished AI? Not so long ago, one of Amazon’s facial-recognition AIs was revealed to have incorrectly matched US politicians to the identities of US criminals. One source that publicized the story is the book The Reality Game, by Samuel Woolley. Woolley is a credentialed researcher who has met with experts in various fields, and one of his assertions is that deep-learning AIs are biased because they are designed and trained by white men. My experience in the field of AI says that this claim is a gross error, but also that AIs do learn biases. There are two prominent flaws in Woolley’s claim:

First, and easiest to deconstruct: he says that AIs are designed and trained by white men. My research as a PhD student was in the same field as the Amazon example above (deep learning for computer-vision tasks). Reading publications, attending conferences, and so on, I observed that white men were not the majority of the players in this space. (In fact, there were more Asian men than any other demographic, enough so that a person wouldn’t even need to pay close attention to pick up on this fact.)

Second, the author asserts that the trainers’ own biases are the biases that show up in the AI. I wholeheartedly agree that a trained AI, at least any AI that needs to perform a difficult task, is very likely to be biased, but those biases do not coincide with the trainers’ biases. So where does this bias come from?

Assuming the best, the biases arise from the training data used to train the artificial neural network (the “brain” in deep learning). That dataset might carry implicit bias because it was assembled shoddily, but it can also carry bias for other reasons, including the possibility that there is bias in the real world itself. As an example, suppose we are training an AI to decide whether children should be considered for an advanced education program, based on video footage of them performing in elementary-school academic competitions.

The AI researcher builds the AI and then might train it by showing it a lot of video footage of existing academic elites, perhaps from when they were younger, attending the same sort of elementary-school academic competitions that the current generation of children attend.

What if there is a real-world bias, perhaps rooted in an unfair historical reason, which results in the current generation of academics coming predominantly from a single background, gender, race, etc.? The AI is not supposed to learn that these demographics make for a better candidate, but if it watches video footage from the dataset we have just described, it will likely learn to look for those demographics in the candidates. The AI learns a bias, so the researchers face a hard problem: remove the bias from the datasets they use for training the AI. But how can they do this?
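To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. It has nothing to do with Amazon’s system or any real admissions tool; the feature names and numbers are invented. A toy classifier is trained on labels shaped by a historical bias, and it duly learns to weight the demographic attribute:

```python
# Hypothetical sketch: a toy classifier trained on data where a demographic
# attribute correlates with the label because the historical labels were
# themselves biased. The model absorbs that bias from the data, not from
# whoever wrote the training code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# "skill" is the feature we actually want the model to use.
skill = rng.normal(size=n)

# A binary demographic attribute that says nothing about skill...
group = rng.integers(0, 2, size=n)

# ...but the historical labels favoured group 1, so the label
# correlates with both skill and group.
label = ((skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# The learned weight on "group" is large: the bias came in through the data.
print(dict(zip(["skill", "group"], model.coef_[0].round(2))))
```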

In fact, you may think of multiple ways to rework your dataset to avoid bias. That’s great; however, it introduces new complications, notably that the training data, now more than ever, can contain the biases of whoever assembled it. Imagine you are aware of a bias in the original dataset that disposes the AI to favour shorter candidates. You might modify the dataset with the intention of avoiding this bias, but is the dataset better now? Does it represent the real world better or worse than before? If a researcher intentionally modified it, does it carry the researcher’s biases? You’re unlikely to know for certain unless you do such an egregious job that the AI’s new biases are blatant. There’s no formula you can follow to build the best dataset for the service of society. It’s a hard problem, but the fact that this problem exists does not indicate that the researchers themselves have the same biases or that too many of them are white and male. (Seriously, just go to a single national or international academic conference to debunk that latter point.)
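For illustration only, here is one way such a reworking might look in code, assuming the toy arrays X, label and group from the sketch above: downsample so that every (group, label) combination is equally represented. The raw correlation between group and label disappears, but, as argued above, whether the result represents the real world any better is exactly the open question, and the choices baked into the reworking are the curator’s own.

```python
# A hedged sketch of one possible mitigation (one of many): downsample the
# training set so every (group, label) cell has the same number of examples.
# What to equalise, and how, is a judgement call made by whoever curates the
# data, which is the complication described above.
import numpy as np

def rebalance(X, y, group, seed=0):
    """Return a subset of (X, y, group) with equal-sized (group, label) cells."""
    rng = np.random.default_rng(seed)
    cells = [np.flatnonzero((group == g) & (y == lbl))
             for g in np.unique(group) for lbl in np.unique(y)]
    smallest = min(len(idx) for idx in cells)
    keep = np.concatenate([rng.choice(idx, size=smallest, replace=False)
                           for idx in cells])
    return X[keep], y[keep], group[keep]

# Continuing the toy example above:
# X_bal, y_bal, group_bal = rebalance(X, label, group)
```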
