Elsewhere, predictive algorithms have been shown to disproportionately penalise people of colour. Neither race nor gender was included as an input to these algorithms.
Are the algorithms to blame?
When we call algorithms biased, we anthropomorphise them and shift the blame to the tool. However, the ones who should be held accountable are the actual decision makers. Moreover, "bias" means something different in machine learning than in everyday speech: there it usually refers to a model's systematic error (or to a constant term in the model), not to prejudice.
In the case of the algorithm Amazon was using, the input consisted entirely of historical data: previous hiring decisions made by people. Blaming the algorithm, then, does little to fix the problem, as the Harvard Business Review has argued.
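This mechanism is easy to demonstrate on toy data. The sketch below (entirely hypothetical: the groups, the "zip code" proxy, and the hiring rule are invented for illustration) generates historical decisions that are biased against one group, then fits a naive model that only ever sees the zip code. Because the proxy correlates with group membership, the model reproduces the disparity without ever receiving the protected attribute as an input.

```python
import random

random.seed(0)

# Hypothetical synthetic data. The protected attribute ("group") is never
# shown to the model, but "zip_code" correlates with it -- a proxy variable.
def make_applicant():
    group = random.choice(["A", "B"])
    # Proxy: group B applicants mostly live in zip 2.
    zip_code = 2 if (group == "B" and random.random() < 0.8) else 1
    skill = random.random()
    # Historical human decision: a higher bar for group B despite equal skill.
    hired = skill > (0.5 if group == "A" else 0.7)
    return group, zip_code, skill, hired

history = [make_applicant() for _ in range(10_000)]

# A naive "model": learn the historical hire rate per zip code and use it
# to score new applicants. Group membership is never an input.
rate = {}
for z in (1, 2):
    hires = [hired for _, zc, _, hired in history if zc == z]
    rate[z] = sum(hires) / len(hires)

# Zip 1 (mostly group A) scores higher than zip 2 (mostly group B),
# so the learned model penalises group B through the proxy alone.
print(rate[1] > rate[2])  # prints True
```

The point of the sketch is that dropping the protected attribute from the inputs does not remove the bias; the model recovers it from whatever correlates with that attribute in the biased training labels.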
Human judgments are even worse than algorithms
If we stick with the original, flawed human decisions, nothing changes: by continuing to rely on human judgment, we remain biased.
People are:
- Inconsistent: We are unreliable, and the probability of a mistake rises as we tire. Consistency is also nearly impossible to verify, because a fully transparent decision-making process is hard to achieve.
- Easily distracted by irrelevant information: Taller people make more money. We certainly do not intend to hire or promote people based on height, yet it somehow still influences our judgments.
- Not transparent and reliant on subjective criteria: Any hiring manager may struggle to explain exactly what being a "team player" means. How reliably can you recognise this trait?
-jk-