Tomas Buckley takes a cold, hard look at how AI mirrors human flaws and how, if we are not careful, it may propagate more systemic oppression.
In writing about AI, I am in good company. Just like Elon Musk, Bill Gates and Sam Harris before me, I will try to inform you about something of which I know little. We often mistakenly believe that algorithms are objective viewers of our chaotic world. This is untrue: these machines are biased too. Bias pops up everywhere in our world. Like the red Quality Streets, it is something that nobody wants but that will always exist anyway. We can, however, take some measures against it. You will find plenty of examples of unintentionally misogynist or racist AIs online, but this normally isn't due to malicious intent on anyone's part.
Take the prison population of Ireland as an example. There are roughly 3,550 men in prison in Ireland and 130 women. Let's say that our judges get it right 95% of the time, meaning 5% of convictions are false positives, a term I'd normally explain but I think Covid has explained for me already. This means that there would be about 6 innocent women and 177 innocent men in jail. Taken out of context, we would be shocked to hear that 97% of innocent people in jail are men. But this is simply a result of there being more men in court, not of any bias against men.
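For the sceptical, the arithmetic above can be checked in a few lines. The prison figures and the 5% false-positive rate are the ones quoted in this article; everything else is just multiplication:

```python
# Figures from the article: roughly 3,550 men and 130 women in Irish
# prisons, with an assumed 5% wrongful-conviction (false positive) rate.
men, women = 3550, 130
fp_rate = 0.05

innocent_men = int(men * fp_rate)      # 177 (177.5 truncated)
innocent_women = int(women * fp_rate)  # 6 (6.5 truncated)

# Share of the innocent prisoners who are men.
share_men = innocent_men / (innocent_men + innocent_women)
print(innocent_men, innocent_women)   # 177 6
print(round(share_men * 100))         # 97
```

The same 5% rate applied to both groups still leaves men as 97% of the wrongly convicted, purely because far more men pass through the courts in the first place.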
But what happens if we apply this reasoning in a place where systemic discrimination puts minority groups in the courtroom more often? In this case, we would see more innocent people of that minority put in jail, not because of any bias on the part of the judges, but because of statistics and systemic oppression. So if the fault is in society and not in the judge or the AI algorithm, how can we reverse this discrimination? One solution is to actively create a bias that pushes against what society has created. But how? How strongly should we push against this tendency? How many guilty people are we willing to let walk free to save the innocent people who need saving? And how long would we need to apply such measures for?
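To make "actively creating bias" concrete, here is a minimal sketch of one way it could work: demanding a higher risk score before convicting members of an over-policed group. Everything here is a hypothetical illustration; the function name, the scores and the 0.1 adjustment are assumptions invented for this example, not any real sentencing model:

```python
# Hypothetical sketch: a group-specific conviction threshold that
# pushes against societal over-representation. All values illustrative.

def would_convict(risk_score: float, group: str,
                  base_threshold: float = 0.5,
                  adjustment: float = 0.1) -> bool:
    """Return True if the model would convict at this risk score.

    For the (hypothetical) over-represented group, require a higher
    score before convicting: fewer wrongful convictions, at the cost
    of letting some guilty people walk free.
    """
    threshold = base_threshold
    if group == "over_represented":
        threshold += adjustment  # the deliberate counter-bias
    return risk_score > threshold

# The same borderline score is treated differently by design:
print(would_convict(0.55, "majority"))          # True
print(would_convict(0.55, "over_represented"))  # False
```

The size of `adjustment` is exactly the unanswered question in the paragraph above: how strongly to push, and for how long.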
Suppose, for the moment, that we have an algorithm that fairly balances society's oppression. If it is used often in courtrooms, its existence will eventually become public knowledge. Acquitted minorities may find themselves the subject of further discrimination from people who think they deserve to be in jail. Real criminals may also increase their criminal activity, knowing that their probability of conviction has decreased. Chaos would ensue and hatred would be amplified. We need to find a way to influence society without it knowing that it's being influenced. Like giving that friend who stinks shower gel for Christmas, we need a subtle way of solving the problem. I believe that we need an algorithm that doesn't act on every case: one that has its say on enough cases to make a difference, but not enough to draw attention to itself. As for the contents of the algorithm being discovered, those would normally be protected as the intellectual property of the company that made it.
As we have seen, the algorithms mentioned here are a system of power like any other, but they still need a mammy to hold their hand; someone to tell them when they're being bold and when they shouldn't eat something. We need to help them to recognise bad men and tell them not to bully the little kid. We should all be informed of what they can and can't do, because they influence all of our lives. We should all ask for algorithms that fix us.