I recently watched a Netflix documentary titled “Coded Bias”, and I had two immediate reactions:
1. Ha! I KNEW the algorithms were biased against global majority (Black and brown) people.
2. We’re all in deep doo-doo unless something changes about the way tech companies, law enforcement and others use the algorithms.
In the documentary, MIT Media Lab researcher Joy Buolamwini describes her discovery that facial recognition algorithms were exceedingly bad at properly recognizing Black and brown faces. From there, she set out to find out how the algorithms were being used, what the inherent issues were, and whether this headlong rush to bring about Terminator’s Skynet could be halted.
What’s an algorithm and why is it a problem?
If, like me, you’re a bit hazy on what an algorithm actually IS, the documentary explains that it’s a predictive mathematical model. It uses the data that’s fed into it to draw conclusions and take or suggest actions. So, this can affect whether you get a job interview, how your job performance is rated, your eligibility for a mortgage or loan and, what’s most troubling, whether you’re seen as a potential risk or threat.
Surprise, surprise, this last one is more likely to happen to highly melanated people. But lest you think pale skin lets you off the hook, in one instance an algorithm used in a hiring process threw out all resumes from women, so only men got interviews.
There are a couple of issues with using algorithms in this way. First, as they used to say in the early days of coding, GIGO (garbage in, garbage out): if the input is flawed, the output will be flawed, too.
The second issue is where a lot of that input comes from. Programmers are mostly white men, therefore early input to facial recognition software was mostly white men, so facial recognition algorithms are best at identifying white men and worst at identifying Black women. By the end of the documentary, we learn that there have been improvements, but Black women still get the short straw.
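To make that concrete, here’s a minimal toy sketch of how a skewed training set skews the results. Everything in it (the two groups, the numbers, the one-feature “faces”) is invented purely for illustration; real facial recognition systems are vastly more complex, but the underlying garbage-in, garbage-out dynamic is the same.

```python
# Toy illustration of GIGO: a "recogniser" tuned on unrepresentative data.
# All groups, features, and numbers are invented for illustration only.
import random

random.seed(42)

# Pretend each "face" is a single number; the two groups simply have
# different typical values (a stand-in for real-world variation).
def sample(group):
    centre = 0.3 if group == "A" else 0.7
    return random.gauss(centre, 0.1)

# Skewed training set: 95 examples from group A, only 5 from group B.
train = [sample("A") for _ in range(95)] + [sample("B") for _ in range(5)]

# "Train" by averaging - the model's idea of what a typical face looks like
# is dominated by the over-represented group.
model_centre = sum(train) / len(train)

def recognised(face, tolerance=0.25):
    # The model only recognises faces close to what it saw in training.
    return abs(face - model_centre) <= tolerance

# Evaluate on balanced test data: 1,000 faces from each group.
test_a = [sample("A") for _ in range(1000)]
test_b = [sample("B") for _ in range(1000)]
acc_a = sum(recognised(f) for f in test_a) / 1000
acc_b = sum(recognised(f) for f in test_b) / 1000
print(f"Group A recognition rate: {acc_a:.0%}")
print(f"Group B recognition rate: {acc_b:.0%}")
```

Running this, the under-represented group’s recognition rate collapses while the over-represented group’s stays high. Nobody wrote a line of code saying “treat group B worse”; the skewed input did it all on its own.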
As if that’s not bad enough, there are all the other ways algorithms are used. I, along with many others, have written before about how the content moderation policies on many social media sites seem to favor white men while reserving the harshest punishment for Black women. And about how writing about racism and anti-racism still seems to be a greater social media crime than using hate speech against the very people who are fighting against racism. Algorithms do a lot of the initial heavy lifting in those cases, and we know how they usually turn out.
The push to profit via algorithms is troubling
Plus there’s the fact that large corporations bent on profit already own much of our data. (Before you go deleting all your social media accounts, know that if you’ve ever been ON the grid, it’s probably already too late to go OFF it, as they already have your data and could even have sold it several times. That’s just my opinion, though.)
I’m going to pull out two particularly troubling aspects of the documentary (though who am I kidding; it was ALL troubling).
First, the attempt by global tech companies to sell facial recognition data to law enforcement, though currently halted in some jurisdictions, is unlikely to stop long term. The example of a terrified 15-year-old Black boy in London who was mistakenly identified as a problem puts that issue into sharp relief.
England being England, the police stopped trying to detain him once the error was apparent. But I was chilled by the coppers’ attitude - in another case - that trying to avoid facial recognition cameras was tantamount to being guilty of something. And I couldn’t help wondering whether that lad would still be alive if the same thing had happened in the trigger-happy US.
Who’s guarding the guards?
Second, the documentary revealed that because these algorithms use machine learning, nobody actually knows everything they’re using as input or how they’re arriving at decisions. In other words, the algorithmic gatekeepers have no gatekeepers of their own. As we’ve seen on LinkedIn and elsewhere, there’s a tendency to delete “suspect” content first and leave the innocent content creator with a long battle to get it restored after human review.
And what do we do if people assume that mathematical models are smarter than humans, which they really aren’t? Remember, GIGO.
It’s not good enough. There has to be some oversight to avoid continuing to encode discrimination. And encoded it is - a short-lived Microsoft chatbot spent just a few hours on Twitter and emerged as a racist and homophobe, so Microsoft had to shut it down.
3 standout quotes from “Coded Bias”
A few quotes stood out to me:
"The past dwells within our algorithms" - the underlying narrative of white supremacy is now encoded in the algorithms making decisions about our lives. That’s a big problem, especially for global majority people.
"What will the powerful do to us with AI?” - a troubling question indeed, as the algorithms replicate harmful power dynamics
"Racism is becoming mechanized"- nuff said. Many of us have already seen this, and it looks like it’s only going to get worse.
A Terrifying Reality With a Sliver of Hope
Overall, the documentary reveals that the data we’re letting companies collect through social media sites and apps, plus other seemingly less harmful methods, is being used to control our world view, enhance inequality, and hinder societal progress. It seems pretty bleak, not to mention terrifying.
However, there IS one tiny glimmer of hope: the Algorithmic Justice League - an organization set up to ensure that technology serves the many and not just the few. I truly hope they win the uphill battle to prioritize human rights over tech companies’ desire for huge profits. But I’m not totally convinced they will.
Have you watched the documentary? What stood out for you?
© Sharon Hurley Hall, 2021. All Rights Reserved.
Cover photo courtesy of Canva.