
Artificial Intelligence and Discrimination: Where Bias Meets Tech


The spread of misinformation on the internet is no new problem. 

Over the past few years, we’ve all borne witness to the alarming rise in conspiracy theories, deep fakes, and “fake news.”

Tweets by a government official have claimed that climate change is a hoax and that the coronavirus was “manufactured” abroad. Recently, the far-right conspiracy group QAnon has grown out of its fringe status and become a household name. A 2018 study by researchers at MIT determined that false stories are 70 percent more likely to be retweeted on Twitter than true stories, and that falsehoods also spread 10 times faster than the truth.

It’s no news, either, that the spread of misinformation is highly damaging, especially to traditionally discriminated-against groups. For example, legal scholar Danielle Citron, in an interview about deepfakes, notes, “98 percent of deep fakes that are appearing online are deep fake sex videos. And 99 percent of deep fake sex videos involve women.”

In the cases of “fake news,” deep fakes, and dangerous conspiracy theories, the spread of misinformation seems rooted in distinctly human phenomena.

The MIT study mentioned above determined that it wasn’t algorithms furthering the spread of false stories on Twitter; it was humans. According to the “novelty hypothesis,” people tend to react to false stories with emotions like surprise and disgust, motivating them to retweet, while true stories are more likely to evoke sadness, anticipation, and trust.

Confirmation bias, the tendency of individuals to only process and internalize information consistent with their own prior beliefs, can serve to slowly radicalize individuals and even lead them toward isolated groups with extreme and dangerous agendas, like QAnon.

Humans, by nature, are discriminatory, imperfect, and prone to manipulation. We must acknowledge and consciously work against these tendencies in order to be as fair, unbiased, and knowledgeable as possible while on the internet. 

On the other hand, we should be able to trust that algorithms and artificial intelligence, unlike humans, are objective, unbiased, and accurate, right? Unfortunately, that’s far from the case.

Widely used artificial intelligence services harbor undeniable racial and gender biases. 

Joy Buolamwini is a Ghanaian-American computer scientist and self-proclaimed “poet of code” based at MIT. Her thesis, Gender Shades, evaluates the accuracy of facial recognition technologies that are powered by artificial intelligence. Her research focuses on gender and race, ultimately determining that shocking biases exist in these technologies and that transparency is essential when it comes to artificially intelligent products that focus on human subjects.

Buolamwini evaluated the artificial intelligence behind facial recognition technology from IBM, Microsoft, and the government-used Face++, determining that all three performed better on male faces than on female faces, and better on lighter skin tones than on darker skin tones. In a subsequent study, researchers determined that Amazon’s facial recognition technology had even more difficulty identifying female and darker-skinned faces.

It’s also important to note that, at the time of Buolamwini’s evaluation, none of the companies tested had publicly reported anything about their technology’s performance across the range of human attributes like gender, skin type, ethnicity, or age. Buolamwini argues that inclusive product testing and reporting are vital to creating ethical artificial intelligence systems. This technology is increasingly used at airports and by law enforcement, areas where demographic accuracy is essential.
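
To make the idea of inclusive testing and reporting concrete, here is a minimal sketch of what a disaggregated accuracy audit can look like: instead of publishing one overall accuracy number, results are broken out by intersectional subgroup (gender and skin type), in the spirit of Gender Shades. The data and labels below are hypothetical placeholders, not Buolamwini’s actual benchmark or code.

```python
# A minimal, illustrative sketch of disaggregated accuracy reporting.
# All records here are hypothetical; in a real audit, the predictions
# would come from the facial analysis system under test.
from collections import defaultdict

# Each record: (predicted_gender, true_gender, skin_type)
results = [
    ("male",   "male",   "lighter"),
    ("female", "female", "lighter"),
    ("male",   "female", "darker"),   # misclassification
    ("female", "female", "darker"),
    ("male",   "male",   "darker"),
    # ... many more labeled examples in practice
]

totals = defaultdict(int)
hits = defaultdict(int)

for predicted, actual, skin_type in results:
    group = (actual, skin_type)       # intersectional subgroup
    totals[group] += 1
    if predicted == actual:
        hits[group] += 1

# Large gaps between subgroups reveal bias that a single
# aggregate accuracy figure would hide.
for group in sorted(totals):
    accuracy = hits[group] / totals[group]
    print(f"{group[0]:>6} / {group[1]:<7}: {accuracy:.0%} "
          f"({hits[group]}/{totals[group]})")
```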

Buolamwini introduces a new term through which to understand this bias. We’re probably all familiar with the male gaze, the white gaze, and the postcolonial gaze; Buolamwini adds “the coded gaze” to our vocabularies. Automated systems reflect the priorities, preferences, and even prejudices of their creators, and machine learning often takes place through databases that are predominantly white and male. “Under the assumption of false machine neutrality,” Buolamwini emphasizes, “we risk losing the gains made with the civil rights movement and women’s movement.”

In her talk, “Compassion through Computation: Fighting Algorithmic Bias,” which she gave to the World Economic Forum in 2019, Buolamwini draws attention to the scope and urgency of holding algorithms accountable. According to a Georgetown Law study, one in two Americans has their face logged in a law enforcement facial recognition network; these databases are used for unregulated searches that employ algorithms of untested accuracy.

This has far-reaching implications for democracy and surveillance in our country and across the globe. For example, in 2018, news outlet The Intercept published an investigative article calling out IBM’s use of NYPD surveillance footage to develop tech that allows police to analyze footage by skin color, enabling tools for racial profiling. Along with harmful discrimination in law enforcement contexts, concerning possible uses of facial recognition technology include the bolstering of mass surveillance and the weaponization of AI.

In late 2018, Buolamwini partnered with the Georgetown Law Center on Privacy and Technology to launch the Safe Face Pledge to counteract these problems by providing organizations with actionable steps to follow ethical artificial intelligence principles. This agreement is the first of its kind, and its signatories make commitments to “show value for human life, dignity, and rights, address harmful bias, facilitate transparency, and embed commitments into business practices.”

Buolamwini is also the founder of The Algorithmic Justice League, through which she hopes to help create a world with more ethical and inclusive technology. The organization has a bold and ambitious mission: “to raise public awareness about the impacts of AI, equip advocates with empirical research to bolster campaigns, build the voice and choice of most impacted communities, and galvanize researchers, policymakers, and industry practitioners to mitigate AI bias and harms.” The AJL works toward its goal through advocacy, research, art, storytelling, policy, and outreach.

Although there’s still work to be done, awareness is the first step toward addressing and regulating discriminatory use of artificial intelligence and facial recognition technologies that affect us in all realms of our lives, from healthcare to economic opportunity to our criminal justice system.

Technology should serve all of us, not just the privileged few.


For more information on the real-world implications of algorithmic bias, Joy Buolamwini recommends Automating Inequality: How High Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks and Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil. 

To learn more about the spread of false information on the internet, listen to “Warped Reality,” an episode of the TED Radio Hour Podcast.

And to dive deeper into questions of racial and gender equality, feel free to check out Novel Hand’s Justice and Human Rights articles and resources.
