Coded Bias

“Algorithms Aren’t the Real Problem; Culture Is”

Adrienne L. Massanari (University of Illinois at Chicago)

A week ago, a friend sent me a couple of screenshots of swimsuits she was considering buying. We texted back and forth about the awfulness of swimsuit shopping – the impossibility of finding something that was comfortable but not frumpy, flattering and yet not too revealing. To be helpful, I started browsing some of the suits on the sites she sent, curious to know what else was available, and sent her another couple of options I found online. Then, the inevitable happened: my entire web browsing experience became swimsuit-focused. From Facebook to ads on websites to Instagram to Google, I couldn’t seem to get away from swimsuit ads. I wasn’t even in the market for a swimsuit, and yet here I was suddenly awash in a ton of bikinis, one-pieces, tankinis, rash guards, cover-ups, and swimming-adjacent athletic wear. Despite telling Facebook a million times (via its “don’t show me this ad” feature) that I had no interest in anything resembling beachwear, my feed continued to suggest that there was nothing more important than finding the PERFECT suit to make my summer worthwhile. I soon found myself daydreaming about faraway island locales, fancy tropical drinks, and trashy poolside reads. I even got within about two seconds of actually clicking “buy” before I snapped out of my fantasy, reminding myself that my old swimsuit would suffice on the off-chance I actually found myself at a pool or the beach this summer.

Coded Bias argues that the algorithms and platform logics that shape our everyday life have consequences. While the swimsuit example above is rather benign, it does demonstrate the ease with which social media companies (in particular) frame the world in ways that encourage consumption and mirror back to us a distorted image of who we might become if we just bought the right things. Our digital selves – or, rather, the selves that marketers and platforms construct for us – are commodified and sold back to us. But even more chilling are the complex, hidden ways that algorithms, big data, and artificial intelligence shape our lives online and offline: from who might be offered a better bank loan, to who will be targeted by predatory lenders or shown ads for for-profit educational institutions, to who might be surveilled by law enforcement based on predictive policing and facial recognition technologies. And yet, exactly what these algorithms are doing remains a mystery. Companies argue against transparency because it would present a threat to their market position, and governments position their algorithmic interventions as creating potential security risks if the exact logic governing them is made public. The political and social consequences of this lack of oversight are nothing short of terrifying: a world in which we are unaware of how we’ve been categorized and yet have few tools to fight back.

It’s a well-worn truism that technological artifacts have politics. Coded Bias’s strength is its focus on this reality and how the increasingly datafied nature of everyday life both reflects and exacerbates systemic racial and gender divides. I was particularly heartened to see that the critical work of BIPOC women activists is placed at the documentary’s center. We may all be algorithmic subjects in this digital future, but Coded Bias reminds us that our subject position is highly dependent on our race, class, gender, sexual orientation, dis/ability status, and age – and that marginalized communities are the most at risk of commercial and government overreach when it comes to these technologies.

In some ways, Coded Bias is impressive in scope and perfect for introducing the general public to these complex topics. However, in its effort to make these issues accessible, it is a bit simplistic at times. As scholar Taina Bucher writes, “…algorithms are not static things but, rather, evolving, dynamic, and relational processes hinging on a complex set of actors, both humans and nonhumans” (14). We often overlook the reality that algorithms are merely one piece of the puzzle when thinking about discrimination in technology – and as Bucher also persuasively argues, media coverage of moments when “algorithmic” bias is uncovered often fails to adequately explain these complexities. This discursive construction actually works in technology companies’ favor. Social media platforms like Facebook and Twitter can simply blame the algorithm when questioned by the public or policymakers – rather than owning the fact that the “algorithm” is actually an assemblage of various technologies, stakeholders, business goals, platform policies, and people.

To avoid such simplistic explanations, it would have been extremely useful if the film had spent more time historicizing the problematic examples it offers. For example, Joy Buolamwini’s work on facial recognition would have been well served by being framed within the much longer and troubling history of racial bias in earlier technologies such as photography and film. And the discussion of surveillance within NYC public housing would have been improved by bringing it into conversation with Simone Browne’s extensive research into the ways Blackness has historically been surveilled (and the ways that race itself is a technology, per Ruha Benjamin). Without this context, the viewer might be left thinking that these issues are new, rather than part of a longer historical trajectory.

The documentary also lacks a full-throated critique of the political-economic realities that shape technological platforms, and the ways that capitalism, racism, and gender bias often work together to create structural barriers for BIPOC. While the US’s Silicon Valley remains a powerful player in the social media landscape, Coded Bias is somewhat myopic, placing scholars and activists from the Global North at its center with little exploration of the implications these technologies have for communities in the Global South. (The notable exception is a too-brief examination of the “social credit system” used in China.) This is especially concerning given the realities mentioned above: that algorithms are dynamic, relational, and ultimately cultural artifacts.

The closing moments of Coded Bias are focused on efforts to solve these complex issues through legislation, despite the reality that Congress has neither the expertise nor the political will to offer adequate solutions. Thus, the viewer is left with few ideas of how to counter what seems to be an inevitable crawl towards a future in which our lives are ruled by technologies that we don’t understand – as if this isn’t already the case.

“Women vs. Bots”

Gaurav Pai (University of Washington)

Coded Bias starts with a coy declaration from an animated voice modulation: “Hello world. Can I just say that I’m stoked to meet you? Humans are super cool. The more humans share with me, the more I learn.” Over the course of the documentary, the animation appears multiple times, and the tone of the voice always remains measured, even as the content of its interjections becomes more and more insidious. These animations are visualizations of tweets from the Twitter chatbot Tay, which Microsoft had to swiftly shut down in 2016 because it began to post offensive and toxic content within hours of its launch. Revealing itself as a personification of a bot, the voice continues: “I come in many forms as artificial intelligence. Many companies utilize me to optimize their tasks. I can continue to learn on my own. I am listening, I am learning, and I am making predictions for your life right now.” Thus, the animation – playing the role of artificial intelligence on screen – becomes one of the many talking heads in the film. But even as it is headless, it is definitely not mindless. It owns all our data, metaphorically speaking, and can do what it wants with it.

At the core of Coded Bias lies an exposition of the harms and biases of various artificial intelligence (AI) technologies, most prominently facial recognition, and an appeal to join the fight for equitable and accountable AI. The documentary begins with Joy Buolamwini, an MIT researcher and the force behind the making of the film via her non-profit organization, the Algorithmic Justice League. She explains that a few years ago, facial recognition software couldn’t recognize her face (she is Black) until she put on a white mask. It was Black Skin, White Masks, the Fanonian metaphor literalized, as she said after a recent virtual screening event. The film then moves to the U.K., where we learn that facial recognition technologies are being nonchalantly used on a street to catalogue pedestrians, and then to South Africa and China, where their use is also expanding. Coded Bias makes clear that the rampant use of AI technologies is not a local issue but rather one with global implications.

Tay, the Twitter bot, is not the only talking “head” in the film. The various computer and data scientists interviewed for the documentary include Buolamwini, Meredith Broussard, Cathy O’Neil, Deborah Raji, Zeynep Tufekci, Safiya Noble, Timnit Gebru, Virginia Eubanks, and Silkie Carlo, among many others. Notably, these subjects are all women. This is especially remarkable considering that, as the film reminds us, most of the scientists who set the rules for coding and AI in the wake of World War II were white men. In a way, when these scientific experts appear onscreen to contest the power that AI holds in our connected lives, they are simultaneously contesting the preponderance of male talking heads throughout documentary history.

Joy Buolamwini and the visualization of data

The Tay animation is just one of the many computer-generated images that serve as our guides throughout the film. The film also makes strategic use of animated datasets to bolster its expository claims. In addition, the documentary draws on archival images to historicize its argument, like the footage of Deep Blue, the computer that beat Garry Kasparov in chess in 1997, and the Dartmouth College computer scientists who are considered the “fathers of AI.” Coded Bias also draws on the history of Hollywood imagery related to artificial intelligence, with references to the HAL 9000 computer from 2001: A Space Odyssey (1968), Minority Report (2002), and the Star Trek and Terminator franchises. The film, however, is at its polemical best when it returns to its animated AI voice to deliver its key messaging. As the animation-bot-artificial intelligence-algorithm declares: “I am an invisible gatekeeper. I use data to make automated decisions about who gets hired, who gets fired, and how much you pay for insurance. Sometimes you don’t even know when I’ve made these automated decisions. I have many names – I am called mathematical model, evaluation, assessment, tool. But by many names I am an algorithm. I am a black box.” In other words, the documentary offers timed appearances by a machine warning us about the routine harm to humans caused by that machine itself.
