In Coded Bias, Shalini Kantayya explores the bias inherent in facial recognition and surveillance, and the consequences that technology could have for marginalized people.
Shalini Kantayya’s latest documentary, Coded Bias, sets its sights on the marginalized voices in tech research to highlight the inherent bias in the code used for facial recognition software. The film opens on Joy Buolamwini, a Ghanaian-American computer scientist who discovers that the facial recognition software she uses for work can’t recognize her unless she literally puts on a white mask. In the UK, the film follows Silkie Carlo, director of Big Brother Watch, an organization monitoring the use of facial recognition technology by law enforcement.
These are only two of the women in Kantayya’s film, but there are many more. They come from different backgrounds and academic fields, but they share the same fight: better regulation and more transparency in the world of artificial intelligence, from the code that makes facial recognition work — or not work — to the dangers of algorithmic determinism.
I talked to Shalini Kantayya on the phone in lockdown about Coded Bias, how these opaque technologies impact us every single day, and how her love of science fiction informed her film.
Seventh Row (7R): What initially drew you to make Coded Bias?
Shalini Kantayya: All of my work has to do with how disruptive technology impacts inequality — whether it makes the world more or less fair, especially for people who are often marginalized. I discovered the work of Joy Buolamwini and Cathy O’Neil through their TED Talks. Joy talks about racial bias in facial recognition, whereas Cathy talks about the blind faith that we have in big data. Dr. Zeynep Tufekci talks about how we are unknowingly building the tools of an authoritarian state. These talks are how I fell down this rabbit hole, and they were the catalyst for me wanting to explore the dark underbelly of big tech.
7R: Coded Bias explores many different forms of AI and surveillance. Was this something you set out to do, or did it evolve throughout the process?
Shalini Kantayya: It would’ve been much easier for me to just make a film about facial recognition. I understand that the film sprawls in all these different directions — how algorithms impact our opportunities. But as I began to understand these issues, I felt a responsibility to translate them to the public. Facial recognition is the most primal example, the way that we can most viscerally understand how these algorithms work. It was important for me to use facial recognition as a catalyst to better understand the more invisible and opaque systems that might be limiting our opportunities and interfering with our civil rights and democracy.
7R: I feel like the medium of film might have an advantage in telling stories like these, because you can actually show it in a way that a piece of text maybe cannot: viscerally, as you say.
Shalini Kantayya: Oh, absolutely. I think that one of the greatest technologies in the world is the human heart and the power of empathy. We are now seeing a movement sweeping the globe. I live in Brooklyn, and I can literally hear the voices of children on the street every day, in a movement for civil rights and equality, the largest in 50 years, because it is a movement of empathy for George Floyd.
I really feel that it is the human heart that is actually the biggest catalyst for change, and for me, that’s the power of film. This idea that we get to explore people and lives that are much different than ours, and that we are compelled to empathize with them. It is that power of empathy, that power of feeling, that actually makes real change.
In the research for this film, it was actually more difficult than you’d think to find people who were aware that they had been a victim of algorithmic bias, and so I did rely a lot on the research of the women in the film to suss out these real stories.
7R: You’ve said that AI is the new battleground for civil rights. Can you elaborate on what that battleground might look like?
Shalini Kantayya: Absolutely. I feel very disconnected from the word ‘privacy’. It feels like a very privileged idea. It vaguely has the connotations of property rights. People have this idea that, if I have nothing to hide, it’s OK. What I discovered in the making of this film is that it is actually about massive invasive surveillance structures that we have no rights or protections around.
The kind of information that private corporations like Google or Facebook have about us makes the Stasi and illegal programs inside the FBI, like COINTELPRO, look like they had a light touch, like they were kind of cute. This is when I began to realize that there are algorithms and automated systems that are making decisions about, you know, how long a criminal sentence someone will serve, who gets health care and who doesn’t, or who gets better quality healthcare; who gets hired and who doesn’t, who gets into college.
These AI systems are often the most invisible, automated, first-line gatekeepers to every opportunity we have, and they are rarely vetted for bias, or even for accuracy, to be frank. People believe in this sort of magic, that if the machine says it, we trust it. We have this implicit trust in machines, and we have to change the way we think about these systems.
7R: You speak about “algorithmic determinism” in Coded Bias, a concept that has made its way into popular culture recently with shows like Alex Garland’s Devs and the third season of Westworld, but I think it is still a “high concept” for most people. Could you explain what you mean by that term?
Shalini Kantayya: First of all, I’m a lay person. I’m not an AI researcher. I like to speak to those people at the bar. I call myself a barstool scientist. This idea of “algorithmic determinism” is one that runs through my film, because I began to realize that we are creating systems that will govern our future based on data from the past, with all its inequalities.
Meredith Broussard says it so well in the film: we will not have social progress if we rely on all these systems to replicate data from the past. Even with our Google searches, you know, it’s basically saying that what I’ve searched for in the past is going to predict what I see and what I want to see. Well, what if I want to see perspectives that are radically different from mine? So there’s a way in which it becomes a deterministic model of creating the future.
7R: In Coded Bias, like in Westworld season 3, the AI that determines our future is given a voice and an embodiment. It speaks to us about itself and the world. How did you approach that concept in a documentary?
Shalini Kantayya: The AI starts as a very factual voice. It’s actually the voice of Tay, this chatbot that was unleashed online as a young woman, based off of teenage girls’ data, basically to chat with people. Within 16 hours [of being] online, Tay became a racist, sexist, antisemitic monster. What I wanted to do was make a clear visual differentiation between the factual Tay, because those are actual transcripts from the chatbot, and the morphing of that voice to say, “This is an AI that got out of control. Do you really want that same technology to decide who gets health care? Or to decide who gets hired? How long a prison sentence someone serves, or what risk someone poses to society?” And yet we have implicit trust in these AI systems. I wanted to draw a connection between the actual voice of Tay and all the forms of AI now permeated by that same technology.
7R: You also use a lot of digital effects that replicate the look of surveillance cameras. Why did you decide on that for Coded Bias?
Shalini Kantayya: One of the challenges in making my film was how to make invisible systems visible to people. Showing the point of view and the voice of the AI became really important in the film. Through all this technology, through graphic effects, I’m trying to show the capabilities that already exist today.
7R: I feel like these digital effects, and especially the “embodiment” of the AI, are very effective in the documentary format. Are you actively pursuing creative non-fiction in your filmmaking?
Shalini Kantayya: Well, to be truthful, my first love is actually science fiction, and I think my love and celebration of that comes across, hopefully, in the film. Also, science fiction is a way that we can imagine the future.
It isn’t in the film, but in my research, it became clear that, even from its inception, science fiction writers and AI researchers were collaborating and in conversation during the early development of the technology. Arthur C. Clarke (author of 2001: A Space Odyssey) collaborated with Marvin Minsky, one of the fathers of AI at MIT. Basically, these guys would get together, and Arthur C. Clarke would say, “I want to build an elevator to the sky,” and Marvin Minsky would get six months of funding to actually explore that idea.
I think that many AI researchers are fans of the genre, but also, science fiction plays such an important role in how we imagine the future. It was really important for me to pull from those well-known devices in our imagination about AI, so that we could reflect on the AI of the now.
7R: Does the film camera have a responsibility or a role in the evolution of facial recognition?
Shalini Kantayya: For me, the camera has been an empowering tool. We know about atrocities like the murder of George Floyd only because a camera was rolling. What I think is dangerous is when real-time facial recognition starts to be deployed on CCTV networks.
New York and London are two cities with massive camera networks all over the city. If you can imagine all of those having real-time facial recognition, it would really change the quality of life in our democracies. We have to be very careful about this facial recognition technology, because once it is available, you can start to match facial recognition to social media profiles. All of a sudden, just from seeing someone on the street and taking their picture, you have their complete social profile. That doesn’t seem right in a democracy. That doesn’t seem safe.
Someone from the ACLU said, “In a healthy democracy, we should know as much about our government as possible, and the government should know very little about us.” I think that is slowly being eroded in our society.
7R: The subject you follow in China says that she is not only OK with this pervasive surveillance and facial recognition tech but believes it makes her life easier, because the system set up there can show you who has a “good” or “bad” score, so you don’t have to put energy into getting to know the wrong people.
Shalini Kantayya: Absolutely, and it’s something that’s happening to us already. We are all being scored in so many different ways. How many times have you judged somebody on the basis of how many Facebook, Instagram, or Twitter followers they have? People’s merit is increasingly being judged by these scoring systems, even in western democracies.
I think having that in the film, which to me is sort of a Black Mirror episode inside a documentary, shows us where we are all going as we come to love Big Brother in the way that we all have.
7R: For me to talk to you like this, I have to turn on my built-in microphone and remove the tape over my camera. Is this simply paranoia? Does it help? Is it a false sense of privacy maybe?
Shalini Kantayya: I think we need to do everything we can, and we need better protections than tape over the camera. Citizens can do whatever they can to protect themselves. Protesters are putting on make-up to distract facial recognition cameras when they go to protests. I think people should resist in every inventive way that we have available to us, but we need to push for a massive public understanding around the technologies we are interacting with every single day.
I hope this film will be a kind of Inconvenient Truth of algorithmic justice, in the way that it hopes to bridge scientific understanding with the people most marginalized by these technologies.
I really do feel that data rights are the unfinished work of the civil rights movement. You are European; you live in a civilized society with some degree of GDPR protection. I live in the US, where this is literally the most unregulated sector of society. It is essentially the Wild West. And this is the home of these technology companies, so we actually have to push forward.
I’m very heartened by IBM saying it will not research, deploy, or sell facial recognition technology. Amazon just said last week that it will press pause on selling facial recognition to police, and Microsoft followed suit and said it would not use or sell facial recognition tech to police departments.
The scary part is that they were already selling this technology. It isn’t as if it was in beta and they were waiting to see its impact on society before they deployed it. They were already deploying it and selling it to law enforcement — by the way, without a single person in an elected position overseeing that sale. We have a situation where the military and law enforcement are just picking up a tool and experimenting on people’s rights.
I think the European regime could certainly be stronger, and I’m grateful to the GDPR for existing and giving us an example to follow. Some of us have rights in the US because our information ran through Europe, so thank you for pushing for that. I hope that Europe will lead, because you tend to be the leaders right now, on a stricter and more comprehensive data-rights-as-civil-rights-as-human-rights regime, because we need protection as citizens against the untethered power of big data and big technology.
7R: Do we have any autonomy at all over our own internet and tech privacy?
Shalini Kantayya: I don’t think we do. We haven’t been given a fair shake, which is why we shouldn’t have to sign away our civil rights in terms and conditions to Facebook. They shouldn’t have untethered access to our faces.
Especially since Covid-19, we’ve seen so many of our public forums move online. We would never consider walking into a library that takes our biometric data, but we accept it on Zoom or Facebook. We don’t yet have a social contract or basic citizens’ rights around the use of these platforms, and there needs to be one.
I’m grateful that these companies put these technologies down, but frankly, it should not be up to them whether they do. Often people ask, “Are there good uses of facial recognition?” What I say is, “We’re missing the point. Who gets to decide that?” It should certainly not be people who have an economic interest in that technology.
7R: Almost all your subjects in Coded Bias are women, and most of them, women of colour. Was this a conscious choice or was it a natural evolution?
Shalini Kantayya: I’m certainly conscious, as a woman of colour, who I position as thought leaders and experts in my films. I tend to focus on voices that are often out of the dialogue, but this was different. This was different because I actually think the people who are leading this movement are women and people of colour. That’s what I discovered in the making of this film.
All the people in my film have an incredible amount of expertise. They are incredibly astute. I think there are seven PhDs in the film — like advanced degrees in mathematics from Harvard — and they are extremely qualified. But they also had another identity that was marginalized, which allowed them to see the technology from a different perspective. They were women; they were people of colour; they were queer; they were Jewish. Something in their identity let them step back and say, “Wait a minute, how is this going to work on the marginalized?” As Joy finds out in the film, the system wasn’t optimized for her. What could be the unintended harms and consequences of these technologies? It can’t be up to big tech to figure that out.
I do want to say that I think my gender ratio is exactly the same as most technology movies; it’s just flipped. I’m always amazed that, almost every time I screen this film, invariably someone asks why it is all women and people of colour, and yet we are willing to accept, more times than not, an AI film that features all white men without questioning it.
That is part of the problem. Women are not a minority, but they are in AI development, where they make up 14% of researchers. I don’t have the statistics on people of colour. We need to push for more inclusion in the conversation about the technologies of the future.
7R: This is very clearly an intersectional issue, but is it handled like that at the top levels? I know Alexandria Ocasio-Cortez appears in the film, but are the people dealing with this issue now mostly white men?
Shalini Kantayya: I think there is also another gap, which is that people in government don’t understand these issues. They don’t even know how Twitter works. A lot of the people in our government are older. There is just this gap in understanding, and like I said, the sector is completely unregulated. There is a big lobby in our government. We have to push for legislation, and for legislators to understand these issues and how they dovetail with civil rights and democracy.