
It’s the structures, not the tech: Os Keyes

INTERVIEW! Facial recognition expert Os Keyes on how it's the power structures behind AI, not artificial intelligence itself, that could cause serious harm if left unchecked. Hear Keyes speak at Disruption Network Lab on Jun 15 at 18:30.


© Dorothy Edwards/Crosscut

Disruption Network Lab returns to Kunstquartier Bethanien June 14-15 with another critical reflection on the use of technology – AI Traps. This time DNL looks at artificial intelligence and asks: how does AI reinforce prejudice and influence political discourse and action? Ph.D. student Os Keyes researches data, gender and infrastructures of control – with a particular focus on facial recognition – and joins the panel The Politics of AI (June 15, 6:30pm) to explore how bias in AI can be countered.

How did you develop an interest in AI?

Before starting my Ph.D. programme I actually worked as a data scientist for three years at a couple of different companies – one the organisation that runs Wikipedia, the other in information security. Hanging around in those spaces, working with those people and dealing with data on such a massive scale, in ways that were extremely sensitive, made me aware of the ethical implications of all of this. At the information security company I was building machine-learning models for a living… and becoming aware of the fact that I was building systems to run when I wasn't paying attention – they would go off making decisions based on rules I had programmed into them and data I had given them.

You once said it was a university paper titled “White, Man, and Highly Followed: Gender and Race Inequalities in Twitter” that initially got you interested in facial recognition… because it made you so mad. Why?

This paper used facial recognition on Twitter profile pictures to analyze race and gender on social media. Anyone whose picture the system could not analyze was discarded from the data set.

And these people went off and built a big data set on how people of different races and genders interact. My problems were purely scientific: the study allowed for only three races and two genders, and showed no awareness that the information you have about a user can be more than just a profile picture. The text in their profile, for example, could lead to a completely different gendered conclusion [see the sketch below].

But my greatest annoyance was actually that no one else had noticed this flaw – that the study was fundamentally incapable of handling the existence of trans people. That kicked off my interest, and I wrote a big paper analyzing all the literature on gender recognition. Inevitably, I found that facial recognition was trans-exclusionary.
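To make the flaw Keyes describes concrete, here is a minimal sketch – with invented toy data and a stand-in classifier, not the study's actual pipeline – of how silently discarding everyone a face classifier cannot read skews the sample the study then analyzes:

```python
# Toy profiles; "gender" is the ground truth the study pretends not to know.
# (Hypothetical data for illustration only.)
profiles = [
    {"user": "a", "gender": "woman"},
    {"user": "b", "gender": "man"},
    {"user": "c", "gender": "nonbinary"},
    {"user": "d", "gender": "man"},
]

def classify_face(profile):
    """Stand-in for a gender-recognition model: returns a binary label,
    or None when it cannot produce one (e.g. for trans or nonbinary
    users it was never designed to represent)."""
    if profile["gender"] in ("man", "woman"):
        return profile["gender"]
    return None  # classification fails; this user becomes invisible

# The flaw: every classification failure is silently dropped, so the
# "population" being analyzed structurally excludes exactly the people
# the model cannot handle.
analysed = [p for p in profiles if classify_face(p) is not None]

print(f"kept {len(analysed)} of {len(profiles)} users")
# -> kept 3 of 4 users: any conclusion about "everyone on Twitter"
#    rests on a sample that excludes whoever the model cannot read.
```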

Do you have any other examples where a transgender person directly experienced gender bias because of facial recognition?

So one example might be illustrative, as small as it may seem. There are these gender recognition apps. The idea is that you can do gender-flipped photos, which everyone thinks is hilarious, and gender detection and all the rest. A while ago I met a group of trans people who had run into these systems and had been playing around with them. How unhappy some of them were that the system had told them they were the wrong gender was absolutely heartbreaking – and entirely understandable, right? Because it's literal physical infrastructure saying, ‘No, you are not real, you don’t count’, while claiming to be scientific and drawing on scientific authority. And as small as that might seem, if you think about the ways it is going to be built into other existing systems where gendering takes place, you can see the potential for that heartbreak to become a lot more ubiquitous… just imagine human-robot interactions where the robot determines whether it should address you as ‘miss’ or ‘sir’.

So where is facial recognition mostly being used and what are the tangible risks? 

The answer is: we don’t know where, and that’s incredibly worrying. It should worry us that it’s happening and we don’t know where, but it should also worry us that just because it’s being done by random private companies and not the state does not mean it isn’t incredibly risky. First, because of how much of the world is dominated by private companies. Second, because data from private companies doesn’t stay in private companies’ hands for long. Any time a national security crisis or a policing issue comes up, companies tend to defer to the state. And when they do, all the data they were gathering – which you did not know they were gathering – suddenly ends up in the hands of the government. It is everywhere, it is increasingly common, and I am very worried by the fact that we don’t recognize how ubiquitous it is, and that who has access to that data can change at the whim of the state.

So is AI going in the wrong direction? 

I can see it going in one of two directions. If we actually make the structural changes we need, AI is a useful tool. It is a thing that can be used with consent, with permission and with contextual awareness, in a way that frees human beings up to concentrate on the work and activities that can’t be automated.

In a world where we don’t make the structural changes we need, it looks much the same in theory, but there are two major differences. The first is that we automate more than AI is actually good for. The second is that we become more comfortable with, and numb to, the ubiquitous data collection, the ubiquitous surveillance, the idea that systems should tell us how best to run our lives. That is a very heavily controlled society, one that does not have room for multiple ways of doing things.

So how do we ensure that we don’t go that second route and how do we keep AI beneficial?

The problem is not AI – AI is the symptom. The problem is the structures that are creating AI. We live in an environment where a few highly wealthy people and companies hold all of the power, and where state infrastructures are designed to preserve themselves first and foremost. The reason we are seeing these issues in AI is that its development is driven by exactly the same people.

So to fix AI, one of the things we can do is change the requirements and expectations around it – like building in the expectation that it will make the world actively better. We can insist on consent before AI is deployed, grounded in explainability and transparency. But ultimately, if all we do is fix AI, then in 10 years we will have exactly the same debate over another technology.

Disruption Network Lab: AI Traps, Jun 14-15 | Kunstquartier Bethanien, Kreuzberg. See website for complete programme.