‘Black Communities Are Already Living in a Tech Dystopia’ - CounterSpin interview with Ruha Benjamin on racism and technology
view post on FAIR.org
Janine Jackson interviewed Ruha Benjamin about racism and technology for the August 9, 2019, episode of CounterSpin. This is a lightly edited transcript.
Janine Jackson: Welcome to CounterSpin, your weekly look behind the headlines. I’m Janine Jackson.
This week on CounterSpin: Listeners may have heard about the electronic soap dispensers whose light sensors can’t detect black skin, Google and Flickr’s automatic image-labeling that—oops—tagged photos of black people with “ape” and “gorilla.” An Asian-American blogger wrote about her Nikon digital camera that kept asking, “Did someone blink?” And you can, I’m afraid, imagine what turns up in search engine results for “3 black teenagers” versus “3 white teenagers.”
Some examples of discriminatory design are obvious, which doesn’t mean the reasons behind them are easy to fix. And then there are other questions around technology and bias in policing, in housing, in banking, that require deeper questioning.
That questioning is the heart of a new book called Race After Technology: Abolitionist Tools for the New Jim Code. CounterSpin spoke with author Ruha Benjamin; she’s associate professor of African-American studies at Princeton University, and author, also, of the book People’s Science: Bodies and Rights on the Stem Cell Frontier. Ruha Benjamin, today on CounterSpin. That’s coming up, but first we’ll take a very quick look at some recent, relevant press.
***
Janine Jackson: Some 40 million people in this country use fitness trackers or smart watches that monitor their heartbeat; it’s a cultural phenomenon. Both the New York Times and the Washington Post reported earlier this year that employers are increasingly using such devices to monitor—or, you might say, surveil—workers’ exercise, in hopes of cutting healthcare costs.
But neither outlet has shown interest in findings reported late July on Stat, the tech and health issues website, that nearly all of the biggest makers of wearable heart trackers use a technology that is less reliable on dark skin.
There have been consumer complaints. And while the researchers and scientists Stat spoke with made clear that there isn’t a lot of research into the heart trackers themselves, the green light technology they use, and its relationship to melanin, is well-documented. So much so that more research and more public information from manufacturers, who generally do not disclose concerns about accuracy, would be needed to make a case that there isn’t a problem here.
Of course, there are serious implications, not just for those monitored employees—some of whom have insurance premiums and vacation days pegged to their use of these devices—but for the growing amount of research, including medical research, that uses this data.
But so far, the reporting by Ruth Hailu, an intern at Stat and a college student, has been picked up by tech sites, and that’s pretty much it. So big media will still tell you what color Fitbit you might buy, just not what color you might need to be to use it.
There are efforts to require companies to determine whether their algorithms discriminate, including a bill introduced in Congress this spring, called the Algorithmic Accountability Act. It would call on some companies, especially those whose decision systems have a high impact, to conduct impact assessments, pushing them to think more deeply about design.
But as an op-ed in the New York Times noted, the legislation relies for enforcement on the FTC—not famous for enforcing settlements, even with repeat violators. It lacks an avenue for public input—which, as in the Fitbit case, is sometimes the first way we learn about these problems. And companies are notoriously cagey about proprietary information; without mandated transparency, the legislation doesn’t guarantee that anything learned would actually be incorporated into policy discussions. A start, then, but far from enough.
We’re talking about code on the show today. And, in a way, news itself is a kind of code, a machinery or system seen as objective or neutral, that gains power from that perceived neutrality, yet nevertheless reproduces and advances inequity. It works in many ways, one of which is trivializing or ignoring the ways people of color are overlooked or treated as an afterthought, including when it involves something as significant as our health.
You’re listening to CounterSpin, brought to you each week by the media watch group FAIR.
***
Janine Jackson: As a media critic in the 1990s, you could reliably expect every talk about the censorious effects of elite media’s corporate and state fealty to be met with the question, “But what about the internet?”
There was an earnest desire to see the power inequities reflected in corporate media—the racism, sexism, class and other biases—designed out of existence by some new delivery mechanism. You could say the same desires, denials, conflicts and questions are writ large in US society’s engagement with technology generally, even as we see robots and algorithms replicating the same problems and harms we have not conquered societally. There’s an insistence that technology is a kind of magic, that of itself can get us somewhere we can’t get on our own.
A new book interrogates that belief and the effects of it. It’s called Race After Technology: Abolitionist Tools for the New Jim Code. Author Ruha Benjamin is associate professor of African-American Studies at Princeton University, and author, also, of People’s Science: Bodies and Rights on the Stem Cell Frontier. She joins us now by phone from Los Angeles. Welcome to CounterSpin, Ruha Benjamin.
Ruha Benjamin: Thank you so much for having me. I’m thrilled to be here.
JJ: I mentioned, at the very top of the show, things like electronic soap dispensers that couldn’t detect black skin, or a camera that thinks that Asian people are blinking. Your book cites the classic Allison Bland tweet about Google Maps telling her to turn on “Malcolm 10 Boulevard” (though you talk about how that was seen, really, as an engineering victory, to get a computer to read a Roman numeral as a number). Some discriminatory design problems are easier to explain — or harder to deny, I guess — than others, like where you see this presumed neutrality of whiteness.
But part of the hurdle of engaging the questions that you engage in this book is that it involves understanding racism after (as before) technology as not having to do with intent, or with intentionality; you have to dig very deep to even start the conversation, don’t you?
RB: Absolutely. I think for many people, as much as we’re becoming accustomed to talk about and think about implicit bias, for many people, racism is still really about interpersonal interactions: It’s about racial slurs, it’s about hurt feelings. And so it’s hard to understand patterned behaviors, the institutional, the legal, the policy level forms of racism, that don’t rely on malicious intent. Of course, oftentimes, that is still in the mix for many things.
JJ: Right.
RB: But, you know, it’s really difficult for people to zoom the lens out, and think in a more patterned, systematic way. I think once we do, however, it becomes much easier to understand how technology is part of the pattern. It’s one of the mechanisms, in addition to laws, in addition to economic policies, in addition to, let’s say, redlining, housing-discrimination policies, to see that technology is one of the facets that we have to take into consideration. But, as you say, a very simplistic understanding of what racism is prevents that jump in many ways.
JJ: One of the things that I found frustrating when I cite that “What about the internet?” example, was the relief that I could hear in people’s voices, you know? “We don’t have to deal with all of that systemic unfairness you just talked about, because the internet is going to flatten it all out.”
RB: Absolutely. Absolutely.
JJ: And I have a lot of hope for the internet, but it was that sense of the inevitability that was dispiriting. But the power of technology is very seductive, in what we imagine it could be used for.
RB: It is. It is.
JJ: And I just wonder, where do you start folks when you’re trying to explain, for example, what you’re getting at with the term, “the new Jim Code”? You’ve just moved toward it, the continuity there; where do you start folks out?
RB: As you say, there’s a real hunger among many people, especially those who benefit from the design of our current systems, to bypass and jump over the difficult work of actually wrestling with these histories and ongoing forms of deeply embedded discrimination, bias, racism, white supremacy, that infect all of our institutions.
And so to the extent that technology offers a really alluring fix, to jump over that, and not really get in the mud of our history, it becomes something that people jump to: “The internet is going to save us,” or “Some new app is going to save us.”
And so what the “new Jim Code” intends to invoke is this history of white supremacy, racism in this country. It’s to say that we cannot fully understand what many people call machine bias or algorithmic discrimination—which are still ‘softening’ terms when it comes to white supremacy—unless we actually go through the mud of this history.
So it’s basically saying we can’t take an ahistorical approach to technology’s role in sedimenting forms of inequity and hierarchy; we have to go through this history to understand the present.
JJ: Just talk about some of the instances; the book looks at the different shapes this takes. Some are designs that are actively amplifying existing inequities, some are designs that simply fail to notice them, and thereby replicate them. What are some of the instances of this kind of algorithmic, machine or embedded bias that folks might not be aware of?
RB: So I’m talking to you from Los Angeles, which—I don’t live here now, but I grew up here, so I come back on a regular basis—and the way that I open up the book and the preface is just to go back to my childhood, growing up in South Central Los Angeles, which is and was a heavily policed part of the city, in which helicopters could be heard grumbling, rumbling overhead at all hours. Routinely police were pulling over my classmates and frisking them at the gates of the school.
And so this is what Michelle Alexander would say is one tentacle of the New Jim Crow, this mass incarceration, in which specific geographies in our country are targeted in terms of policing. Whereas in Westwood, where I studied for a couple of years, all kinds of things are happening behind closed doors in those neighborhoods that never see the light of day.
JJ: Right.
RB: And so that is one mode of white supremacist institutional racism, in which police are targeting some areas and not others. Now fast forward to the present, where people, companies, organizations are employing all types of software systems to predict where crime will happen, in order to—we’re told—more empirically, more neutrally, more objectively decide where to send police: predictive-policing types of programs.
But these software systems rely on historic data about where crimes happened in the past, in order to train the algorithms where to send police in the future. And so if, historically, the neighborhoods that I grew up in were the target of ongoing over-policing and surveillance, then that’s the data that’s used to train the algorithms to send police today, under the cover of a more objective decision-making process.
And this is happening in almost every social arena. Policing is just one of the most egregious, but it’s happening in terms of education, which youth to label “high-risk,” for example; in hospitals, in terms of predicting health outcomes; in terms of which people to give home loans to or not, because of defaulting in the past. And so our history is literally being encoded into the present and future.
And the real danger that I try to highlight in the book is that it’s happening under the cover of a kind of veneer of objectivity, in which we’re less likely to question it, because it’s not coming from a racist judge sitting in front of you, or a racist teacher who’s doing something. They’re turning to a screen that’s producing some results, that says, “You are high risk,” or “You are low risk,” you know, and then they’re acting on that.
And so I really want us to become attuned to this intermediary that is not, in fact, objective in the way that we are being socialized to believe it is.
JJ: When you break it apart, it doesn’t seem that difficult to see the flaw in something like predictive policing. If police are deployed disproportionately to poor communities of color, then that’s where they make the most arrests.
So if you fill a database with that, and then you say, you know, “Alexa, where is the most crime, based on the number of arrests?” Well, it’s going to circle you right back to the data that you fed it, and it’s only predictive because you make it so. But people generally understand the idea of “garbage in, garbage out,” don’t they? What is it where there’s resistance to understanding that there’s a problem with this?
RB: That’s a great question. And as someone who spends all day teaching classes on this, part of the conceit of being an educator is thinking that, if you just say, “Here’s the data,” that will lead to certain conclusions. And what we know is that the numbers, the dots that we’re trying to connect, don’t actually end up speaking for themselves in the way that we hoped, because people filter the story, or filter the data, through all types of interpretive lenses.
So here, I’m thinking about some colleagues up at Stanford who produced a series of studies, both in California and New York, in which they presented white Americans with data on racial disparities in the criminal justice system. They showed them the much higher rates of imprisonment of black Americans, and then asked them whether they would be willing to support policies that would address that: in California, whether they would support reforms of the three-strikes law, and in New York, whether they would support changes to the stop-and-frisk policy.
And the researchers were quite surprised to find that the more data individuals were exposed to about the disparities, the less willing they were to support the reforms. And so at some point, between the data and the policy conclusion, people are filtering this information through all kinds of stories and interpretive lenses.
One of the most powerful is that, “Well, if there are more black people in prison, then they are more criminal. And they actually deserve to be there. And so why would I support reforms to laws that would actually endanger me?”
JJ: Right.
RB: And so we can take that to the realm of predictive policing. Many people will be hearing the same dots that we’re trying to connect, in terms of “garbage in, garbage out,” historic overpolicing, the flaws of predictive policing, and they’re not drawing the same conclusions as we are. In the minds of many, that historic overpolicing was justified; therefore the future predictions are something we should definitely be acting on, and actually incorporating in many more locales, in order to safeguard their own neighborhoods and their own safety.
JJ: Yeah.
RB: So just looking to the flaws of the technology is not going to save us, and it’s not going to help us reach certain conclusions, if the political stories that we’re telling, the social stories that we’re telling, if they themselves are so infected by white supremacy, that it distorts the conclusions that people draw.
JJ: I would say it means people don’t even understand what you are saying when you say “disproportionate.” I think we have to explode that term for people, because they just don’t hear it in the way that it’s said.
Well, there’s every reason to focus on white supremacy. And it’s maybe easier to convince folks of that than it has been for a minute. But it’s also true, as you illustrate in the book, that in some ways, black people are canaries in the coal mine when it comes to this kind of technological, I don’t even know what you want to call it, malfeasance or harm. It goes beyond racism.
RB: Absolutely, absolutely. Yeah. So there’s a way in which we can think about the groups that first experience harm or neglect in any kind of inequitable system, and how we should be learning from that experience.
So in some ways, the dot I try to connect is that black communities are already living in the future of a tech dystopia when it comes to policing, or when it comes to all kinds of algorithms that are making life-and-death decisions.
JJ: Right.
RB: And yet, in part because blackness and black people are already, in the collective imaginary, disposable, those lessons are not being learned. And so we’re not actually seeing it coming, in many ways. And so I’m hoping that we engage in more—not just conversations, but the kind of organizing that’s happening around tech justice in many locales, including here in LA and throughout the country. I think that we have to turn more to some old-school forms of political organizing, rather than look for tech fixes and tweaks around the edges to address the new Jim Code.
JJ: Well, that was going to be my next question. So much work is happening here: San Francisco and Oakland banning facial recognition technology, the Amazon and Google workers protesting their own bosses’ collaboration with ICE, and then groups like Center for Media Justice and Center on Privacy and Technology doing their thing.
Are people grabbing it by the right end, do you think? Is the pushback radical enough? What more needs to happen?
RB: I definitely think that there’s a strand of the organizing that is really thinking in terms of a fundamental reorganization of our relationship—society’s relationship—with technology, with tech industries. And so there are a lot of great things happening, some of which you mentioned. And in addition to organizing, there’s a growing critical legal community that’s thinking about, “How do you litigate algorithms?”
JJ: Right.
RB: And so there’s a growing movement among legal scholars, as you said, there’s legislation being passed. And then there’s just really popular education examples that are happening. There’s a wonderful organization here in LA called Color Coded, that does a lot of community workshops, along with Stop LAPD Spying.
In every locale, whoever’s listening, I’m sure you could find a group or organization working to deal with this at a very fundamental level.
And so, towards the end of the book, I try to draw some simple contrasts to help us discern the difference between a tech fix, something that on the surface seems like a solution or a way of addressing bias, and a much more fundamental questioning of not just the technology, but the infrastructure and the society in which the technology is being deployed. Because if you have just a tech fix that is still operating and circulating within the same structure, then the power hierarchies are not being challenged.
And so when we look at these different apps that are trying to deal with mass incarceration and imprisonment, there are some that are really just funneling more investment into mass incarceration and the prison system, and some that are actually being used to get people out of cages and take a much more abolitionist approach to the process. And so I think we have to become much more discerning when we’re presented with a solution, to really look beyond the surface and think about the social infrastructure within which solutions are being proposed.
JJ: Finally, you talk in the book about “reimagining the default settings” as a key idea, and I’m guessing that this is part of what is explored in the new collection that you’ve just edited, out from Duke University Press, called Captivating Technology: Race, Carceral Technoscience and Liberatory Imagination in Everyday Life. I wanted to ask you, finally, what’s “liberatory imagination”? Tell us a little bit about that.
RB: Absolutely. So, when we think about where the solutions lie, where the problems lie, we’re more or less trained to look to policy, to laws, even to organizing. And through the idea of liberatory imagination, I want to draw our attention to the way that, even before you materialize some technology, there’s an imagination that engenders it as an object. There are ideas, values, embedded in our material infrastructure. And as of now, there’s a very small slice of humanity whose imagination about the good life and a good world is being materialized.
And the vast majority of people, they are creating, they are imagining, but have far fewer outlets in which to materialize that. And so what I’m trying to invoke through the idea of liberatory imagination is the fact that we need creative solutions that, on one hand, look like artists and humanists getting involved. You know, many people hear a conversation about technology and they think, “Well, I’m not a tech person.” We’re trained to opt out; we’re socialized to think of some people as having the expertise to engage in this discussion and others as not.
But the fact is, these things are impacting everyone, and that means everyone has the right and the prerogative to weigh in and to say, “We don’t want certain things.” We don’t just have to tweak the edges, we can actually have a kind of informed refusal.
And so liberatory imagination is about reclaiming the space, to say that we’re moving beyond critique. We can critique, and we need critique. But the counterpart of critique is that we want to be creative in terms of presenting alternatives to the techno status quo, and materialize the kind of world that will actually be liberatory, in which everyone can flourish and realize their potential.
And so I want us to make space for that, in addition to critique, and I point to a number of examples of people already doing that, and I’ll just maybe close with one. It’s a kind of parody project, going back to our conversation about predictive policing, in which a few people got together and said, “What if we turned this idea of predictive policing on its head, and had a white-collar-crime early warning system, and created heat maps of cities all across the country to predict where crimes of capitalism are likely to occur?” And you would have an app that, when you go into a city, would alert you. And then they created an algorithm to predict the average face of a white-collar criminal; the designers of this used 6,000 profile photos from LinkedIn, CEO photos, so that the average face of a white-collar criminal is white.
And it’s funny on one level, but it’s not, because we know that the same types of technologies are targeting darker-skinned people, black and Latinx communities. And so this is a way of using art to actually reflect on the reality that we’re creating right now, and think about where the harms are being monopolized and centered.
And so I guess the last thing I would say, in terms of the default settings: So many of the technologies that are being developed now are about predicting risk, whether it’s who will be a risky person to loan money to, or criminal risk, or any of the other domains of risk. And what changing the default settings would look like is shifting away from thinking about risky individuals to thinking about how risk is produced by our institutions, by our policies, by our laws. So let’s look at the production of risk, rather than the individuals; if we’re going to predict anything, let’s look at how schools create risk, hospitals create risk, police create risk, rather than the individuals. And that’s flipping the switch on how the settings of our technologies are really burdening individuals, rather than actually holding institutions and organizations accountable.
JJ: We’ve been speaking with Ruha Benjamin, associate professor of African-American studies at Princeton University. The book is Race After Technology: Abolitionist Tools for the New Jim Code. It’s out now from Polity Press. Ruha Benjamin, thank you so much for joining us this week on CounterSpin.
RB: My pleasure. Thank you so much for having me.