Rashida Richardson Explains 'The Social Dilemma'

Screens and Nightmares

Civil rights lawyer and researcher Rashida Richardson discusses how The Social Dilemma resonates in an era of COVID and crisis.

4 December 2020 · 12 min read

Tech innovation has created complications for our democracy, our mental health, and our society. We sit at a pivotal moment: We’re in the midst of a pandemic that’s furthering the divide between the haves and the have-nots. It’s pulling us further into the filter bubbles created by the very same algorithms that promised to make the world more open and connected. At stake: the sway of misinformation, a restructuring of truth, the future of civil rights. For these reasons, Rashida Richardson finds herself pretty busy lately.

Richardson is a civil rights lawyer who focuses on the intersections of technology, race, and society. She got her start at Facebook in 2011, where she lasted all of three months, realizing that she could have more impact outside of the Silicon Valley bubble. She has since become a thorn in the side of powerful tech companies, challenging lawmakers and executives to pay attention to an often-overlooked aspect of tech: who loses in the race to innovate.

Richardson is currently a visiting scholar at Rutgers Law School, researching how A.I., big data, and data-driven technologies are implemented in society. She previously served as the director of policy research at the A.I. Now Institute, and she worked on privacy and surveillance issues for the A.C.L.U. Most recently, she was featured as an expert in Jeff Orlowski’s documentary The Social Dilemma, about the destructive impacts of social networks on our lives and behavior.

The documentary premiered in the months before the global pandemic lockdown, but the current environment has only exacerbated problems arising from tech innovation: government surveillance, facial recognition practices that disproportionately impact minorities, algorithmic bias. For Richardson, calling attention to the unintended consequences of technology has become a civil rights matter. I recently spoke with her about those consequences, and about the growing resonance of The Social Dilemma.

Laurie Segall: You got your start about the time that I got my start covering technology — right out of the Recession. It was such an extraordinary time. You had these companies and these economies that were coming out of scarcity. It was almost like this Wild West.
Rashida Richardson: You’re dealing with innovation at a time when Facebook’s slogan, “Move fast and break things,” wasn’t understood as a problematic slogan. What gets broken, who deals with the burden of the things that are broken, and is “broken” a good thing? I’m abstracting it to a certain degree, but we’re now dealing with the burdens of that inability to see the downside of innovation at all costs.

When The Social Dilemma came out, it struck a chord culturally. Why do you think that is?
RR: The documentary helped humanize the issues and break down certain concepts that aren’t typically in basic media coverage, like understanding how an optimization algorithm works. Also, during the global pandemic, people’s views of technology have become more complicated. Society as a whole has been grappling with the relationship between individuals and technology, but COVID really lit a fire under this. Most of our lives are mediated through some form of technology, whether it’s Zoom or the fact that kids with computers are on them for most of the day for education. Everyone is questioning their relationship with technology in a new way.

Not only are we spending more time on screens, but the pandemic is also furthering a digital divide between the haves and the have-nots. What do you think the unintended consequence of this moment will be?
RR: I’m honestly very concerned, because as much as there is more awareness about the problems with technology generally, I don’t think people understand how embedded it is in many different sectors. To take one example, there are tons of education apps that kids are using for remote learning, but I don’t think people understand how data that’s being collected through these apps can have long-term ramifications. We need more consumer education, but it’s hard to grapple with potentially terrible practices if you have to use technology and it’s not an opt-in situation.

As someone who engages with policymakers, I find it very concerning how much of a knowledge gap there is among those in the most powerful positions. The types of questions we've gotten at the big tech hearings over the past two years demonstrate how out of touch a lot of our representatives are when it comes to understanding the issues and the gravity and urgency of addressing them.

Many of the people who created these problems and designed these products are raising red flags. Why now?
RR: There’s been a lot of robust advocacy happening, especially on the state and local levels — whether we’re talking about bans and pushbacks against facial recognition, or even hiring algorithms. In some ways, “why now” is a result of many years of trying to force these issues to greater awareness on the Hill and elsewhere. I hope that policymakers understand that changes need to happen yesterday for us to not go into a nightmare scenario.

Which would be... ?
RR: There are many different nightmare scenarios. On the most basic level, there’s the deepening of inequalities that can happen through the use or expansion of technologies. Then there’s the level of government and community control that’s gone if all cities become smart cities, with companies controlling all our major infrastructure. What does that mean for community members and those in government who have become reliant on private enterprise? Who’s in control of our public life at that point? I’ll stop there. That seems dark enough.

You’ve said that we need to have uncomfortable conversations to ensure that technology is responsible. What is an example of an uncomfortable conversation that we need to have?
RR: I’ll bring up facial recognition because it’s a technology most people are aware of. A lot of the time in policy advocacy, I emphasize that the technology doesn’t work. I don’t think people get that. But even if we get to a point where we have 100 percent accuracy, there are still problems with use. Some people are O.K. with the technology being used to detect a criminal, and they don’t understand that that is a problematic construct: What does a criminal look like? Why do you think it’s O.K. for a technology to have that detection capability? I push for us to start to question some of these statements around technology that we accept as ground truths.

Many of us have lived our lives on social media. Our data is out there. Whether or not you like it, you’ve opted in.
RR: People need to think more critically about their actions and data collection. You don’t need a computer to have data collected about you. A credit card is another common way that information is obtained about us. Discount cards at the grocery store are recording everything you purchase. If we had better public education to understand the wide variety of ways that data is collected about us, then people could make better-informed choices about what they want to opt in and opt out of. You’re always trading convenience for something else. People need to question whether that convenience is worth it, and understand what those inherent trade-offs are.

The doc talks about predictive data analytics getting better and better, and our devices knowing when we’re lonely or depressed, or whether we’re an introvert or an extrovert. What is the danger of predictive data analytics being able to understand such human traits about us?
RR: It’s important to understand that the inferences being made through these technologies are not always accurate, and to understand the harm that can be caused when false inferences are made. To give an example in the government context, there are technologies called “predictive policing” that essentially try to predict who may be a victim or a perpetrator of a crime within a given window of time, or where a crime may occur. That’s all based on historic crime data. When you’re dealing with a jurisdiction like New York City and the N.Y.P.D., which a court found to have violated the Constitution by disproportionately targeting Black and brown people for over a decade, it’s no surprise that those systems are going to predict that the same communities that were subjected to discriminatory policing are inherently more criminal.

There can also be harms in the consumer context. What if it’s the things that you’re not seeing? What if it’s a job opportunity that you’re not being shown, or are not even eligible for, because of false or misleading inferences in the data? When data is being used in ways that we’re unaware of, we can’t see the impact it has on our life opportunities.

Could there be any upside with respect to mental health and the gathering of some of that data to help us better understand ourselves?
RR: It’s tough with mental health, but I’ll give you a complicated positive. One thing that predictive analytics or data analytics is very good at is finding patterns in historical data. If you feed a bunch of mammograms into a system, the system can find a pattern and actually spot cancer better than the human eye. I’d see that as a good thing, but it’s complicated if that dataset is not representative of our society. This has been shown in research: That mammogram-scan system may not work well for me, a Black woman, because there’s less data about Black people.

Even though I’m hypercritical of predictive analytics in these systems, it’s not to say that they cannot be beneficial in the future. It’s a question of whether we actually have data that’s good enough to produce the outcomes that are possible through these technologies. In many sectors, we don’t, but that’s not to say that it shouldn’t be developed.

Did the Black Lives Matter protests put a spotlight on surveillance technologies like facial recognition and issues like algorithmic bias? Do you think the movement made a difference?
RR: Definitely. This year, New York City passed the Public Oversight of Surveillance Technology Act, which requires the N.Y.P.D. to report the types of surveillance technologies they’re using. That bill only passed because of the public pressure that happened this year. I had been working on that bill since 2015. As much as people think New York is a progressive city, we couldn’t get council members to meet with us. The level of understanding among legislators was completely different this year than in previous years.

Public awareness of these issues has also risen. Now I get texts from friends all over about cameras and systems they see in their cities. They’re trying to better understand how technology has been embedded in our society. I hope this momentum continues. I hope that as a result of greater interest, there’s more work and money put into public education. In many ways, these are localized issues.

As a civil rights lawyer and someone who’s looked at data, surveillance, and the impact of algorithms your whole career, what would your message be to folks in Silicon Valley who are building the products that impact our lives?
RR: Where do we start? One of the issues is who gets to define the problems that technology is going to solve. Mostly that’s been the Tim Cooks of the world. First, there needs to be more community engagement, trying to integrate different voices in the room around what the problems are and what solutions are being driven. They also need to hire more people that are not represented in the companies right now.

This is a fundamental problem that is not isolated to tech: There’s a level of hypocrisy that needs to be addressed. You have a lot of tech companies that at a hearing will say, “We recognize we have problems and agree that Congress should do something,” and at the same time they’re paying 10 lobbyists to go and say the opposite thing to the same legislators. There needs to be some transparency around the conflicting interests there. These are companies that need to deliver returns to their shareholders. How can we realign those incentives so they’re more in line with societal interests?

You were at Facebook in the early days, and you talk about lasting only three months. What was it about the environment that was really challenging?
RR: I knew for a fact that I was the first Black person a lot of my colleagues had ever spoken to. I was experiencing other people’s lack of exposure to difference. People would make either racist or sexist comments I wasn’t prepared for. Also, I was coming from working primarily on the East Coast, so the lack of structure and even training felt like a chaotic environment that didn’t need to be that chaotic.

Now, with a lot of the public scrutiny facing Facebook and many of these other companies, we see how operating in such a chaotic environment so early on has trickled down to today. These companies aren’t able to anticipate the gravity of certain problems that are amplified on their platforms; they aren’t able to address them in critical ways.

The Social Dilemma got folks’ attention. Now what?
RR: I tell people to keep reading. I encourage people not to stop at the movie, but to read a lot of the research. Stay engaged and figure out your own perspective. I don’t think everyone needs to have my point of view. It would make my life a lot easier if they did, but I’d be more encouraged if people were engaged and able to have conversations about these issues on a daily basis.