
Episode Transcript

This is Sudhir Breaks the Internet, a new podcast with the sociologist Sudhir Venkatesh that exposes the tangled wiring of our digital society. This is the first episode of a three-part series about how Facebook and Twitter were almost perfectly designed to tear us apart, and what those companies, and the rest of us, can do about it.

Sudhir VENKATESH: When you were observing January 6th and what was happening, were you surprised? 

Alana CONNER: No. I mean, when I was at Twitter, I definitely sounded the alarm many times — as did and do many employees. The details are surprising, that some dude in a Viking helmet was leading the charge and that five people died. But is it surprising? No.  

Alana Conner is a psychologist with years of experience working inside tech companies.

CONNER: There’s a whole ecosystem around this — you cannot say Trump alone caused this insurrection; that would be horribly oversimplifying what happened — but there was this clear march towards violence, and I think anyone who has studied human behavior and human violence would say that the odds were definitely in favor of something like what happened on January 6th.

VENKATESH: Yeah. For years, there was a lot of discussion among our coworkers, this feeling of, “Hey, we have to do something.” And now we have this additional question, which is, “Okay, you’ve been warned; you’ve seen what can happen on your platforms. What are you going to do now?” Are there any signals in terms of what they’re doing, how they’re doing, what’s happening in the ecosystem that makes you feel one way or another?

CONNER: Well, as a cultural scientist, I know that culture change takes time, right? And that is the frustration. There’s been real societal harm here. And that may not be what these platforms were designed to do, but that is what social media companies need to grapple with — just to get real clear on how people are actually using the platforms and decide, irrespective of their original intent, what are we going to do about that? 

My name is Sudhir Venkatesh. I’m a sociologist at Columbia University. Until five years ago, I was studying the urban underworld: gangs, drugs, gun violence. Then Mark Zuckerberg, the Facebook founder, read my book. And his team made me an offer that was too good to pass up. I went to work at Facebook to manage the team that worked on their safety issues, basically all the ways that people broke the rules. And after that, in 2019, I moved on to Twitter where I was working on similar problems.

It was at Twitter where I met Alana Conner. We were on a team called Health. We built products to keep foreign governments from using Twitter to interfere in our elections, and we were supposed to prevent our platform from being used to incite violence. There were lots of forces that caused January 6th, but as I watched that day unfold, I knew social media was one of them.

What happened on January 6th at the U.S. Capitol tells us a lot about our tech sector and the power of social media companies, but maybe not in the way you think. There’s no grand conspiracy against conservatives at these companies, as those on the right often say — even though most tech workers are left-of-center, politically. And the companies aren’t just ignoring hate speech or white supremacy, which is usually the liberal complaint. It’s deeper than that. 

The way these companies have been built, their products, their mindset, the people they hire, makes them almost perfect engines to create something like January 6th. The anger, frustration, and eventually violence over the election — that wasn’t an aberration. It wasn’t some really great technology that had gone bad. No, it was the product of that technology, the way we’ve built it over time and the way all of us have made it part of our daily lives.

So how did we get to this point? That’s what we’ll explore in this three-episode series: how these technology companies are a mirror that reveals who we are as a culture and a society. I want to bring you inside the world of big tech and ask: What’s gone wrong with social media? And if it’s not serving our needs, what can we do to change it? 

We’ll start with January 6th and the elections, and with two key questions. First, what did the tech companies miss that might have led to this insurrection taking place? And, second, what are the aftereffects, good and bad, of President Trump and so many of his supporters having been banned by the big social media companies? 

Clint WATTS: No one thought, when they made the Internet, about what might happen when you took fringe people with bad ideas who are committed to violence and allowed them to connect with each other on a major platform around the world. But I think we were 100 yards from Nancy Pelosi or Vice President Pence being killed. That doesn’t feel good. How did we let it get to that extent?

That’s Clint Watts. Today, he’s a distinguished research fellow at the Foreign Policy Research Institute. But before that, Clint was an F.B.I. agent, and that’s where I met him. Almost a decade before I went to Facebook, I was just sitting in my ivory tower and I was pretty content to spin out research, mostly for other social scientists. 

Clint recruited me to work at the F.B.I., to help them understand how criminal organizations function — organizations like gangs or terrorist networks. What motivated people to join them? How were they structured? And what were signs that serious criminal activity was being planned?

Working at the F.B.I., that was a game changer for me. The experience there would be an amazing training ground for the work I would eventually do at Facebook and Twitter, on problems like foreign agents interfering with elections or terrorists recruiting on tech platforms. And at Facebook, I’d have a lot more data than even the F.B.I. had. That seems like an obvious advantage. But it’s not always the case.

WATTS: My chief complaint to the social media companies is it’s very hard to warn them about the next thing because they assume they have the big data.

He’s right; the tech industry is very focused on data. It’s a core belief, and that approach works pretty well for a lot of things they do. But that focus on data can actually blind you. Here’s an example of what I’m talking about. My uncle is a pretty well-known psychologist at the University of California, San Diego. His name is V.S. Ramachandran — Ramu to his friends and his colleagues. Ramu became famous for his research into phantom-limb syndrome. That’s the sensation people get when they lose an arm or a leg but still feel as if it’s there.

In the mid-1970s, at the start of his career, he was speaking about this at a conference when a colleague interrupted him, saying, “Why should we care about your research? There’s only, like, seven people who have this ailment. Why don’t you come back to us when you have bigger data?” Well, my uncle had a great response — he said, “Imagine if I had a pig on a leash and I put the pig on this table, and then all of a sudden the pig started to recite Shakespeare. Would you want me to find two pigs before we tried to figure out how this pig could speak such great English?”

That’s what it’s like in the tech industry sometimes. Our first instinct is always to think that more data is the answer, and then, lo and behold, that pig that can speak English walks right in front of us, but we don’t even notice. Or we just don’t listen to the voices of those around us — experts, activists, even our users — who tried to bring that pig to our attention. 

There’s a second problem with data. Clint Watts again:

WATTS: When you’re looking for needles in stacks of haystacks, actually big data makes it harder. I imagine if you’re in any of the big tech companies, on any given day, somewhere in the world it feels like a coup or a civil war or an atrocity is about to happen. And you don’t really have the on-the-ground sense of what is true and what’s false.

When people find out that I worked at Facebook and Twitter, they’ll ask me pretty directly: “You had so much data on us, didn’t you see the warning signs? Couldn’t you have prevented January 6th from happening?” I understand this frustration, but it wasn’t so simple.

We have tons of data inside tech companies. But data is only useful — it only becomes information — when you know what to look for. Most of the teams in tech that work on elections, they aren’t really trained to recognize problems or stop the problems from happening when they see them. So their computers might be spotting hundreds of thousands of threats or warning signs, but without training or expertise — like what an F.B.I. agent has — it’s hard for them to know what action to take. What’s a serious threat? What’s a joke? What’s just an angry post? 

In some areas like child exploitation or terrorism, my teams did really well. In fact, most companies work with law enforcement closely to detect this kind of content. And they remove it pretty quickly from the platform. 

But not with elections. Our teams never looked at these issues — what we would call civic engagement — as threatening in the same way. The talk on the platforms about elections, well, we looked at that as disagreeable or heated, but we didn’t look at it as the seeds of future violence. And when people like Clint Watts warned us, we shut them out. The result? Tech was mostly flying blind. 

WATTS: If you see a thousand people show up at a rally, I would bet there are tens of thousands online who are enthusiasts. They call it stochastic terrorism. You don’t know where it’s going to emerge, because there are lots of signals and the people involved maybe don’t have a criminal record or have never mobilized to violence before. And that makes it very difficult to anticipate and prevent violence.

VENKATESH: Could the social media companies have done something differently? I mean, if your research team is seeing it, and others are seeing it?

WATTS: So, preemptively, from the summer of 2019 to the summer of 2020, they definitely could have moved quicker on things like QAnon and the conspiracies that were clearly based in falsehoods and were mobilizing towards violence. Same with some of the militia groups.

We’ll come back to QAnon a little later. But there’s another part of the culture of tech companies that you need to know about, because it has a big effect on how the industry works. Tech is made up of mostly young people who’ve been educated in elite and generally liberal institutions. They tend to be politically left-leaning, and they hang out with others who are mostly like them.

Well, that can become a problem when it comes to dealing with the wider world, and with complicated situations like heated national elections. Sociologists would say that tech is homophilous — that there aren’t a lot of different backgrounds or viewpoints. And one thing we know is that people in homophilous networks often fail to understand those who live differently from them.

CONNER: Social media employees are, by and large, liberals. People who have more conservative ideologies correctly perceive that these companies are different from them.

Alana Conner is right that conservatives felt this way. We would interview conservative users as part of our work. They would tell us over and over again that they just didn’t trust social media companies. They didn’t think we represented their lives. And many even felt tech companies were biased against them. 

CONNER: The evidence I’ve seen within these companies does not suggest that there is — I mean, I could get in trouble saying this, but it doesn’t seem like there are many ideological biases in the algorithms, at least not favoring liberals. In some ways, the reality doesn’t matter. The perception is that these are liberal companies that are discriminating against conservatives. 

But that’s the thing about homophily: it creates these kinds of perceptions, and it makes it very hard to mount a good defense. That perception was a real problem for Facebook, and the company was determined to fix it — but not by diversifying their workforce. Instead, their solution was to send C.E.O. Mark Zuckerberg on a year-long cross-country trip.

Mark ZUCKERBERG: We are live from Charlotte.

Inside the company, we referred to it as “the trip to win back the hearts — and the dollars — of conservative America.” We weren’t mocking conservatives, though, just the company’s misguided efforts to appeal to them.

ZUCKERBERG: I’m here with Dale Earnhardt Jr. and he’s teaching me about NASCAR. And the reason why we’re here is that I’ve been going around on our “year of travel” to each of the different states in the country and trying to learn more about community.

Did Zuck’s P.R. effort help? Not really. I was tracking how people felt about Facebook before and after his tour, and the scores didn’t improve. They actually got worse. People hated us even more. 

Tech has upset people on the other side of the political spectrum, too. Maybe you remember all the Black activists who accused Facebook of racial discrimination? Well, one reason was that real estate agents were using the platform to hide their housing listings from Black families. 

On top of this, Facebook was removing the accounts of some Black activists for posting racist material, even though they were only sharing posts that were made by other users. The problem was Facebook’s computers couldn’t distinguish between the author of the post and someone who was just sharing it. As a result, the company lost trust with large segments of Black America, at the same time it lost trust with mostly white conservative voters.

These shortcomings are really just two sides of the same coin. Just as tech companies couldn’t differentiate between conversations on the left that were actually racist and those that were just discussing racism, they couldn’t distinguish between political conversation and extremist recruiting on the right. Instead, they were busy trying to figure out why users acted in these ways in the first place.

CONNER: Time and time again, I’ve seen people be really shocked that users hijack their beautifully-engineered platforms to try to kill each other, organize insurrections or gin up conspiracy theories or take down governments or orchestrate genocides. That just sneaks up on people.

VENKATESH: I almost feel like the people that build the systems that get gamed or they get used in ways that they didn’t anticipate or would not prefer, they almost take it personally, like it’s a personal affront.

CONNER: Yeah, they take it personally. And that taking things personally really paralyzes people. It puts you in this position of defending yourself and trying to assign blame rather than just fixing the problem, which I find ironic since so much of the ideology of Silicon Valley is fix the problem, get it done. This is one of those areas where that ideology and that ethos fall apart.

Let’s put all this together. On the one hand, tech is an industry that wants to grab more and more users, fast. And the single-minded focus on growth leaves little time to deal with the new challenges that they keep confronting. And then they shut out experts, even though the companies themselves, well, they’re not always sure how to respond. And after a decade of upsetting different camps — whether it’s conservatives or Black Americans or progressives — the leadership just becomes more and more defensive. 

This only ends up making people trust the companies even less. And in a time of crisis, like an election season that’s filled with misinformation and escalating threats of violence, the companies end up digging in even further. There’s another reason the companies have dug in. They have a deeply-held belief that their platforms are a force for positive change. This belief goes back a decade, to a place thousands of miles away from Silicon Valley.

CNN HOST: The Egyptian revolution made history because of the internet. 

CBS HOST: Following its pivotal role in Egypt, Facebook pages and Twitter groups are now popping up in at least 10 countries across the Middle East and North Africa.

PBS HOST: You’re giving Facebook a lot of credit for this.

ACTIVIST: For sure. I want to meet Mark Zuckerberg one day and thank him, actually.

Ethan ZUCKERMAN: So let’s go back to the Arab Spring.

That’s Ethan Zuckerman. He’s the founder of the Institute for Digital Public Infrastructure at the University of Massachusetts at Amherst, where he studies the Internet as a tool for civic engagement. 

ZUCKERMAN: These movements are really, really easy to organize. But they’re also really fragile. You had people protesting Mubarak in Egypt who wouldn’t agree on anything else except for the fact that they really hated Mubarak. They come together very quickly because there’s one figure they can all agree on, that they’re for or they’re against, but they haven’t done the work of trying to find their way to a coherent ideology.

VENKATESH: You wrote in a paper of yours that, “QAnon is a predictable outcome of the rise of this new form of participatory civics.” I’m curious, you’re taking QAnon and you’re linking it to a flourishing movement where people have all this civic energy. What are you trying to do with that linkage? What did you mean, that it’s a “predictable outcome”?

ZUCKERMAN: Well, QAnon is what happens when that same form of organizing gets applied in a realm of misinformation. It picks up one hero: Trump. It picks up one enemy: elites. And it essentially says, “Look, we don’t care exactly what you believe. If you believe that 5G is causing Covid-19 and that Bill Gates is going to implant us with microchips, if you believe that Hillary Clinton is trafficking in children, welcome, QAnon is for you.” And everybody has that chance to create their own participatory part of the movement. 

If you watched the Arab Spring, you could see what a high it was for people to not just be part of a movement, but to direct the movement. There’s really no high like that perception of efficacy, like that feeling that you are involved with changing the world in one fashion or another. 

And that’s what QAnon has managed to do. It’s managed to give people a sense that by doing their own research, by going out and finding information and sharing it with people, that they are making real, tangible change and will ultimately pull down this evil conspiracy that they believe to be going on. 

WATTS: I believe the idea of disinformation came from watching the Arab Spring.

That, again, is Clint Watts, the national security expert and former F.B.I. agent. In his view, bad actors also learned a lesson from the Arab Spring — that the platforms are easily manipulated.

WATTS: You can create a large, organized rally online without being in a country; you’re completely anonymous and no one knows each other. That’s the best way to do a coup ever. And I can do it from my house. Twitter and Facebook initially said, “Okay, it doesn’t matter. They don’t have any effect.” They minimized it. So, yeah, I hope they can change that perspective a little bit and just realize that every time they make something awesome, 99 percent of the time it’s going to go great. But that one percent can really undo the other 99.

VENKATESH: As we’ve been watching the dynamics of misinformation and polarization play themselves out, I’m curious to know what jumps out at you.

WATTS: Across the board, the entire Valley has failed to do two things. One, they failed to figure out the right balance between profits and cost centers in security and cybersecurity. The second thing is when I worked with cybersecurity in the private sector, everyone cooperated because they all knew that if one institution got hit, they all got hit. It’s an ecosystem. For some reason, the social media companies have not figured that out. On the plus side, I mean, I can see that they’re trying.

I was one of those people trying and I can tell you that people in tech work really hard. The problem is how we all worked.

And I totally get it when people ask me, “Why can’t tech keep our data safe? Why can’t you guys stop the white supremacists from using the platform for violence?” I heard this kind of thing all the time when I was at Facebook and Twitter, and it felt like being under a waterfall. It was hard to explain to outsiders just how overwhelmed and underprepared we all were. I felt like I was right in the tech bubble — where only my coworkers really understood me.

The thing about feeling like this is that over time, your actions start to reinforce each other. It’s like a cycle where you do the same things over and over and it becomes hard to make changes, especially since you’ve shut yourself off from the outside. Social scientists call this path dependency. They mean that you tend to base your new decisions on the ones that you made previously. 

Here’s an example of what I mean. Let’s say that you’re working on far-right extremism on your platform. And maybe you saw what looked like calls for violence. But you’re not really an expert, so you just ignore it, and you keep making the same decision to ignore it. Well, okay, after a while, the activity doesn’t really seem that threatening to you.

But what if the next person who sees something going on in the data thinks it is threatening, that it is alarming? Well, all those past decisions that you’ve made make it harder for the next person to get anyone’s attention. I mean, why revisit things? That’s inefficient from a business standpoint. And just like that, after ignoring all those signals all that time, you turn on the T.V. and the Senate chamber has been invaded. 

We’ll look at how all of this history shaped the companies’ reactions to the events of January 6th. According to some, it was a day of reckoning for the big social platforms, a day when they fundamentally changed their outlooks. But me, I’m not so sure.

*      *      *

The big social media platforms banned President Trump after January 6th, after an insurrection at the Capitol that he seemed to approve of. And I have to tell you, I was a little surprised. Many insiders like me didn’t think the executives at those companies would actually do it. But they did. So what does it mean? What does it tell us about those companies, and are they changing their stance on this kind of thing?

Davey ALBA: Twitter’s stance has for so long been that world leaders get a platform, no matter what they say.

Davey Alba is a tech reporter for The New York Times and she covers disinformation on social media.

ALBA: But Trump started to take advantage of this. What do you do with that kind of sustained provocation of your followers? It’s hard to pin down in single posts, “This is the one post that will galvanize a mob to action.” It was Trump very cleverly toeing the line of what Twitter was willing to moderate and what it was not willing to moderate. 

For years, Twitter wasn’t willing to moderate anything when it came to President Trump, so what was different on January 6th? 

ALBA: The threshold for companies is usually real-world harm. But the thing with using that as the threshold is it’s often after the fact that they act on something.

It sounds like a moral decision — and it is, in part. But I have to tell you, the tech companies are always thinking about the bottom line, even when it comes to assessing harm. Listen to Twitter C.E.O. Jack Dorsey. He’s being questioned under oath by Senator John Cornyn in November of 2020. In this case, they’re discussing Twitter’s removal of stories about Hunter Biden, but Dorsey ends up explaining the company’s overall approach to removing content.

Jack DORSEY: All of our policies are focused on encouraging more speech. What the market told us was that people would not put up with abuse, harassment, and misleading information that would cause offline harm, and they would leave our service because of it. So our intention is to create clear policy, clear enforcement, that enables people to feel that they can express themselves on our service and ultimately trust it. 

John CORNYN: So it was a business decision. 

DORSEY: It was a business decision.

It’s a little more complicated in practice. Twitter and Facebook did take a P.R. hit, and it definitely made conservatives believe even more strongly that the platforms were totally out to get them. But I wouldn’t shed a tear. The two platforms have seen their stock prices rise in the last few months.

In his testimony, you might have heard Jack Dorsey mention misleading information as being a cause of offline harm. Over the past year, the platforms have been flooded with misinformation, but not just about the election. 

ZUCKERMAN: I think what happens this year has to be understood in the context of Covid-19.

That’s Ethan Zuckerman, of the Institute for Digital Public Infrastructure, again. He points out that Covid-19 actually helped the companies build up a muscle they didn’t have before.

ZUCKERMAN: Generally speaking, platforms were giving people four, five, 10 warnings before their content was getting yanked. And then we hit Covid-19 and we see this wealth of disinformation, particularly coming from the anti-vax community, which is really dangerous. The platforms act surprisingly quickly to pull that information down. 

After that, we saw disinformation around the election, disinformation that was trying to discourage people from voting, or misinform people about how they could vote. And that was an interesting step forward because that started platform decision-making moving into explicitly political spaces. Donald Trump had been warned many times, and there is something shocking about pulling the President of the United States off of your service, but it’s coming after this long road of getting more comfortable removing certain types of content. 

And then it is showing up at a moment of such incredible shock and horror — scenes that I think most people thought were literally impossible in the United States. And when it seems like the President of the United States and his supporters are actively supporting violent insurrection, you can make a public health argument that you can no longer amplify that speech.

CONNER: I mean, so much to say on this. I think these concerns about protecting people’s right to express themselves have come at the cost of not paying enough attention to, “What impacts are you having on the collective?” In the case of Trump’s tweets, calling the coronavirus the Wuhan flu or the Chinese virus looks very likely to have incited violence against Asians and Asian-Americans.

VENKATESH: What do we need to understand about the inability of senior leadership at these companies to make decisions like this faster? 

CONNER: As you get towards the very tops of these companies, the denial is strong. People have a hard time really acknowledging their own responsibility and, couple that with crazy profits, why would you? And then there’s this gradualism that we know so much about from social psychology, that by the time you’re about to act, it’s like, “Well, why act now?” 

Trump and other political leaders are masters of manipulating these platforms; like, test the waters, then push it a little bit farther. And the position that puts these companies in is, “Well, we didn’t call them on the last thing. Why are we going to call them on this one?” It makes it very hard to justify acting on this outrage if you didn’t on the past one.

It’s a vicious cycle. It’s that path dependency I was talking about before, where the earlier decisions will create certain consequences that you’re not even aware of. In this case, the longer you delay enforcing the rules, the more that one group of users is going to feel alienated. They’ll start feeling like you’re just looking for ways to be unfair to them. And then when you do take action, you alienate a different group of people. The problem here is that platforms are being reactive rather than proactive.

Tracey MEARES: Many solutions, if you think about it in a systemic way, need to be addressed before you get to the particular problem for which one is seeking a solution.

My friend Tracey Meares is a law professor at Yale. She helps organizations build better relationships with their communities, like improving relations between police departments and their cities. 

MEARES: The point at which people are looking at months, years of disinformation that culminate in a striking, frightening, surreal event in which individuals are storming the Capitol — it’s very easy to say, “No more,” right? But “no more” is actually not a solution.

When I was working at Facebook, I asked Tracey to help my team get better at enforcing the rules as a way of building more trust with our users. Tracey’s point is that by not making clear and tough decisions early on, you basically end up with your only decision being whether to take an account off the platform.

MEARES: If you think of your goal as just to maximize people’s ability to say whatever they want, whenever they want, then yes, I do think that the consequences that we saw were predictable, right? Especially given the way that the space is designed to maximize the kinds of intense emotions that many people like to feel. And if the organizing principle is maximizing those emotions, then the endpoint is clear.

It’s reasonable to think that banning a president from social media is a pretty big deal. And it is. And you might even think that it would help solve some of the problems we’ve been seeing with social media and our nation’s political culture. Maybe, but I think those problems are actually a product, maybe even a predictable one, of this big tech machine that led us here in the first place. So don’t be surprised if we see the same problem sprouting up again and again, and sooner than you think. 

These companies talk about being fast and agile, but the mindset of Silicon Valley is decades old. An afternoon of unrest, no matter how disturbing it was, is hardly enough to change that. And let me explain why.

When I joined Facebook, my orientation was about a month long. Day after day, I sat with hundreds of other new employees in this convention hall. We all had to sit there and listen to one Facebooker after another tell us how great it was to work there. It was only about a year into the job that I realized why these companies spent so much time asking you to drink all that Kool-Aid. It’s what sociologists call being part of a Total Institution. 

The classic examples of Total Institutions are the military, the mental asylum, the prison — places where groups of people do everything together. They live, work, play, relax only with one another. You spend so much of your life cut off from the rest of society that you feel in opposition to the world around you. Like no one understands what you do or how you feel.

That’s what Facebook became for me. The press coverage was almost always critical. Everyone around me was constantly under scrutiny, and our friends on the outside who didn’t really understand tech or coding, they always claimed to have easy fixes for our problems. 

I often laugh that I lost half of my friends when I went to work for the F.B.I. and the other half when I joined Facebook. After a few months, I really believed I had no one else to rely on — no one else who could understand what I was going through, other than the people that I worked with. We were just like inmates and soldiers, except our uniform was the hoodie.  

Many sociologists do find positive aspects of Total Institutions. Take soldiers, for example: the fact that they separate from the world at large is what lets them build trust with each other. It’s what helps them to focus on their mission. But this can end up working against you. You can end up feeling like you just don’t answer to anyone else. You’re above it all and you don’t need anyone’s help. So what about tech? How does being part of a Total Institution shape the culture and decisions at these companies?

To answer that question, you have to come inside the institution with me. You have to hear why working in tech was one of the most intense, difficult times of my life — as much as the years I spent with a Chicago street gang.

KATSAROS: You know, the strategy of these companies often is:  There are bad people who do bad things. 

PLUMB: You can’t imagine the scale. 

That’s next time, on Sudhir Breaks the Internet.

*      *      *

Sudhir Breaks the Internet is part of the Freakonomics Radio Network, and is produced by Freakonomics Radio and Stitcher. This episode was produced by Matt Hickey. Our staff also includes Alison Craiglow, Mark McClusky, Greg Rippin, and Emma Tyrrell. We had help on this episode from Jasmine Klinger and James Foster. To listen ad-free, subscribe to Stitcher Premium. We can be reached at sbti@freakonomics.com, that’s S-B-T-I at freakonomics.com. Thanks for listening. 

VENKATESH: Do you recall the conspiracy that was circulating in the late ’80s, early ’90s, that the federal government had placed crack cocaine in the inner city for population control?

ZUCKERMAN: Oh, yeah, absolutely. One of the most remarkable pieces of music that’s come out this past decade is “Reagan” by Killer Mike. And this is Mike essentially connecting this all and blaming it on Reagan, which is probably not the full and accurate critique. 

VENKATESH: Well, he’s definitely an empiricist compared to most of what QAnon is. 

