
Episode Transcript

When I went to work at Facebook in 2016, I wasn’t really sure what to expect. The company had hired me to research how its users felt about the site. But pretty soon I was working on issues that were a lot more difficult and far more troubling. I want you to know what it was like to work there, because I think that if you understand that, you’ll understand how our social media world has gotten so out of control. Let me introduce you to someone who can help me do that. 

Matt KATSAROS: I got to work in the little corner that no one really paid that much attention to. I like that. I don’t like being in the spotlight.

That’s my former Facebook colleague, Matt Katsaros.

KATSAROS: It was a pretty small team that was definitely under-resourced.

But we had big responsibilities.

KATSAROS: That was the part of the organization that was largely responsible for addressing anything that could go bad on Facebook.

And a lot of things can go bad on Facebook.

KATSAROS: From spam; people who are trying to exploit other individuals, trying to steal their information, data, money; nudity and pornography or sexually suggestive content; to hate speech, harassment, bullying, revenge porn, child exploitation, terrorism. That’s the spectrum: spam to terrorism.

Sudhir VENKATESH: What was your sense of what that group was like? I felt like we were the land of the Misfit Toys. Some of us really were social workers who wanted to help people. Someone else was really jazzed about helping kids who had been exposed to terrorist content and moving them onto a different path. Did you ever feel like these were people who were, I don’t know — they were made differently? 

KATSAROS: I mean, that part of the company, the spirit was really trying to address some of these legitimate and serious problems. And these folks cared deeply about that. 

Another one of those folks was Radha Plumb, an economist. She taught at the London School of Economics, and I actually hired her at Facebook after hearing about her work in the Defense Department.

PLUMB: I was asked to go to Afghanistan to develop ways to better measure and assess our counterinsurgency practices.

Radha and I were Ph.D.’s and we were older than most people at Facebook, which, if you don’t know, is a really young company with lots of really young people.

PLUMB: So, when I started working at Facebook, I was seven months pregnant. I was a giant, enormous pregnant lady coming around to talk to people about terrorism, child abuse, and online gender harassment.

VENKATESH: When I arrived, I wore a suit. Everyone thinks I’m a cop and I’m wondering why no one is talking to me. So I realized, “Okay, not only is everyone half my age, but I have to go and get a hoodie.”

PLUMB: I felt like every time I met a 25-year-old engineer, their head was just going to explode; they didn’t know where to look.

Matt had been at Facebook longer than Radha and I had, but he didn’t come from a typical technology background either.

KATSAROS: In high school, I was really into photography. I turned a shed in our backyard into a darkroom. I just put bags over all the windows, covered up all the ventilation, and then brought in a bunch of photo chemicals in the middle of summer when it was like 100 degrees out. 

Matt ended up studying business and marketing in college, and soon after that he went to Facebook. By 2018, Matt, Radha, and I were working together on a team called Protect and Care. We had about 400 people on the team, which for Facebook is not a terribly large group. There were other teams like the ones that managed user growth or ad revenue: they had well over a thousand. But for a relatively small team, we faced some pretty big problems.

KATSAROS: I mean, it’s hard to wrap your head around the amount of content that is posted to Facebook.

What does Matt mean? Well, think about this: every day, our team evaluated 250,000 instances of hate speech, 100,000 cases of harassment, and millions more items if you count spam, pornography, and gun sales. Day after day, we came into the office to work on some of the worst ways that human beings were treating one another. But even then, we stayed positive and felt like we were having an impact.

KATSAROS: When you worked in that part of the company, everyone was there because they were really drawn to working on these issues. 

VENKATESH: I remember taking joy and solace in what people might think of as small victories. Maybe we could lower hate speech or lower the amount of bullying that a young person felt in a particular region of the world. 

KATSAROS: Yeah. And they’re problems that one person or even a small team can’t really address themselves.

VENKATESH: It felt like there was teamwork, like we had a common mission. And as I looked around and saw you and others, I felt like, “Wow, there’s other people experiencing this joy, too.”

KATSAROS: Little did we know that that was going to change pretty quickly.

That change, from joy to frustration, from thinking it was possible to keep Facebook safe to finding ourselves completely overwhelmed: that’s the story of my time at Facebook.

In our last episode, I described Facebook as what sociologists would call a Total Institution. Think of the military or a prison, a place that takes over your life. Now I want to take you inside and, with Matt and Radha’s help, show you what it was really like to work on the bad things we hear about whenever someone — friends, politicians, anyone, really — mentions the words “social media,” and why those bad things are so hard to stop.

Technology is like most industries: over time, the companies start to look like each other. Sociologists have a term for this. They call it organizational isomorphism. Companies feel pressure to mimic their competitors. It’s a way to signal to their customers, their investors, and even each other that they’re relevant, that they’re up to date.

So when I tell you about how things worked at Facebook, it’s very similar to Twitter or Google and the other big tech companies. Most Silicon Valley companies are organized in the same way, and they tend to build products in similar ways. Matt explains:

KATSAROS: So, the three parts that come together to work on these things are, broadly speaking, the engineering and product teams, the policy teams, and then the operations teams. So, you can think of it as the people that build the systems, the tools, etc.; the people that create the rules; and then the people that actually enforce and review content.

Matt, Radha, and I were in this first group; we worked on the team that built the products for safety — maybe you’ve seen the Facebook feature that lets users notify their friends that they’re safe after a terrorist attack or a natural disaster. On our team, we had engineers who wrote the code, then we had the data scientists — they crunched the big data — and we had researchers like us who studied what our users were experiencing. 

We all played a role. But, in every organization, there’s usually one group that has the real power to do things. To understand what you see and experience on social media, you need to know who holds that power. In the universities where I come from, it’s the professors — the researchers — that are in charge. But not in tech.

PLUMB: For me, a big “aha!” moment was understanding how much the product manager and the engineering team really drive decision-making at tech companies. 

Yeah, here’s what Radha means. Whenever we found a potential problem with one of our products, we didn’t go and find the people who made the rules, the policy team. Instead, we went to the engineers who wrote the code because only they could change it to make the product work better. And they were the ones who decided whether to listen to us or not — whether to change a product they were building.

PLUMB: So, the most vivid memory I have was going into a meeting where some folks wanted to launch a new product. And I went in with the policy person who was in charge of women’s safety. And we blocked an hour of time. And we’re like, “We just want to talk about all of the different ways this product could be abused.”

VENKATESH: Wow, they gave you a half hour longer than they gave me, so you must be pretty special.

PLUMB: So, let’s start with pictures of people’s genitalia. And then we’re going to end at cyberstalking and just cover the gamut of potential behaviors and assess how much risk there is in their product before we go to market with it. And I remember it just seemed like we smushed the light out of their eyes. 

Bullying, spam, threats of terrorism: these are all against the rules in some way. But when you think about the scale of the activity on the site and, well, human nature, you’re never going to stop it all. It’s overwhelming. So the product manager and the engineer end up focusing on one thing: being in control. And that control starts with data. In tech, data is king — sometimes to a fault.

PLUMB: I think sometimes there’s a belief in tech companies that when you’re counting something, the thing you’re counting is the right thing.

Whenever we launched a new feature, the company would always ask us to measure two different outcomes. First, they wanted to know whether users were posting a lot or sharing lots of content with each other. This is something we would call engagement. They cared about engagement because if users were engaged, they were seeing more ads, and the company was making more money.

Then they would ask whether there were any safety problems, like people who are harassing each other. And they especially wanted to know if any of these were too big to ignore. But at the end of the day, our teams were really judged by engagement, not safety. 

So if some of us raised our hands and said, “Hey, we need to slow down to deal with the harassment issue,” a product manager or an engineer might overrule us. There’s one huge example that really sticks out for me.

KATSAROS: Facebook Live, the ability to live-stream stuff on Facebook. 

Matt Katsaros again.

KATSAROS: People were like, “It’s not really a question of if people are going to live-stream a suicide or murder or things like that, it’s how quickly will it happen?”  

We warned the company, but the warnings weren’t enough to slow down the launch. 

KATSAROS: It was, “Launch it, hope that it goes well, and if it doesn’t, there’s a team that will deal with it.”

Sadly, our fears were realized.

KATSAROS: The summer after it launched, there were just a series of very high-profile and really sad and horrendous things that got streamed, murders and suicides.

The engagement metrics were very strong and users loved Facebook Live, but it wasn’t long before those horrible things started happening at a volume and a frequency that couldn’t be ignored. We had to react, but that reaction was chaotic.

PLUMB: There’s no process. Sudhir, there’s no process. Things could just be decided in ways that were not that transparent and not that clear.

KATSAROS: It’s not that there was no response; there was a massive response.

Matt says the massive response started right at the top — from Facebook C.E.O. Mark Zuckerberg.

KATSAROS: Mark said we’re going to invest in hiring 3,000 more people to review content on Facebook Live. Which is some response. But the response was not, “We need to shut this down and figure it out.” Turning it off was not an option. We had to keep it going. And at what cost? 

So why was it hard for us to put the brakes on and deal with safety problems? Well, it’s not because people didn’t care. We all cared, from Mark Zuckerberg down to me. It’s deeper than that.

And here I need to bring in an old sociological concept from the 1970s, which is about how organizations make decisions in the first place. It was developed, coincidentally, at Stanford University — in the heart of Silicon Valley.

Political scientist James March asked whether it was really possible for organizations to make sound, rational decisions based on data rather than emotion. He found that life in organizations is way too chaotic for people to act rationally all the time. Workers are faced with way too many problems. Everyone is under pressure. The decision-makers often choose their solutions quickly just so they can move on.

He had a name for this: the Garbage Can Model of Decision-Making. He didn’t mean that all the solutions were bad, just that the process wasn’t always very pretty. If you have to pick something up out of a trash can, you dip your hand in, grab what you need, and get out. 

Most of my time in Silicon Valley felt like I was actually sorting through James March’s garbage can. We were all under pressure, and we all saw the problems, and we did our best to fix them. But there were way too many of them. And more than anything else, the company wanted us to move fast.

At this point, you might be thinking: if Facebook wasn’t equipped to handle the problems flooding the platform, why not ask other people for help? Matt Katsaros explains.

KATSAROS: This isn’t to say that there’s nobody trying to look outside for help, but the culture is we should have everything we need here.

VENKATESH: If you and I were in an organization and we had to figure out how to build a bridge and none of us had the knowledge of structural engineering, I doubt that we would say, “Come on, we’ll figure it out.” You know what I mean? At some point, we have to go and find and hire a structural engineer. 

But in tech, there’s this idea that the team that’s working on the feature or the product — the ‘Like’ button, whatever — we should let them figure out how to do it. I’m curious to know where that comes from.

KATSAROS: I can tell you about a conversation I had with someone who I think is so smart and I respect deeply. I was talking about this approach and I was like, “Well, I’m working on hate speech. I didn’t go to school for this. And I’ve tried to read as much as I can. I feel like I have become pretty knowledgeable. But I still would love help. I don’t think that we can become experts in all of these things.” And this person just looked at me like, “Sure, we can. We can become experts in anything we want.” And that conversation just ended because I was like, “We see the world in a very different way.” I wish I could answer your question. I don’t know where that comes from, but it certainly persists. There was just this notion that well, no one could ever weigh in because they haven’t really seen the inside. 

Even though it’s a really young industry, tech companies have a history. There are things that they’re really good at that have helped them grow and become financially successful, but this can make it hard for them to change when new problems arise. This was true for our safety teams.

KATSAROS: These teams were born out of spam fighters. If you look back in these companies, the early versions were people who were addressing spam on the Internet.

In the early days of Facebook and other social platforms, spam was the problem. The good guys — that’s tech — tried to stop the bad guys: the people flooding the platforms with trashy content. It was like an old-time Western. You didn’t try to educate or reform the spammers. You just brought out your pistols and ran them out of town. 

KATSAROS: Having worked on some of that stuff myself, it’s actually pretty interesting to see the cat-and-mouse games between these platforms and these groups of spammers. You’ll close up some hole. And then, they’ll deploy five more attacks the next week. They have a queue. They have a roadmap ready. But that’s a very different thing than most of what these teams are dealing with now.

You rarely see much spam on Facebook. That’s because our team was really, really good at detecting it and removing it quickly. We did this with algorithms: computer programs that recognize spammy content and remove it before a user ever sees it. You have to use algorithms when you have millions of pieces of spam coming at you day and night. 
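
To make that idea concrete, here is a minimal, purely illustrative sketch of a rule-based spam scorer. Nothing in it is drawn from Facebook’s actual systems — the phrases, thresholds, and function names are hypothetical, and real pipelines layer machine-learned models on top of rules like these.

```python
# Illustrative only: a toy spam filter in the spirit described above.
# The rules, thresholds, and names are hypothetical, not Facebook's.

import re
from dataclasses import dataclass


@dataclass
class Post:
    user_id: str
    text: str
    posts_last_hour: int  # how many times this user has posted recently


SUSPICIOUS_PHRASES = ["free followers", "click here to claim", "work from home $$$"]
URL_PATTERN = re.compile(r"https?://\S+")


def spam_score(post: Post) -> float:
    """Return a rough 0-1 spam score from simple heuristics."""
    score = 0.0
    text = post.text.lower()
    # Rule 1: known spammy phrases.
    if any(phrase in text for phrase in SUSPICIOUS_PHRASES):
        score += 0.5
    # Rule 2: lots of links in a single post.
    if len(URL_PATTERN.findall(post.text)) >= 3:
        score += 0.3
    # Rule 3: burst posting suggests automation.
    if post.posts_last_hour > 20:
        score += 0.3
    return min(score, 1.0)


def should_remove(post: Post, threshold: float = 0.7) -> bool:
    """Remove the post automatically if the score clears the threshold."""
    return spam_score(post) >= threshold


if __name__ == "__main__":
    p = Post(
        "u1",
        "Free followers!!! click here to claim http://a.example http://b.example http://c.example",
        posts_last_hour=40,
    )
    print(should_remove(p))  # True: spammy phrases + many links + burst posting
```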

But that success soon became part of an overall mindset: our team looked at everything bad as if it were spam. Spam was annoying because it created a bad experience for users over and over, but it wasn’t causing lots of harm. It didn’t involve the really complicated relationships between people that you see in hate speech, harassment, and so much of the other content on our site. 

KATSAROS: Those issues and the people that are engaging in that behavior couldn’t be more different than an organization that has a very clear motivation of installing some malware, extracting some money from people.

VENKATESH: I remember thinking, “Oh, we have a whole new set of problems that we’re dealing with here, like bullying and harassment,” and many of our colleagues were just like, “Well, it’s just a different set of bad actors now.” 

KATSAROS: The strategy of these companies often is: There are bad people who do bad things. And we can’t do anything about the fact that there’s bad people who are going to do bad things. But what we can do is act quickly when they touch our platform.

On all social media platforms, there are some people who will never play nice and you do need to get them off, quickly. This group is pretty small, though, usually around five percent of your user base. But many of the other people who behave badly might not even know it.

KATSAROS: If you went to the bus platform, and you saw everybody yelling at each other, that’s what you would do. If you walked into a library and people weren’t quiet, but instead they were all yelling profanities at each other, you might be a lot more likely to do that. You see how others act and you tend to mimic those behaviors.

We do ourselves a disservice by continuing to focus only on that discrete moment when someone called someone else a profanity. We’re completely ignoring the whole journey that led them to that point. What happened when they joined the platform, and who are the people the platform recommended to them? 

Matt’s right. Social media platforms are designed in such a way that users will mimic each other’s behavior. They learn from what others are doing, including how to be hurtful. But there’s another part of the design that doesn’t help. 

PLUMB: The key here is: you have a set of rules defining acceptable and unacceptable behaviors online. A core issue tech companies are trying to grapple with is just: how do they explain what the rules are? How do they set them? And then what happens when they enforce them?

KATSAROS: Some people don’t even know that there are rules.

VENKATESH: And I remember trying to talk to people and say, “Hey, maybe we could help people understand the rules, maybe we could onboard them and educate them.”

PLUMB: Actually, this isn’t a super new problem. Governments struggle with this all the time. How do we make clear what rules we are setting and how often we enforce them? And what the consequences for enforcement are.

VENKATESH: I got the sense that the people that I was talking to inside the company felt like, “We’re good people and they’re bad people. And those people aren’t really going to change. So, we don’t really want to spend a ton of our effort trying to change them. It’s wasted money.”

KATSAROS: That’s something that I think is shifting, but is still there for sure. And a lot of this stuff is totally within the grasp of these platforms to just better understand and manage.

Matt, Radha, and I started to see some light at the end of the tunnel. All the research we were doing looked like it would help, but we still had to get a lot of other people in the company on board before we could make changes to the product — changes that would help our users. 

And time was running out. Hate speech was rising quickly; it nearly doubled in 2018 alone. People were getting harassed all the time, and users were selling all sorts of illegal things on the platform. 

Everyone on our team was really stressed out. We weren’t feeling good about our work, and every day we were reading something in the press about how we couldn’t keep our site safe anymore. The P.R. mess had become so serious that even the high levels of user engagement didn’t make our leadership happy. We knew Mark and the other executives were watching. We knew we had to come up with fresh ideas. 

*      *      *

KATSAROS: You and I had this totally radical idea, “Let’s just go talk to some of these ‘bad actors’ and see what we might learn.”

That, again, is Matt Katsaros, my former colleague on Facebook’s Protect and Care team. Matt and I brought a small group together. We wanted to take a different approach to people who behave badly on Facebook. The idea was pretty simple: We wanted to call our users who acted out and try to find out why. It didn’t exactly get a hero’s welcome.

KATSAROS: I would talk to my team members at lunch and be like, “Yeah, we’re doing a new study and we’re going to start interviewing people who post hate speech or who post pornography or whatever.” And people would laugh at me. Not laugh at me, but they’d just laugh. It was like a funny idea. They were like, “Why would they even talk to you?”

But again, we were not talking about spammers. We were talking about people who had just had a bad day, gotten really angry, and said something mean on the Internet. They were happy to talk to us. In fact, not only were they fine talking to us, they had tried to contact Facebook many times. They said, “Hell, yeah. I’ve been waiting to talk to someone there. I want to tell you what happened.” 

When Matt and I started calling our users, we discovered some interesting things. One was the importance of what sociologists call situational context. Matt explains. 

KATSAROS: Sometimes I would be talking to someone who is unemployed. They’d tell me, “I don’t do anything all day. I just scroll and then get in these fights about K-Pop bands.” And I often got the sense that these people are in immense amounts of pain through a number of circumstances in their life, and what we’re seeing online is just the manifestation of that pain.

My team began to wonder, could we make these circumstances work in our favor? Could knowing the situation our users were in help us to decrease their bad behavior?

PLUMB: So let me put my economist hat back on. 

That’s Radha Plumb again. 

PLUMB: People who are doing behavior just right at the boundary line, how do we make it harder or less desirable for them to do it? That’s a natural evolution of the industry in the same way that in some countries, crime fighting is not just, “should we arrest people who break the law?” But also, “what are key drivers that might make them more likely to commit certain kinds of crimes?”

Our team began paying attention to one part of the situational context that was affecting how our users were behaving. This was how we, Facebook, were interacting with them. Meaning: how we explained our rules to them, how we let people know what they had done wrong, and what they could do to stay on our platform. 

So we began asking ourselves whether changing the circumstances really meant that we had to change. Could we treat our users differently, and would that help?

KATSAROS: I remember there was that time where you were trying to run an experiment where you were going to check in on people.

We worked with several hundred users, with people who kept breaking the rules by posting nudity or by bullying others. They’d end up in a place we would call Facebook jail. This meant that their accounts would be shut down for as long as 30 days. And we really wanted to stop them from getting to that point.

KATSAROS: We’re not changing the fundamental dynamics of, “how did we get there?” We’re dealing with these problems way, way, way, way, way down the line.

VENKATESH: I guess I found it interesting to keep returning to them. We would keep calling them and hearing more about their lives. It was just when we went back to building the products, sometimes I couldn’t figure out how to use all that information. I mean, what are we supposed to do for them?  

KATSAROS: The unfortunate part is the way we were handling it does not help. The things we end up doing is further isolating that person. You’re removing their content. You’re telling them they’re a bad person; you’re kicking them off the platform. These are things that, for people who feel lonely and isolated, it’s just further exacerbating those feelings.

We couldn’t help everyone who felt isolated and lonely, but Mark Zuckerberg liked our ideas so much, he put in the resources to build a team specifically designed to help people learn about the rules. This approach to helping users didn’t solve all the problems, but we did see many people learn and grow and act more civilly toward each other on our site. Unfortunately, just when we started to take a breath, another wave hit us, which is what happens in tech.

KATSAROS: I used to always think of it as this city planning infrastructure, where it’s like there was the garbage crew, and they were responsible for sewage management and taking trash out, and the city had just been built in a way where there was no rhyme or reason. 

Remember when I talked about tech companies making decisions like they were just picking up whatever solution they found in the garbage can? Well, it’s not just a 1970s-era academic theory. We were actually living it.

KATSAROS: Toilets were flushing into every street, and there was really no way that this small crew could handle the volume. And then over time, it became apparent that these teams had to actually build septic systems and waste management and build processes for actually designing their infrastructure in a way that could handle things that might go wrong.

VENKATESH: It’s like, the population doubled all of a sudden. Right? The same team all had twice as many toilets that they had to deal with.

KATSAROS: Exactly.

It turned out that being so big — with over two billion users — was becoming our enemy. Any success we had didn’t last very long.

PLUMB: You can’t imagine the scale. Even I can’t imagine the scale — and I’ve worked in tech for a number of years now. It’s so many more people and so many different kinds of people globally. Language-wise, age-wise, background-wise.

KATSAROS: There’s just this constant, never-ending stream of reports that these platforms are dealing with. There are these events that pop up all around the world that need special attention. And a lot of what stops them from addressing some of these things is just constantly feeling like there’s this mountain of work that they can never work through.

VENKATESH: I certainly felt that. And then, somebody would come back to me and say, “Well, why can’t you just spend more money? You’ve got billions and billions. Why don’t you just double the size of your company?” I have trouble explaining sometimes just the work that is involved.

KATSAROS: There was way too much to do and not very many people. They’d create an agenda, a roadmap of what they wanted to do over the next couple of months, but that would constantly get sidetracked and hijacked by the thing that was happening that day.

There’s really only one way that a tech company can ever handle this much content. It’s with machines and algorithms. So once again, the engineers were in control.

KATSAROS: So, these are people who are highly trained in statistics, who are very good with numbers and building data sets. On one hand, they are trying to build algorithms that detect content that violates Facebook’s rules. On the other hand, they are trying to build algorithms that detect content that doesn’t necessarily violate those rules, but that people still might find offensive. So, someone posts some sort of derogatory speech that doesn’t cross Facebook’s line of hate speech. But perhaps there are plenty of people who just think that is inappropriate, or not the type of content that they’d like to see on Facebook. What they need are people who can help them get an understanding of, “Well, what does it mean for something to be offensive? And how does that concept change for someone who’s living in San Francisco versus someone who might be living in the Middle East or in Brazil?” 

Now, when I joined, there were a bunch of engineers — most of them male at the time — who were tasked with building an algorithm to detect sexually suggestive content that people around the world might find offensive. That is a difficult thing if your discipline and skill set is mostly in statistics. So, folks like myself and others on our team were responsible for trying to build up an understanding of how people conceptualize these things.

And there was a meeting I attended where people who are labeling this data would be like, “Hey, are these 10 pictures sexually suggestive or not?” And there was someone who was just like, “Well, yeah, no, maybe, no.” And I am sitting there just like, “Well, that doesn’t seem like how you’re supposed to do that.” Because ultimately then you’re just training an algorithm based on that person’s subjective opinion.
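
The episode doesn’t describe how Facebook’s labeling pipeline actually worked, but the problem Matt is pointing at — training a model on one reviewer’s subjective call — is a well-known one in machine learning. A common mitigation is to collect labels from several annotators, keep only items where they agree, and send low-agreement items back for review or clearer guidelines. A minimal sketch, with all names and data hypothetical:

```python
# Illustrative only: aggregate several annotators' labels instead of
# trusting a single person's subjective opinion. Data is made up.

from collections import Counter
from typing import Optional


def aggregate_labels(labels: list[str], min_agreement: float = 0.75) -> Optional[str]:
    """Return the majority label if enough annotators agree, else None
    (meaning the item should go back for expert review or better guidelines)."""
    if not labels:
        return None
    winner, count = Counter(labels).most_common(1)[0]
    agreement = count / len(labels)
    return winner if agreement >= min_agreement else None


# Three annotators look at the same photo and disagree.
photo_labels = ["suggestive", "not_suggestive", "suggestive"]
print(aggregate_labels(photo_labels))        # None: only 2 of 3 agree, needs review
print(aggregate_labels(["suggestive"] * 4))  # "suggestive": unanimous
```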

Those subjective opinions led to all kinds of unintended consequences, because the algorithm struggled to understand the nuances of how people communicate.

KATSAROS: When we were working on hate speech early on, our algorithm just started reporting lots of content with slur words. Someone might say — I’m going to use a slur word — “I love you. You’re my favorite fag.” And the other one says, “Oh, you’re such a fag” — sorry, I’m using that word so much. And I’d dive in and start looking. 

And you’d see, “Oh, it was in a conversation between two clearly homosexual men. They had L.G.B.T. flags as their profile pictures. One was wishing the other a happy birthday.” It’s not something that I could go solve myself. You have to deploy the entire team. 

I’d have to engage with my engineers to say, “Okay, how do we figure out when these words are being used self-referentially, positively, or derogatorily?” And then, you’d have to engage with the policy person to be like, “Okay, if we have a policy where the mere existence of a slur word means removal, we have to rethink how we’re approaching this.” 
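
Again, the episode doesn’t say how Facebook ultimately solved this. But one common pattern for the problem Matt describes is to stop treating a keyword match as grounds for automatic removal and instead use the match only as a trigger for a context check or human review. A hypothetical sketch, with placeholder words standing in for a real lexicon:

```python
# Illustrative only: a keyword match routes a post to review rather than
# triggering automatic removal. The "friends" flag is a crude stand-in for
# a real context classifier or a human moderator's judgment.

SLUR_LIST = {"slur_a", "slur_b"}  # placeholders, not a real lexicon


def contains_slur(text: str) -> bool:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & SLUR_LIST)


def route_post(text: str, author_and_target_are_friends: bool) -> str:
    """Decide what to do with a post: 'allow', 'human_review', or 'remove'."""
    if not contains_slur(text):
        return "allow"
    # A flagged word is present, but removal isn't automatic: context matters.
    # Friendly or self-referential use goes to a human (or a context-aware
    # model) instead of being removed outright.
    if author_and_target_are_friends:
        return "human_review"
    return "remove"


print(route_post("happy birthday, slur_a!", author_and_target_are_friends=True))  # human_review
print(route_post("you are a slur_a", author_and_target_are_friends=False))        # remove
```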

Algorithms are a necessary part of the work inside any tech company. At Facebook, we couldn’t have managed the scale and the velocity of all that bad content without them. But there were still people involved.

KATSAROS: These algorithms are built on the backs of content moderators. These machine-learning engineers who are really smart, really good at statistics, would have nothing to do without these content moderators. 

Content moderators are the army of contractors who work mostly in developing countries and who look at the content that the algorithms flag as objectionable. 

KATSAROS: The experience of any of us full-time employees at Facebook on product and engineering teams pales in comparison to that of the content-moderation workers who are spending all day, every day working that queue, just going through, looking at more and more and more and more. 

It’s brutal work. In 2018, one former content moderator sued Facebook after she developed P.T.S.D. Other moderators soon joined the case, and last year Facebook agreed to pay $52 million in compensation to more than 11,000 moderators for all the mental health issues generated by this job.

Matt and I would never compare our working conditions to the lives of those moderators who work for us around the globe. But the work was having a real effect on our lives, too. 

VENKATESH: Did you ever get the feeling that it’s just a losing battle? 

KATSAROS: Yeah, 100 percent.

VENKATESH: I would get a person in their late 20s, just starting their career. And they pull me aside at lunch and meekly say, “I heard that you used to work with gangs and study crime and violence, and I’m just wondering how you did that, because this isn’t so easy for me.” I mean, they’re essentially in pain having to look at this stuff day in and day out. And I often ended up just sitting in my office, wasting the company’s money, just looking around, trying to figure out “Wow, this is really taking an emotional toll on these folks.”

KATSAROS: Largely why I left is the emotional toll. So, coming up with definitions of “What is a dick pic?” Or building a taxonomy of graphic violence. I had to just bow out at one point. The nature of that job is, sorting through the sea of graphic violence and putting it into discrete buckets, and then going through with the people who are labeling it where they — 

One person labeled it “blood and gore” and the other person labeled it “dismemberment.” Now, we have to sit and as a group look at this picture and say, “Well, what do we think? Is this ‘dismemberment’ or is it ‘blood and gore’?” And there are times where it was like, you have to…. Coming up with a definition of infanticide. It’s like that is…. Yeah. 

VENKATESH: Yeah. It’s insane.  

KATSAROS: The thing that helps you get through it is the hope that you are contributing towards something and helping people out or addressing some of these negative experiences. The part that gets super demoralizing is when you’re experiencing this emotional toll and you feel like nothing is getting done. It’s just like you are raising issues constantly, saying, “Look, we need to do something about this. Here’s our very clear recommendation.” And the decisions that come back down from the top are largely just not paying attention or ignoring those or making the tiniest little baby steps that really don’t do anything to address this stuff. And that’s where it becomes really difficult. Because then, it’s just like, “Well, what are we doing here?”

VENKATESH: It just wears on you. Yeah, I know. I couldn’t take it. I was just— At some point, I threw up my hands. What am I doing here?

In any organization, whether it’s a tech company or a government, the more people that you serve, the harder it is to give attention to each one of them. The Protect and Care team was still only a few hundred people, but now Facebook had become a platform of two billion users. To deal with all the safety issues around the globe, the team and our approach would have to change.

Mark Zuckerberg and the executives, they realized this. They renamed our team Integrity. They also moved us into part of the company called Growth. It felt to me that what Matt and Radha and I cared about, well, it wasn’t as important as recruiting the next billion users for the platform.

I understood this decision. It’s a business, after all. But that’s not why I went to work at Facebook.

Let me tell you one more story. Before these changes occurred, the Protect and Care team had a small group called Compassion. This was the team that helped users who faced suicidal ideation or depression or the loss of loved ones. We all took pride in this group because they really did help users in distress. Their work inspired all of us and it felt like the right thing to do.

The new leadership got rid of the Compassion group. They told us the work was too soft and vague, not rigorous enough. “How do we measure our investment?” they asked. Well, we came back with personal stories and some data, but we all knew it. The Compassion group was a goner. If an engineer or a product manager wanted control — if they wanted that certainty, the solid metrics of success — well, the work of Compassion was too messy for that.

To keep from crying, we would joke that the company wanted to change us from social workers into SWAT teams, dispatched here and there like an elite commando unit. Most of us hadn’t signed up for that. I moved to Twitter. Matt left shortly thereafter and Radha went to work at Google.

Here’s what I do know from my time at Facebook and Twitter: if these companies keep getting built and run the same way with the same outlook and the same cultures, well, you’re going to keep getting the same results. 

So how can we create a different future? What can we learn from what’s gone wrong?

Mark WEINSTEIN: We’re not going to amplify outrageous content into your newsfeed to get you aggravated and irritated.

Ethan ZUCKERMAN: That conversation is remarkably well-moderated because so many people participate in the work of making and shaping and enforcing the rules of the community.

 That’s what we’ll explore next time on Sudhir Breaks the Internet. 

*      *      *

Sudhir Breaks the Internet is part of the Freakonomics Radio Network, and is produced by Freakonomics Radio and Stitcher.  This episode was produced by Matt Hickey. Our staff also includes Alison Craiglow, Mark McClusky, Greg Rippin, and Emma Tyrrell. We had help on this episode from Jasmin Klinger and James Foster. To listen ad-free, subscribe to Stitcher Premium. We can be reached at sbti@freakonomics.com, that’s S-B-T-I at freakonomics.com. Thanks for listening.

VENKATESH: That was kind of emotional for me and Matt. But I’m glad he kind of, you know, talked about that time because I don’t think he’s ever done that, and I don’t think I have. 

