Originally shared on FastCompany.
I have many friends who work at Facebook, whom I’ve watched over the years enthusiastically defend the company’s mission “to give people the power to build community and bring the world closer together.” As someone who has worked in tech for nearly 15 years and built major businesses, I’ve admired Facebook’s stunning growth, and I have tried to remain open-minded as evidence mounted that Facebook drives misinformation and radicalization, encourages hate speech, and incites violence around the world. Meanwhile, my friends at Facebook have remained true believers.
Then, last week, when I saw Mark Zuckerberg on Fox News defending Facebook’s laissez-faire approach to the content that populates its site, I thought, “OK, this is it. Surely when they see their boss on the side of bigotry, hate, and racism, they will realize that they are on the wrong side of history.”
So, I called my friends at Facebook to ask them how they were feeling, and to see if they needed to vent. But instead of expressing doubt about the company’s position, most of them doubled down, telling me that “Mark is really the only grownup,” that Twitter is acting irresponsibly by “censoring” President Trump, and that free speech is fundamental—too essential to democracies for Facebook to stifle it.
As I think about these phone calls, it is painfully obvious to me that future dinner parties with these pals will get heated. So I put together a cheat sheet to keep in my back pocket for the conversations to come. Here are my friends’ claims (in quotation marks) and my responses:
- “Facebook shouldn’t be the arbiter of the truth.” Facebook’s sophisticated algorithms already make the company the arbiter of people’s truths: they decide which information gets displayed, amplified, or buried in each user’s feed. Most (not all!) users are human beings, and most human beings accept as true what is commonly accepted by other people in their circle, and reject as false what those same people dismiss. By choosing which information to distribute, true or false, Facebook is de facto telling three billion people around the world what to believe and what not to. If Facebook won’t change its algorithms, then it needs to be a defender of the facts.
- “It’s different to let people collectively decide what they believe in versus telling them what they should believe.” But again, Facebook is not really “letting” people decide. What it does is amplify, through its algorithm, highly emotional and addictive messages that make us want to come back to the platform for more. It enables all kinds of extremists to come out of obscurity and pushes their inflammatory or blatantly false messages to millions. Election meddling, radicalization, normalization of hate, and false and dangerous medical advice are a very high cost of doing business—a high cost that society, not Facebook, bears. Facebook’s “neutrality” (“The algorithm made us do it . . .”) is a magic trick to make us believe that the company can’t be held responsible for anything happening on its platform.
- “You are asking to end free speech.” Let’s start with the fact that, at least in the United States, the First Amendment constrains the government, not private companies; Facebook has no legal obligation to host anyone’s speech. Let’s also not forget that Facebook has explicit terms of service covering incitement of violence, bullying and harassment, hate speech, graphic content, cruel and insensitive content, misrepresentation, false news, and manipulated media. These terms of service codify the company’s commitment to police what is published on its platform. At some point, Facebook leadership thought these terms were worth standing up for and protecting. Somehow, over the past few years, they decided to ignore them while still standing on their mission soapbox.
- “Section 230 requires social media to be neutral.” Section 230 of the Communications Decency Act protects social media companies from liability for content posted by their users. The word “neutral” does not even appear in the text of this federal statute. Section 230, while shielding internet companies from liability for their users’ posts, implies nothing about their ability to exercise editorial judgment.
- “Fact-checking every post is impossible.” But Facebook does not have to adopt an all-or-nothing approach. What about starting with the posts of people and entities that have, say, one million followers, or more than one million likes/shares in a month? As for the inevitable occasional error, why would we consider Facebook incorrectly labeling a post as problematic more unacceptable than repeatedly and systematically amplifying misinformation spread by rogue individuals and groups? Moreover, there are ways to limit the damage, e.g., linking to other sources (as Twitter now does) rather than taking a post down, or tracking changes (as Wikipedia does for editable pages).
- “It’s important that we represent all sides.” Representing and promoting are different things. Facebook needs to make a distinction between freedom of speech and freedom of reach. Truth relativism has been on the rise: anybody is now not only entitled to express their opinion but to be considered and treated as an expert, regardless of their qualifications or whether their opinion is actually based on facts or just a political agenda. Would you have helped Stalin advocate gulags as a way to boost industrialization in the USSR?
- “While most of us at Facebook are liberals, we don’t want to be biased against conservatives.” Totally honorable. Now, to use a favorite approach of tech people, is there a way to measure and A/B test that hypothetical bias and, if proven, correct it? This would be one way to avoid giving a free pass to every hate-filled post. As an isolated data point, Vice last year reported that conservative Breitbart News has more engagement on Facebook than the New York Times, the Washington Post, the Wall Street Journal, and USA Today combined (57.8 million likes, comments, and shares versus 42.6 million).
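The measurement my friends gesture at can be made concrete. As a minimal sketch (not anything Facebook actually runs), assuming an audit dataset of posts labeled by political leaning and by whether moderation acted on them, a standard two-proportion z-test can tell whether enforcement rates differ between the two groups by more than chance would explain. All counts below are made up for illustration:

```python
import math

def two_proportion_z(removed_a, total_a, removed_b, total_b):
    """Two-proportion z-test: is the moderation (removal) rate for
    group A significantly different from the rate for group B?"""
    p_a = removed_a / total_a
    p_b = removed_b / total_b
    # Pooled removal rate under the null hypothesis of no bias.
    p = (removed_a + removed_b) / (total_a + total_b)
    se = math.sqrt(p * (1 - p) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical audit sample (made-up numbers): 120 of 10,000
# conservative-leaning posts removed vs. 95 of 10,000
# liberal-leaning posts.
z = two_proportion_z(120, 10_000, 95, 10_000)
print(f"z = {z:.2f}")
```

If |z| exceeds roughly 1.96, the gap in enforcement rates is statistically significant at the 5% level and worth investigating; if not, that sample offers no evidence of bias. The point is simply that "are we biased?" is a testable question, not a matter of opinion.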
- “At a time of COVID-19 and mass protests around the murders of black people, now is not the time. This is a distraction.” When is it the time? Hate speech and false news have been escalating for years. What are we waiting for? The U.S. presidential election is around the corner. Other important elections around the world are not far behind. It has been proven over and over again that social networks have become a favorite propaganda tool of autocrats and despots. Frankly, it already feels too late, but better late than never.
Beyond these points, the fundamental questions I keep asking myself are: When you have achieved the size and the power that you have, why not stand for unifying rather than dividing people? Why not do all that’s possible to distribute fact-based information rather than falsehoods? Why not use your global might to advance tolerance over hate and racism? The world is on fire. It’s time.
Maelle Gavet has worked in technology for 15 years. She served as CEO of Ozon, an executive vice president at Priceline Group, and chief operating officer of Compass. She is the author of a forthcoming book, Trampled by Unicorns: Big Tech’s Empathy Problem and How to Fix It.