Originally shared on Fast Company
Over the past several months, the conspiracy theory known as QAnon, which posits that politicians and A-list celebrities are engaging in child abuse and a “deep state” effort to undermine President Trump, has gone from being a curiosity to an increasingly dangerous force in American politics and society. And the group has flourished thanks to social media platforms.
It took until July 21 for Twitter to ban 7,000 QAnon accounts, and until October 6 for Facebook to finally decide to remove “any Facebook Pages, Groups and Instagram accounts representing QAnon” and ban any group representing the conspiracy theory.
In a recent TV interview, Susan Wojcicki, CEO of YouTube, demurred rather than outright condemning QAnon, saying, “We’re looking very closely at QAnon. We already implemented a large number of different policies that have helped to maintain that in a responsible way.” Three days later, YouTube finally announced that it would “prohibit content that targets an individual or group with conspiracy theories that have been used to justify real-world violence” by groups such as QAnon. But it still allows content discussing QAnon if it doesn’t target individuals.
It is mind-blowing to me that it has taken more than two years for the social platforms (with the exception of LinkedIn, which moved quickly) to crack down on QAnon. But it is not surprising.
QAnon is yet another example of everything that is wrong with our social media platforms and how their executive teams make decisions. And its persistent presence on these platforms demonstrates the worst in politicizing difficult problems, rather than attacking them with reasoned, rational thought.
The response to QAnon’s rise is all too familiar: Politicians are calling for a repeal of Section 230, those 26 words enacted in 1996 that shield internet companies from liability for content created by users. Social media executives, meanwhile, continue to advocate for self-regulation and managing content as they wish. And all kinds of people—from journalists and executives to politicians and ordinary citizens—foresee the end of free speech if we do anything at all to curtail this content. But none of these reactions will solve the most pressing problem: Social networks are threatening Western democracies and social cohesion.
Calls for repealing Section 230 are clearly coming from people who have yet to understand how most of the internet works (not just social media platforms, but also Google, Amazon, and any internet company that displays user-generated content, including reviews). Repealing this legislation would make companies liable for any content published on their platform, which would result in these platforms either being sued into oblivion or becoming so cautious that they start censoring content—the very thing that many “repealers” fear. Repealing Section 230 simply makes no sense.
Social media companies’ calls for self-regulation—while continuing to play dumb or simply “look closely” at bad actors on their platforms—don’t make any more sense. These platforms have been self-regulating for 15 years and continue to demonstrate, in scandal after scandal, that they don’t know how to, don’t want to, or can’t make the decisions that are necessary to preserve civil discourse and facts on their platforms.
As for the people advocating for the status quo and defending free speech as an absolute right, I wonder how much longer they are going to tolerate the petri dish of outrageous content that social platforms are cultivating. These companies pretend to be neutral despite the warping presence of their recommendation algorithms, and they bear no liability for any decision or nondecision they make. At some point, even the staunchest free-speech advocate should recognize the increasingly high price of such absolutism—and that somehow society is paying it rather than the social media platforms that ran up the bill.
What’s most concerning is that these emotional, political, and factless debates are so overwhelming that many people seem to believe the problem is impossible to solve. And though the problems are multifaceted and require the participation of multiple stakeholders, some solutions do exist.
First, social media platforms should start applying Section 230 as it was initially envisioned: not just as a shield but as a sword that empowers them to wield their content oversight capabilities without legal consequence. All these platforms have substantial and well-thought-out content moderation guidelines. And per these guidelines, most of the content posted by QAnon members should have been deleted long ago, along with the accounts that propagate these harmful lies. The same is true for misinformation about COVID-19 and content that denies genocide—such as the Holocaust denials that Mark Zuckerberg finally acknowledged fall under Facebook’s hate speech ban, and the denials of dozens of other genocides that are still proliferating on the platform. Rather than applying their guidelines purely and simply—and bearing the political and potentially short-term financial consequences—the leadership of these platforms seems to constantly hesitate and look for exceptions. The problem with this lack of backbone is that every decision feels like a one-off action, something to be debated. We shouldn’t be caught up in discussions over specific posts: The only debate should be over the guidelines and policies around content and ad moderation, as well as the rules and processes to implement them.
Second, the platforms need to fundamentally review the way their recommendation systems suggest groups to join, people to follow, and videos to watch next. Facebook discovered in 2016 that “64% of all extremist group joins are due to our recommendation tools.” Little seems to have changed since this 2016 study: Facebook continues to believe that connecting people is inherently good and, as a result, doesn’t want to prevent people from forming and joining communities. This means that the majority of QAnon members on Facebook likely became members thanks to Facebook. As the company itself said in 2016: “Our recommendation systems grow the problem.” The same dynamic occurs on Twitter and YouTube, helping QAnon move from the fringes to the mainstream.
Third, the platforms need to dramatically increase transparency around the volume and nature of the actions they take to moderate content and groups. Right now, each platform cherry-picks information to communicate to the outside world. YouTube, for example, claimed last week that following a policy change around borderline content, “the number of views that come from non-subscribed recommendations to prominent Q-related channels dropped by over 80% since January 2019.” Unfortunately, there is no information about the absolute number of views, nor about how many unique users are still being exposed to QAnon content. The result of such opacity (which isn’t unique to YouTube) is that only a few people inside each company know the true size of the problem and the true impact of any content moderation efforts. The rest of us, including regulators and most employees, are just guessing.
In the absence of proactive moves from social media platforms, the U.S. government should implement a few policy measures to hold these companies accountable for the role they play in spreading hate speech and misinformation. It should start with a federal privacy law that would restrict how companies use people’s digital profiles and behavior to target them with content—a practice that has taken the manipulation of people through disinformation to new levels. It should also mandate regular audits of recommendation algorithms by neutral third parties, as well as corporate disclosures around targeted advertising and misinformation.
Beyond this, Western governments need to require platforms to maintain a public ad database, which would allow regulators to verify that companies comply with all privacy and civil rights regulations when engaging in ad targeting. It would also allow for better tracking of potential domestic and foreign interference.
Facebook, Twitter, and Alphabet (YouTube’s parent company) will appear before a Senate committee on October 28 to discuss their content moderation policies and whether they have used Section 230 to censor conservative opinions. More likely than not, the hearing will produce little more than political posturing and rehearsed, empty answers. This is particularly sad when there are simple measures that tech executives and regulators could take to pull us from the foul swamp that these platforms have created—a swamp that they still expect us to be grateful for.
Originally published in Fast Company on October 22.