Nevo: Banning Trump does little to address the root of social media extremism

Lily Nevo, Columnist

Following the insurrection at the U.S. Capitol last week, many politicians, companies and institutions, some of whom were previously indifferent to or even supportive of Trump’s efforts to overturn the election, have quickly tried to distance themselves from this nightmarish assault on democratic legitimacy.

So far, Trump is being held accountable. On Wednesday, he became the first president to be impeached twice, with 10 House Republicans joining Democrats in the most bipartisan impeachment vote in history. The charge of “incitement of insurrection” rests largely on numerous speeches and tweets encouraging supporters to show up for a protest at the Capitol on the day the electoral votes were certified and “fight” for democracy.

Twitter has removed 70,000 far-right conspiracy accounts in connection with the violent siege and has permanently suspended Trump’s personal account. While the move to ban the President from many social media platforms is important, it is still too little, too late. These Big Tech firms must be denounced for their role in promoting extremism.

For years, the central question has been which statements to censor. Where is the line between hate speech and free speech? What is a platform’s role in mitigating the spread of false information? To be sure, this is not a question of legality: the First Amendment only protects against government censorship, so private companies have the right to control what is shared on their platforms.

Still, censorship is extremely controversial. Though the recent bans on Trump and other right-wing conspiracy theorists may appear definitive, there are still no clear guidelines on what speech will be censored in the future. Terms and conditions condemn violence, but determining what qualifies as such will forever be subjective.

Fighting the problem with bans assumes that users are its root: that the content being posted is what needs to be moderated, not the way the platform promotes that content. But if censorship can never be fully reconciled with free speech, and that tension is the major impediment to combating online extremism, why make censorship the primary weapon? So long as platforms exist, polarizing content will be produced, but without a means to spread, its impact is diminished.

The real problem lies not only with users, but also with the algorithms. In the age of information overload, inflammatory content is the most attractive: studies have shown that posts involving conspiracy theories are more likely to go viral than neutral content, and that the provocative nature of lies causes them to spread faster, even without a platform promoting them.

Though human nature does account for users being drawn to extreme posts, platform algorithms reinforce that attraction. As soon as someone clicks on one conspiracy-related post, the platform assumes it is something of interest to the user and pushes them down a self-reinforcing rabbit hole of misinformation. In 2020, Facebook’s civil rights auditors reported just this, stating that “The auditors do not believe that Facebook is sufficiently attuned to the depth of concern on the issue of polarization and the way that the algorithms used by Facebook inadvertently fuel extreme and polarizing content.”

But for tech companies, algorithms are everything. Highly tailored content keeps users clicking for longer, which makes their platforms more valuable to advertisers. And a company powerful enough to silence the president of the United States is unlikely to surrender the strategy that keeps it afloat.

For now, case-by-case evaluation of dangerous online communities seems the most likely path forward, unless these companies suddenly decide to choose responsibility over revenue and abandon their algorithms. While many are praising social media platforms for finally holding Trump accountable, this is not a happy ending. Five people had to die for Big Tech to finally acknowledge its role in amplifying the President and his hateful words, a President who will soon be powerless and of no use to these companies.

The coming weeks and years will be crucial in evaluating the efficacy of mainstream social media bans. Will the lack of a platform leave conspiracy groups incurably disorganized, or will they thrive under the cover of encrypted sites, fueled by the conviction that mainstream forces have wronged them? Either the strategy is effective and Big Tech is off the hook, or many more structural changes will be necessary to stunt the growth of online extremist groups.

Lily Nevo is a Weinberg freshman. She can be contacted at [email protected]. If you would like to respond publicly to this op-ed, send a Letter to the Editor to [email protected]. The views expressed in this piece do not necessarily reflect the views of all staff members of The Daily Northwestern.