Ban Telegram? Censorship and disinformation online

By BYRON CLARK. Written for Fightback’s magazine issue on Organisation. Subscribe to our magazine or e-publication here.

The new iteration of the far-right, termed the alternative right or alt-right, has risen to prominence online in recent years. It gained wide attention during the 2016 US election and grew further with the rise of the QAnon conspiracy theory in 2017. Then came a spate of attacks carried out by men radicalised in online spaces: Charlottesville, Christchurch, Poway, El Paso, Buffalo.

Today, the encrypted messaging app Telegram has become the go-to space online for alt-right organising and propaganda dissemination, but it’s not the first space used for this purpose. The online far-right has existed almost as long as there has been an “online”.

After World War II, “no platform for fascists” was not a radical leftist demand, but instead the policy of every respectable publisher and broadcaster. Of course, the defeat of fascism wasn’t the end of systemic white supremacy, which persisted in segregation in the US south, and apartheid in Rhodesia and South Africa (and, to some degree, still persists in every European country and white settler colony). After social movements for civil rights successfully ended segregation and apartheid, it became harder for overtly white supremacist ideas to get a platform in wider society.

Barred from mainstream media, white supremacists saw the potential of the internet to spread their beliefs before most people even knew what the internet was. In 1985 Tim Miller wrote in the Washington Post about a ten-year-old boy who was able to dial up a computer message board and access articles with titles such as “The Case Against the Holocaust,” “The Jew in Review,” and “How the Scum of the Earth Rule Us.” It was one of about half a dozen bulletin board systems (BBSs) operated by ex-Klansmen, neo-Nazis and other white supremacists. Miller quotes Tom Metzger, a former California Ku Klux Klan leader who operated one of these bulletin boards: “We feel the white nationalist movement is 20 years behind in technology and we’re going to catch up whether they like it or not.”[1]

Online utopia vs. Nazis

By the mid-1990s, the World Wide Web was superseding bulletin boards. Stormfront began in 1995 as a discussion forum for white supremacists. During the years it existed (1995-2017) it was linked to almost 100 murders, most of them committed by Anders Breivik.[2] Most Stormfront users were white supremacists before they started using the website; it connected them with people who shared their views, but for the most part it didn’t radicalise anyone (why join the discussions on Stormfront if you weren’t already a white supremacist?). Stormfront encouraged its users to spread their beliefs elsewhere on the internet: any forum where they wouldn’t be banned for starting conversations questioning the Holocaust or talking about the supposed link between race and IQ. The far-right spread across the web with surprising ease, likely because many of the first people on the web believed strongly in the principle of free speech. If your web forum had Nazis on it, that just showed how deep your commitment to free speech was.

Utopian ideas about the internet and its potential for freedom from traditional gatekeepers of information underpinned a kind of techno-libertarianism. John Gilmore, a pioneer of internet technologies and one of the founders of the Electronic Frontier Foundation (EFF), a non-profit that advocates for online civil liberties, once stated that “the Net interprets censorship as damage and routes around it”.

The naive utopianism of the early web is best encapsulated in 1996’s “A Declaration of the Independence of Cyberspace”: “We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.” This was, at best, a blind spot on the part of the manifesto’s author, John Perry Barlow, another of the EFF’s founders. There’s no reason to think that the power relations that existed in the ‘offline’ world would somehow not be replicated ‘online’.

Libertarian ideals around free speech were the norm online in the late 90s and 2000s. When 4chan (est. 2003) later established its “politically incorrect” message board, /pol/, for uncensored political discussion, it very quickly became dominated by white supremacists. You could express any political opinion you wanted on /pol/; it wasn’t an inherently far-right space. But why discuss politics in a space full of white supremacists and fascists when you could do so somewhere else, without them?

Far-right politics spread from /pol/ to the wider 4chan community, and then to the subculture around online gaming. In 2014, a coordinated harassment campaign targeting women involved in or adjacent to the video game industry began on 4chan; it would later be dubbed ‘Gamergate’. Steve Bannon, at that time executive chair of Breitbart News (and later Donald Trump’s chief White House strategist and senior counsellor), realised the value that the angry young men of Gamergate had for a hard right political movement. ‘You can activate that army,’ he told a biographer. ‘They come in through Gamergate or whatever and then get turned onto politics and Trump.’[3]

Far from the high hopes of mid-90s techno-utopianism, our modern internet has nurtured prejudice and violence. When 4chan founder Christopher Poole abandoned his laissez-faire attitude toward moderation and banned Gamergate discussion from the boards, many users fled to 8chan, a message board site with even less content moderation. Poole eventually sold 4chan, sick of dealing with controversies like Gamergate. 8chan went on to nurture the QAnon conspiracy theory (which began on 4chan) and was the place where the Christchurch shooter chose to disseminate his manifesto.

Algorithmic radicalisation

Alongside spaces like 4chan and 8chan, social media platforms have driven people toward more extreme content via algorithms designed to keep people’s attention on a site for as long as possible. American sociologist Jessie Daniels has described the rise of the alt-right as the result of both centuries-old racism and the new social-media ecosystem powered by algorithms.[4]

The Royal Commission report into the Christchurch shooting found this algorithmic radicalisation at work, noting that while the shooter had participated in forums including 4chan and 8chan, YouTube played a much larger role in his radicalisation than these sites.

In the past, YouTube has been often associated with far right content and radicalisation. There has been much debate about the way YouTube’s recommendation system works. One theory is that this system drove users to ever more extreme material into what is sometimes said to be a “rabbit-hole”. An alternative theory is that the way in which YouTube operates facilitates and has monetised the production of videos that attract viewers and the widespread availability of videos supporting far right ideas reflects the demand for such videos. What is clear, however, is that videos supporting far right ideas have been very common on YouTube. YouTube has made changes in response to these criticisms, in particular to their recommendation system, so it is less likely to continue recommending increasingly extreme content and has also made it more difficult to access extreme content.[5]

YouTube and other major social media platforms such as Facebook have made changes to the way their recommendation algorithms work in response to the increased scrutiny that followed the spate of mass shootings and events such as the January 6, 2021, insurrection in Washington DC. In part these changes have been a response to the Christchurch Call, an initiative by governments, online service providers, and civil society organisations to eliminate terrorist and violent extremist content online, launched following the mass shooting in Christchurch.[6]

When the question of deplatforming comes up, arguments about free speech always ensue. Freedom of speech, in a legal sense, is the principle that the state will not prevent you from speaking, or punish you for speech the state does not want heard. The concept of “no platform for fascists” does not necessarily clash with this principle. Someone on the political left may believe that the state should not censor or suppress the speech of anyone (including fascists), while advocating that media (including social media) not provide a platform for fascists to speak. Likewise, advocating that universities and public spaces such as community centres not provide a venue for these speakers is not abandoning the principle of free speech.

This attitude is often shared by those on the political right, who hold the view that a private entity has the right to decide which views it will give a platform to. Where the concept requires some nuance (wherever one sits on the political spectrum) is in the case of public entities, such as city council-owned buildings or public universities (a debate beyond the scope of this article).

For those on the political left, in particular the socialist left, there is a recognition that power in society does not lie only with the state, and there is reason to be concerned about handing the ability to decide what kind of political speech is permissible to private corporations such as Alphabet (the parent company of Google and YouTube) and Meta (the parent company of Facebook). There is an argument that these corporations, given the power to decide what content can be posted and shared on their platforms, could censor any form of political speech, and that this would be a bad outcome given how much discussion now happens on these platforms. This line of thinking can lead to a kind of free speech absolutism: the idea that social media platforms should not censor any speech, and that the platform given to the far-right is the price we have to pay for the platform now available to the far-left, whose views were also largely excluded from public discussion in the pre-social media era.

This attitude, however, leads to a problematic conclusion: if social media platforms shouldn’t censor any speech, then the workers at these firms must be compelled to build and maintain platforms for fascists. Arguably this is not a political position that any socialist should take; it is at odds with the position of the Alphabet Workers Union, which issued the following statement after the events of January 6, 2021:

We, the members of Alphabet Workers Union, part of Communication Workers of America Local 1400, are outraged by this attempted coup.

We know that social media has emboldened the fascist movement growing in the United States and we are particularly cognizant that YouTube, an Alphabet product, has played a key role in this growing threat, which has received an insufficient response by YouTube executives.

Workers at Alphabet have previously organized against the company’s continued refusal to take meaningful action to remove hate, harassment, discrimination, and radicalization from YouTube and other Alphabet-operated platforms, to no avail.

We warned our executives about this danger, only to be ignored or given token concessions, and the results have been suicides, mass murders, violence around the world, and now an attempted coup at the Capitol of the United States.

Once again, YouTube’s response…was lacklustre, demonstrating a continued policy of selective and insufficient enforcement of its guidelines against the use of the platform to spread hatred and extremism…

The battle against fascism will require constant vigilance on many fronts, and AWU stands in solidarity with all workers fighting for justice and liberation, in the workplace and the world. We must begin with our own company.

YouTube must no longer be a tool of fascist recruitment and oppression. Anything less is to countenance deadly violence from Gamergate to Charlottesville, from Christchurch to Washington, D.C., from Jair Bolsonaro to Donald Trump.[7]

Telegram or “Terror-gram”?

With YouTube, Facebook and Twitter not only tweaking their algorithms to reduce radicalisation but also deplatforming individuals and groups who were using those platforms to spread bigotry and misinformation, many of those individuals and groups – and their followers – have migrated to more niche platforms. Numerous platforms have emerged to cater to this audience. Donald Trump, after his ban from Twitter, backed one called Truth Social, while Miles Guo, a business associate of Steve Bannon, bankrolled Gettr, and Andrew Torba, a noted anti-Semitic conspiracy theorist and Christian nationalist, founded Gab.[8]

None of the above platforms have seen the growth that Telegram has. The encrypted messaging app has been popular for some time, in many countries more so than Facebook’s Messenger app or WhatsApp. The introduction of ‘channels’, which let a user communicate in a one-to-many style, sharing content with a channel’s followers, has made it a useful tool for those wanting to get a message out to an audience. Notably, Telegram does not use algorithms to promote content to users; in this way it has more in common with the bulletin board systems of the 1980s or Stormfront in the 1990s: you get to the content because you are explicitly looking for it.

Before Telegram became a haven for the far-right, it was the app of choice for ISIS terrorists. In 2015, Pavel Durov, one of the platform’s founders, responded to questions about this by stating, “I think that privacy, ultimately, and our right for privacy is more important than our fear of bad things happening, like terrorism.”[9] (A few weeks later, though, Telegram would remove 78 public channels promoting ISIS propaganda.)[10]

Telegram’s terms of service prohibit the promotion of violence, and while the platform has removed several dozen far-right channels for violating this provision,[11] the Anti-Defamation League has noted it is “extremely easy to find content that violates this agreement”, including the live-streamed video of the Christchurch shooting. Even if the prohibition on promoting violence were more widely enforced, many groups that stop short of promoting violence would remain, and these groups are not harmless just because they don’t directly advocate violence. Spreading misinformation, like the great replacement conspiracy theory that inspired the Christchurch terrorist, can contribute to violence even when violence is not directly called for.

In New Zealand, the anti-vaccine group Voices for Freedom (which is now pivoting to other conspiracy theories) has built a sizeable audience on Telegram since being deplatformed from Facebook, and recently encouraged its followers to stand in local body elections without revealing their affiliation with the group.

Counterspin Media, an online talk show that promotes disinformation about COVID-19 and a number of other topics, has also built an audience on Telegram. It was on their Telegram channel that links were shared to a ‘documentary’ claiming the Christchurch shooting was a hoax, which incorporated footage from the livestream. The hosts of Counterspin were later arrested on an objectionable publication charge.

If New Zealand were to ban Telegram, it’s likely that these groups would continue to reach an audience on other platforms. Voices for Freedom claims an email mailing list of 100,000, and Counterspin Media, which began on GTV, the (now bankrupt) platform owned by Miles Guo, has had a presence on Gettr since its inception. After losing their platform on GTV, they have continued on the video-sharing site Rumble and on banned.video, one of the sites in a network operated by American conspiracy theorist Alex Jones. John Gilmore’s words about the network routing around censorship remain true.

If something had been done earlier about the kind of algorithmic radicalisation that occurred on mainstream social media sites in the late 2010s, it’s possible we wouldn’t be in the situation we are in now when it comes to disinformation and bigotry online. But we’re at a point where banning a particular platform would not help, not to mention that many people still use Telegram for perfectly legitimate reasons, such as keeping in touch with friends and family in countries where it’s the dominant messaging app. The rise of the far-right is a social problem that does not have a quick-fix technical or legal solution.


[1] https://www.washingtonpost.com/archive/lifestyle/magazine/1985/07/14/the-electronic-fringe/17955294-9c94-4b5d-99e4-9af799b45eae/

[2] https://www.splcenter.org/hatewatch/2014/04/17/splc-report-nearly-100-murdered-stormfront-users

[3] Joshua Green, Devil’s Bargain: Steve Bannon, Donald Trump, and the Storming of the Presidency [e-book], Penguin Press, 2017.

[4] Jessie Daniels. 2018. ‘The Algorithmic Rise of the “Alt-Right”’, Contexts, 17(1), pp. 60–65. https://doi.org/10.1177/1536504218766547

[5] Ko tō tātou kāinga tēnei report: ‘Royal Commission of Inquiry into the terrorist attack on Christchurch masjidain on 15 March 2019’, December 2020, www.christchurchattack.royalcommission.nz

[6] www.christchurchcall.com

[7] https://twitter.com/alphabetworkers/status/1347331587315171330

[8] https://www.adl.org/resources/blog/andrew-torba-five-things-know-0

[9] https://techcrunch.com/2015/09/21/telegram-now-seeing-12bn-daily-messages-up-from-1m-in-february/

[10] https://www.telegraph.co.uk/technology/news/12004892/Encrypted-messaging-app-Telegram-shuts-down-Islamic-State-propaganda-channels.html

[11] https://techcrunch.com/2021/01/13/telegram-channels-banned-violent-threats-capitol/
