WORDS LEAD TO ACTIONS, TIME TO REGULATE DANGEROUS IDEAS

The internet can be a very terrifying place – maybe not as terrifying as the YouTube “dark web” videos that keep you up at night with the fear of assassination, snuff films and child pornography – but as of late, it’s getting close.

Anyone who has paid attention to the news in the last several weeks – amid everything else that surrounds news media, like political scandals and foreign affairs – will have seen coverage of the Christchurch mosque shootings in New Zealand. Everything surrounding this massacre terrifies me, not just as a Muslim, but as a human being.

A big topic, not just in this situation but in internet-based media generally, is censorship.

[Graphic: a carrier pigeon carrying messages. Graphic by Angeles Ramirez / the Advocate]

The Christchurch shooter live-streamed the attack on Facebook with a camera mounted on his head. Viewed on a screen, it looks fake, like a video game, and that is a problem. On one side of the problem, we have graphic, disturbing media, such as the shooting itself, that is easily accessible and easily desensitizing. On the other, we have things like tweets from controversial figures that feed into ideologies such as the infamous “alt-right.” In the case of the Christchurch shooting, the manifesto left by the shooter leads us to believe that he went down the alt-right rabbit hole to the point of radicalization.

One side of the problem – the tweets and videos from publications – doesn’t necessarily cause the other problem, radicalization, but the mosque shootings show us that it did in this case. What happened after the shootings was a mixed bag: social media companies re-evaluated their policies to tackle things like snuff films and graphic video being streamed or uploaded, and banned certain figures from their platforms altogether.

Many notable people caught up in the bans call them censorship and bias, since most of those banned were outspoken about their right-wing views, but to me that points to a bigger problem with how explicit, controversial views like the alt-right’s spread across the internet.

As the Christchurch shooting and the Charlottesville incident (the white supremacist rally in August 2017) show us, these ideas, while protected in the context of free speech, are dangerous.

A corporation like Facebook or Twitter isn’t the government, and it is free to ban whomever it wants. I understand the fear that such action is a slippery slope to broader censorship, but given the content I see on Twitter and Facebook, such as fake news articles, there needs to be some regulation of ideas that are dangerous.

Writing off alt-right rhetoric as “unpopular opinions” normalizes internet-age tribalism and puts platforms such as Twitter in an awkward position when they have to regulate media sources that contributed to the deaths of 51 people.

Until we stop describing the neo-fascist views we see online as simply another contender in the “marketplace of ideas,” we will continue to see social media platforms digging themselves into a hole so deep that nobody, including their users, can get out.
