A report has surfaced revealing that the spread of disinformation and fake news played a part in the elections of at least 18 countries over the past year alone. This came just after the European Commission decided to open a public consultation on the issue to gather advice and opinions on what it can do to get a handle on the spread of misinformation. One thing is for sure: the problem seems out of control, and neither major tech companies nor lawmakers appear certain how to tackle it. But can either side really fix it?
The internet's big players are trying to be more transparent about who advertises on their platforms, and are putting more policies in place to regulate both content and ads on their sites. The response of the likes of Facebook, Google, and Twitter – to the revelations of Russia's influence on their sites during the 2016 US presidential election – has been encouraging, and all three are actively trying to put an end to the spread of misinformation.
Now, it has just come out that more than 400 fake Russian accounts peddled misinformation on Twitter with the aim of whipping up discord surrounding Brexit and promoting anti-Islamic sentiment. This came shortly after Facebook revealed that Russia may also have tried to influence Brexit via its social network.
So, the full extent of Russia’s meddling in Western politics is still far from being revealed, and it’s clear, at least in the eyes of some, that major tech companies are not doing quite enough yet.
Stop that fakery at once
The tech giants' unsatisfactory progress, and their slowness in identifying the Russia problem, have drawn criticism from US and European lawmakers, governments, and organisations alike.
The aforementioned tech trio have been in congressional meetings with US lawmakers, where all three admitted to their mistakes but gave assurances they were working hard to deal with the problem. Meanwhile, the EU has decided it's time to look at bringing in some form of law to tackle the issue head-on, and would like everybody's input on creating it.
So, as you can see, those with a vested interest in stopping the dissemination of fake news are all trying desperately to figure this out. The problem is: it's not that straightforward, and they all have to be careful not to stifle freedom of speech. On top of that, isn't some of it merely propaganda and – whether true or not – just part and parcel of anything and everything political?
Careless talk costs lives, doesn't it?
Propaganda was being spread long before any of us were alive. Most people have seen the posters and adverts distributed during the World Wars, many of which spread wholly incorrect information from both sides of the battlefield in order to incite hatred. And a lot of the content that gets floated around on the internet nowadays is just the same. For instance, the fake accounts on Facebook were spreading ads and content that amplified divisive social messages – touching on issues such as race, the LGBTQ community, immigration, and gun rights – rather than referencing anything particularly linked to last year's US election.
Though we may not all agree with some of the messages being spread, for the most part it sure sounds like propaganda to me – and, oddly enough, it's something that certain traditional media companies aren't afraid to take part in either. But, hey, it's okay for some people not to let the facts get in the way of a good story, right?
Because of the above, I don't think tech companies, governments, or lawmakers could ever get a handle on the vast majority of content being spread without infringing on freedom of speech laws. Of course, this relates to content sent as propaganda, which one could argue is simply the opinion of one person or group about another. At the end of the day, you can't stop people having opinions until those opinions turn into hate speech – and, even then, there's a fine line between the two, and you have to be careful about punishing one side more harshly than the other. For context, I'm referring to the far right and far left, which are more or less the same thing (if you're a believer in the political horseshoe) but not always punished to the same extent.
Now, when accounts and bodies from abroad are directly getting involved in influencing an election, it’s a completely different story and a lot simpler to give a definitive answer on how it should be dealt with. Any form or sign of this involvement must be dealt with either in law or by the platforms where this direct meddling is being attempted.
Teach me the right way
All in all, I don't believe the spread of misinformation can ever be completely stopped. There's always going to be somebody with their fingers in the pies of others. The spread of fake news can be limited, yes, but the best course of action is not to try overly hard to eradicate it completely. Instead, it's up to tech companies and governments to educate people on how to identify fake news and really teach people to start thinking for themselves – though I'm not entirely sure how that second part would ever be in the best interest of a government.
A lot of emphasis has been placed on the political implications of Russia’s involvement in the US presidential election and now Brexit. But, at the end of the day, I don’t believe Russia actually had very much influence on either result. You know Donald Trump still lost the popular vote by nearly 3m, and Brexiteers were promised £350m for the NHS, right?
The real problem with the spread of divisive messages from places like Russia is that it continues to destabilise harmony and incite hatred between groups in the countries it targets. We need to stop caring so much about how it affects our poor little political results and pay more attention to how it affects the groups targeted – then the problem may be solved.
You don't fight the influence of divisive messaging and hatred with politics, laws, or policies; you fight it with education, discourse, and love.