When the threat of disinformation can be worse than disinformation itself

This article originally appeared in Dutch for SamPol Magazine

According to the World Economic Forum, “AI-generated disinformation” is the biggest threat for 2024. For its Global Risks Report, experts and policymakers are asked each year to look into their crystal ball and predict the state of the world. In the 2023 report, disinformation was nowhere to be seen in the top ten, but this year it even surpassed “extreme weather events,” which had (appropriately) hovered in the top three for a decade.

This spectacular rise of disinformation as a threat is curious, yet not unexpected. Some 4 billion people will vote in democratic elections in 2024: citizens of the U.S., India, Indonesia, Mexico, all of Europe and, of course, my country Belgium, among others, will be casting their votes. And this after a year of dizzying AI developments in which ever more realistic images and texts are artificially generated, in an era in which, according to the Digital News Report, news avoidance has reached a global record. It seems like a perfect storm.

The fear of disinformation focuses mostly on elections because that is the moment when an opinion gains real power in the form of a vote. Influencing that opinion with lies, rumors and half-truths could lead to drastic shifts in power. However, it is difficult to measure what impact disinformation actually has on such an opinion shift; a political preference does not flip overnight because of a lie. Time and again, researchers find few significant correlations between fake news and election results, only that the convinced become even more convinced.

So we should take the threat that disinformation will cause power shifts with a grain of salt, and certainly the AI factor. Some politicians share obvious satire or poorly photoshopped disinfo over and over again, and it passes almost without comment among their constituents. What does that say about where the problem really lies?

The threat of disinformation can also have unintended consequences. Indeed, several studies show that widespread fear of disinformation leads to increased distrust of established media and even a reduced ability to identify disinfo. This is the “backfire effect” long warned about when media literacy is built solely on increasing skepticism, rather than on regaining trust in journalism and science.

Moreover, the threat of disinformation can provoke a reaction that becomes more dangerous than the actual effects of disinformation. Here, too, scholars found that greater fear of disinformation leads to greater support for undemocratic interventions. Several countries have created new legislation in recent years to call a halt to disinformation. But “fake news” is often not clearly defined. As a result, these laws can also be abused to silence the independent press. According to the Committee to Protect Journalists, some 39 reporters worldwide have even been imprisoned under this type of legislation.

In Europe, the Digital Services Act (DSA) entered into force on Feb. 17. This vital legislation imposes much-needed transparency obligations on VLOPs (euro-lingo for Very Large Online Platforms) and gives users more opportunities to appeal moderation decisions made by the platforms. However, the legislation can also become a blunt instrument when platforms are required to engage in “risk mitigation”.

This was made clear recently when Thierry Breton invoked DSA obligations around harmful content as hate speech and disinformation spread on various social media platforms following Hamas’ Oct. 7 attacks on Israel. Research by NGOs such as Human Rights Watch and the Arab Center for the Advancement of Social Media found that platforms like Meta systematically silenced voices advocating for human rights in Gaza during that period; they were lumped into the same category of “Hamas glorification”. Although the DSA contains various provisions and safeguards against unilateral and political interventions, a threat of legal obligations wielded by zealous bureaucrats can lead platforms to hastily remove content.

Disinformation can certainly pollute debates, protests and legitimate criticism; it is the Achilles’ heel of liberal democracies that value free speech. That openness is eagerly exploited by those who care nothing for keeping the public space open. But restrict freedom of speech too tightly, and you also stifle those who raise social injustices. The biggest mistake that can be made is to suppress legitimate criticism as a “precaution”.

There are unpleasant conclusions to be drawn from the WEF forecast, in which legitimate concerns such as the “cost of living” and the “housing crisis” drop in the rankings, displaced by the threat of disinformation. If we really want to protect elections against undemocratic actors, it does not help to attribute social unrest to an allegedly deceived public. That kind of thinking leads to simple solutions that can make complex problems worse.

Image by jakob5200 from Pixabay