INSIGHT: How to see through the fake news shared online

By Malcolm Strachan

My phone pinged the other night with a message – a WhatsApp note being circulated that described a scandal involving a senior figure in the government. Except, after even a few moments of reading, the alleged scandal was clearly untrue.

With election season coming up, you can bet we will see more of this kind of thing than ever – so how can you sort the lies from the truth?

That message was one of several to land on my phone of late. There was another about an old court case involving a prominent funder of the PLP, dug up and circulated again as if it were new. There was an old video of a likely FNM candidate, likewise dusted off and sent around with no indication of the circumstances in which it was filmed or how the matter was resolved.

In the case of the claims about a senior government figure, no evidence was presented – simply a message claiming to come from someone with inside knowledge, with no specifics, nothing to prove what was said, and only a vague suggestion that something would happen in the coming weeks to make it all clear. If someone told you all that at the bar, you’d laugh them off and tell them they’d had one drink too many, but on WhatsApp these things take on a life of their own and keep on circulating.

Since the last election, we have also seen the arrival of artificial intelligence (AI), whose handiwork can be spotted with a careful eye but can convince at a glance.

Take an example – when US President Donald Trump met Ukrainian President Volodymyr Zelenskyy in the White House, the atmosphere was more than tense. As close to a full-on argument as you will see in such a diplomatic setting broke out, and the Ukrainian leader left the White House far sooner than expected.

Over on social media, however, someone took the video from the meeting and had AI turn it into a fist fight between the two. The video was shared widely – presumably by many people who thought it was funny – but more than the occasional person commented in ways that suggested they thought it was real.

We have seen AI used here too – principally in scams trying to convince people that a message urging them to part with their money really did come from Prime Minister Philip Davis.

There have been repeated warnings from officials about various instances of messages circulating, purporting to be from senior figures and trying to get the recipients to part with their cash.

Will such technology be used as we move into election season? Will there be AI-created videos trying to discredit candidates? Even if political parties rule out using such technology under any circumstances, that would still not stop mischievous individual actors on social media from trying to stir the pot.

Then there is the other risk – that genuine documents and videos are dismissed as having been created by AI when in fact they are real. How likely is that? Well, just recently we saw a sitting MP caught out by statements he made in an interview with The Tribune, standing up in Parliament to say he had not said the things reported in the newspaper. The Tribune released the audio.

That MP, Leroy Major, made no claim about AI – but he made his original denial in the House, which means he ought to return and correct the record after his misleading statements. If that kind of denial can happen when the evidence is so easy to produce, what kind of denials will there be when the material in circulation is harder to verify?

So how can you protect yourself in this age of misinformation? Well, UNICEF offers some advice – it says to get your news from a range of sources. If a story appears in only one outlet, treat it with some suspicion; if many outlets are reporting the same thing, there is a better chance it is not misinformation.

UNICEF also recommends being critical about the credibility of sources. If the news comes from social media, can you track it back to where it first came from? See what other news that source provides. If it seems sketchy, then be wary. This is a particular challenge with WhatsApp, where unsourced messages can be shared on and on and on.

Recognise the good sources – the real experts, as UNICEF puts it. Do they have expertise in the field?

So why do people want to believe misinformation? Again, UNICEF points to some of the reasons – including the emotional side. Sometimes people simply want to believe, or the report plays into their confirmation bias by telling them what they already think. Then there are those who believe because they want to be part of a group, and there are the conspiracy theorists out there. Is that an argument to have with people? Perhaps, perhaps not. As UNICEF notes, such conversations are about “helping someone build their critical thinking skills and [to] see things from a new angle”.

Then there are the AI-generated pieces of text that get circulated. These can be harder to spot – AI detectors can themselves be fooled, and the usual tips, such as looking for typos (which suggest a text is more likely to be human-written) or watching for the repetitive use of common words such as “the” and “it”, could be outdated as soon as the next generation of such tools comes along.

And frankly, it does not matter so much whether the text was AI-generated – people use AI for legitimate reasons anyway. What matters is the message, not the method.

More than ever, now is the time to pay attention to where you get your news – what is likely to be real, and what is likely to be fake, or at the very least embellished.

And consider one more thing – when you do see what appears to be fake news being circulated, ask yourself one question: who stands to gain from such allegations about an individual being shared around? The answer can be as informative as anything else.
