In my last couple of posts I have gone over the origins and development of fake news, and our present situation. Here, I will explore the potential future we face with fake news. I’ll briefly discuss how different generations process fake news, the growing threats from technology, and developments in how we combat fake news and the harm it causes. Is every catastrophe we face, whether local or global, going to fuel the spread of misinformation? Or can we find a way to limit its impact?
How we process information changes as we age
There is a stereotype that older people are more likely to fall for fake news stories, especially those found on social media. However, a study from the University of Florida during the early stages of the Covid-19 pandemic found that older and younger adults are equally likely to believe fake news stories. There are differences in how the generations consume news, however. Older adults generally spend more time online following news stories than younger ones, so they come into contact with more fake news, and in turn engage with it and share it more. The differences in critical thinking emerge only with the “oldest old”, those over 70. This is when cognitive decline does begin to affect whether someone gets fooled by fake news.
All this is relevant for a couple of reasons. Firstly, a large number of countries in North America, Europe and Asia, and of course in other parts of the world, have ageing populations. This means larger and larger percentages of their populations will reach the “oldest old” category, and will find it increasingly hard to differentiate between what’s real and what’s fake news. This leaves nations open to manipulation by malign forces, both internal and external.
Secondly, it’s the older generations who still tend to consume news through print and television. Whilst these sources can’t be trusted entirely, they at least face stronger regulation than social media. As these mediums die out and are replaced by the internet, already the primary news source for the majority of young people, fake news can only proliferate.
Future threats from Technology
The term ‘Deep Fake’ has been in circulation for a few years now. I class this as a future problem because the technology to perfect them is still developing. If you haven’t heard the term before, it refers to AI-generated videos showing events that didn’t happen, but are made to look realistic. Here is a good example from last year. Very few would doubt that it is a video of Morgan Freeman talking. But an artist generated the image, and the voice is an impersonator. One can even deep fake voices, if a talented impersonator isn’t available.
It’s not hard to imagine the damage technology like this could do. How can we discern whether news is fake when the person it’s about, or something that looks and sounds exactly like that person, is confirming it? We could end up in a state where we can’t believe anything we see online ever again. Many social media platforms, including Facebook, have banned deep fakes. But as deep fakes become more sophisticated, will they be detectable any more?
Is new AI the solution?
As I mentioned in my previous post, social media companies are incorporating AI technology into their platforms to help identify and fight fake news. But is AI the solution? Psychologically, the more often we see a piece of information, the more likely we are to believe it. Unfortunately, social media algorithms work to show you more of the topics and themes you have shown interest in. So, if you see fake news once, it’s easy to see it again and again. AI can break this cycle by flagging and removing posts containing that fake news, preventing belief through repeated exposure.
However, “making such distinctions requires prior political, cultural and social knowledge, or common sense, which natural language processing algorithms still lack.” AI may never become sophisticated enough to completely protect us from fake news. Over time, the AI may improve, but so will the methods fake news generators use to make their stories seem real. In their article for The Conversation, Sze-Fung Lee and Benjamin C. M. Fung outline the importance of ‘Human-AI partnerships’ when fighting fake news. Humans have the knowledge and critical skills to identify fake news where AI cannot. Perhaps the AI flags a particular topic that generates a lot of fake news, such as vaccines, and a human partner then uses their skills to double-check it. Fighting fake news cannot be left to the technology of social media alone.
Is there any hope?
The Social Decision-Making Laboratory at the University of Cambridge has explored another method for tackling fake news. They describe it as an ‘inoculation’. Using a video game, they let players simulate being a fake news mogul, competing to successfully generate fake news. Rather than encouraging the spread of fake news, the game allows people to understand the methods and tactics fake news creators employ, and so better discern what’s real and what’s not. Like a vaccine, it shows us a sample of what to fight, so that when we encounter it in the future we have a better chance of resisting it. As with vaccines, only time will tell whether the resistance we gain from the inoculation lasts long term.
And now we come to the method that may benefit us most of all, and what LifeBonder stands for. The best way to stay away from fake news, and avoid the cycle of repeated exposure to it, is to vastly reduce how much of our lives we spend on social media. LifeBonder’s goal is to help people make meaningful and lasting connections online, with the purpose of moving offline to continue those relationships. If the world can reverse its relentless social media addiction, then maybe we can resist the scourge of fake news in the future.