How fraudsters use 'fake news' and social media to misinform
Junk news is far more likely to go viral and the bad guys know it.
Today we’re involved in an information war that some call an ‘infodemic,’ waged on an unprecedented scale and fueled by social media channels that can spread misinformation at warp speed.
While social media is relatively new, the use of “fake news” to misinform is centuries old. Dial back to the late 6th century B.C., when, in his seminal book, “The Art of War,” Sun Tzu wrote, “All warfare is based on deception.”
“Fake news” has become a politicized term. The media claim they’re being served fake news; the public increasingly claims the media themselves are complicit in delivering it.
The problem is that discerning fact from fiction is increasingly challenging. The struggle is so real, and people now question truth so much, that in 2016 Oxford Languages selected “post-truth” as its word of the year.
The allure of junk news
The coronavirus pandemic has become a scintillating case in point. Everyday consumers are bombarded with conflicting information about COVID-19. Junk health news sources have become prevalent and, although not as credible as mainstream media (with full-time fact-checkers on staff), junk health sources’ appeal to emotion, shock and confirmation bias makes for compelling share-worthy content.
In a two-week analysis of engagement trends across all media sources, a study by the University of Oxford found that while junk health news gets less reach, it earns far higher engagement: about 100 engagements per article, compared to 25 for mainstream sources. Truth crawls; rumors fly. And, on social media, fake news travels about six times faster than the truth.
Junk news is far more likely to “go viral,” and the bad guys know it. They’re adept at using junk news to misinform further, capitalizing on consumers’ high levels of social anxiety, outrage and curiosity, along with their incessant need to share sensational information, whether accurate or not.
Even when presented with information that refutes the accuracy of sensational reports, people will persist in sharing them via their social channels.
How the “bad guys” use social engineering
Who are the bad guys? They range from individuals with personal motivations to “hacktivists” and governments with well-coordinated strategies to misinform.
Some countries use social media to control the flow of information, and Facebook is one highly prevalent example. Its significant reach — 1.73 billion daily active users, according to Statista — and massive content engine (about 2.5 billion pieces of content disseminated daily) make it a go-to target for a wide range of bad actors. The University of Oxford reported that Facebook is the most common platform used by governments, fringe groups and political parties to spread disinformation.
Identifying misinformation
Misinformation can take a variety of forms — some seemingly innocent and not intended to mislead, others designed with malicious intent.
Fabricated content — wholly false information. Pizzagate, for instance, a now-debunked claim spread by Russian operatives that Hillary Clinton ran a child sex ring out of a pizza restaurant, led a fully armed man to attack the restaurant.
Manipulated content that distorts actual information. For example, a headline that is rewritten to make it more likely to earn a click (commonly known as clickbait), but with actual content that doesn’t follow through on the promise of the headline.
Imposter content presented under the guise of a reputable source, mimicking that source’s branding or look to give the appearance of legitimacy.
False context — content presented out of context and intentionally meant to misinform. For instance, presenting numbers of COVID-19 cases, hospitalizations or deaths in countries around the world (or states around the country) without the context that the overall population numbers would provide.
It’s important to look at the source of everything you receive and consume before choosing to believe it and share it with others.
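For readers who want to see what a basic source check looks like in practice, below is a minimal, hypothetical sketch in Python. It flags imposter-style links whose actual domain doesn’t match the brand they claim to come from. The brand-to-domain allowlist and the function names are illustrative assumptions for this example only; real tools rely on far richer signals such as public-suffix lists, certificate data and domain-reputation feeds.

```python
from urllib.parse import urlparse

# Illustrative allowlist: these mappings are assumptions for the example,
# not an exhaustive or authoritative list.
KNOWN_BRAND_DOMAINS = {
    "facebook": {"facebook.com", "fb.com"},
    "cdc": {"cdc.gov"},
    "who": {"who.int"},
}

def registered_domain(url: str) -> str:
    """Naively take the last two labels of the hostname (e.g. 'cdc.gov').
    Real code would consult a public-suffix list instead."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def looks_like_imposter(claimed_brand: str, url: str) -> bool:
    """Return True when a link claims a known brand but resolves to a
    different registered domain, a hallmark of imposter content."""
    expected = KNOWN_BRAND_DOMAINS.get(claimed_brand.lower())
    if not expected:
        return False  # brand not in our allowlist; nothing to compare against
    return registered_domain(url) not in expected

# A lookalike domain pretending to be the CDC is flagged; the real site is not.
print(looks_like_imposter("CDC", "https://cdc-gov.coronavirus-alerts.example.com/update"))  # True
print(looks_like_imposter("CDC", "https://www.cdc.gov/coronavirus/2019-ncov/index.html"))   # False
```

The same check, asking whether the visible brand matches the underlying source, is exactly what people can learn to do by eye before they believe or share a link.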
For organizations, training employees to identify misinformation — not only to avoid embarrassment but also to fend off malicious phishing attacks and other potential cyber breaches — is critical. Employees need regular, in-depth cybersecurity education to keep the issue top of mind.
Perry Carpenter is the author of “Transformational Security Awareness: What Neuroscientists, Storytellers, and Marketers Can Teach Us About Driving Secure Behaviors” (Wiley, 2019). He is chief evangelist for KnowBe4, developer of security awareness training and simulated phishing platforms with over 30,000 customers and 2 million users.