The weaponisation of information

This is one win India isn’t proud of. In July, our country earned the dubious distinction of being the top source of misinformation spread during the Covid-19 pandemic. Think about it. Your social media feeds, YouTube videos, and WhatsApp forwards bombard you with information every day. But it is becoming increasingly difficult to separate the wheat from the chaff, the authentic from the fake.

It’s a struggle the government and regulatory bodies are yet to win. Many countries have tried to fix the issue with laws, arrests and crackdowns, but most have had limited to no success. In India, the government is contemplating adding misinformation to the “Digital India Act,” which is set to replace the Information Technology Act. A draft is expected early next year, and reports suggest “fact-checking portals” will be dealt with as a separate category.

Let’s take a step back

Misinformation, as a concept, is nothing new. It has existed as part of the media for centuries and has played a role in furthering propaganda during wars, elections, and social movements.

The added dimension now is digital and social media. Consider, for example, the amount of misinformation spread during the ongoing Russia-Ukraine war. From misidentified soldiers to fake news of a Ukrainian surrender, a host of misinformation campaigns has been run during this war. Lotem Finkelstein, head of threat intelligence at Check Point Software, put it aptly: “For the first time in history, anyone can join a war.” Much of this war-related misinformation has spread through TikTok, Twitter, Facebook, YouTube, and Instagram, with some calling it World Cyberwar I.

It isn’t just fake news articles anymore. One is subjected to a barrage of video, audio, and text. Deepfakes, synthetic videos that look convincingly real, have become a big part of new-age disinformation campaigns. With the proliferation of social media and thousands of new upstarts, content is everywhere, and it is difficult to tell the real from the made-up.

However, spreading misinformation or disinformation online is no longer a manual process. Much of it exploits the way platforms are designed, using their recommendation algorithms to amplify harmful content. It is amplified further by targeting people who already hold certain beliefs. As a result, platforms are struggling to work out how their existing policies apply to these new forms of misinformation.

Breaking the cycle

A recent study found “that the internet may act as an ‘echo chamber’ for extremist beliefs; in other words, the internet may provide a greater opportunity than offline interactions to confirm existing beliefs”. This has led to an increase in radicalisation globally. 

There is a reason you keep seeing content related to things you like. Social media platforms have algorithms that learn from you. If you watch or “like” a dance or cooking video, for instance, the algorithm will register that and offer you more content catering to your preferences. This is the reason behind the rise of platforms like TikTok.

If it were just dance videos, it wouldn’t matter much. But people also watch political content or get introduced to political ideas through content that interests them. Once the algorithm figures out the kind of content you like, it will show you similar stuff. The more you watch, the more it learns. The more you see, the more time you spend on the app. This creates an echo chamber, leaving little room for alternative or contrary points of view. The toy sketch below illustrates this feedback loop.
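To make that feedback loop concrete, here is a minimal toy simulation. It is not any real platform’s algorithm; the topic names, weights, and update rule are all invented for illustration. Each “like” boosts the model’s estimate of your interest in a topic, which in turn makes that topic more likely to be recommended again, narrowing the feed over time.

```python
# Toy sketch of an engagement-driven recommender (illustrative only; real
# platform algorithms are far more complex and are not public).
import random

topics = ["dance", "cooking", "politics_A", "politics_B"]
interest = {t: 1.0 for t in topics}  # the platform's model of one user's tastes

def recommend():
    # Pick a topic in proportion to the user's estimated interest.
    return random.choices(topics, weights=[interest[t] for t in topics])[0]

def watch(topic, liked):
    # Each "like" boosts that topic's weight, so it gets recommended more often.
    if liked:
        interest[topic] *= 1.5

# Simulate a user who only ever "likes" one political viewpoint.
for _ in range(200):
    shown = recommend()
    watch(shown, liked=(shown == "politics_A"))

share = interest["politics_A"] / sum(interest.values())
print(f"Share of feed weight now devoted to politics_A: {share:.0%}")
```

Run it a few times and the same pattern emerges: a handful of early “likes” compounds until one topic dominates the feed, which is the echo-chamber effect described above.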

A breakdown in trust in mainstream media is also an oft-cited reason for people to believe what they see and hear on social media platforms, further amplifying extreme views. The spread of misinformation through closed platforms like WhatsApp, Signal, or more recently, Discord is also a challenge that is being tackled at some level by fact-checking organisations.

The world is only getting more online. As consumers of news, it is important to build a culture of objectivity and ask the right questions. If a piece of news comes to you on WhatsApp, instead of accepting it as gospel, check it against other news sources.

Apart from being sceptical of news sources, there are a few things you can do to avoid falling prey to disinformation campaigns:

  1. If certain content looks outlandish and evokes a strong reaction, making you want to amplify it, it’s time to do a quick search for primary sources before hitting the share button. 
  2. Satire often gets amplified as news, especially on chat apps. So checking the original source of the text through a quick online search is another way to confirm its authenticity.
  3. Fact-checking organisations like Snopes, BOOM, and FactCheck.org provide tools to fact-check content. Some even have WhatsApp helpline numbers where you can share a forwarded message and get a near-instant fact check.
  4. Read beyond the headline, because headlines can often be misleading.
  5. If someone forwards you an article, make it a habit to check the date. Misinformation campaigns often amplify old news to create chaos in a new context. If there is no date, that’s another red flag.

Amplifying a fake message makes us part of the chain of disinformation that can have catastrophic consequences. So getting to the source of a piece of content is an effort worth making. 

Our internal biases will always tempt us to believe information that confirms what we already think. This new digital world will require a significant amount of rewiring in how we approach what we see online. It is easy to surrender to our biases, but for the sake of the future, let’s not pick the easy way out.

Author

  • K K Mookhey

    K. K. Mookhey (CISA, CISSP) is the Founder & CEO of Network Intelligence (www.niiconsulting.com) as well as the Founder of The Institute of Information Security (www.iisecurity.in). He is an internationally well-regarded expert in the field of cybersecurity and privacy. He has published numerous articles, co-authored two books, and presented at Blackhat USA, OWASP Asia, ISACA, Interop, Nullcon and others.

