On 30 December, 2019, a strange thing happened in India's social media universe. Spiritual guru Jaggi Vasudev's Isha Foundation had published a poll on Twitter asking users if they believed protests against the Citizenship Amendment Act were justified. However, later in the day, it deleted the poll after 62 percent of users voted in support of the protests, an outcome that was presumably contrary to its expectations.
" Saikat Datta (email@example.com) (@saikatd) December 30, 2019
Similarly, Zee News editor-in-chief Sudhir Chaudhary, a vocal supporter of the Citizenship Amendment Act and of the Central government in general, put out a poll on 25 December asking people if they supported the legislation. This poll also produced an unexpected result for him, with 50.4 percent of users saying they did not support the Act. Chaudhary later alleged:
Have you seen poll rigging on Twitter? Check this out. This is called online booth capturing, unleashing trolls to vote against the CAA, to fix the narrative and hijack true public opinion. This is deplorable! pic.twitter.com/s2zaHPLapz
" Sudhir Chaudhary (@sudhirchaudhary) December 28, 2019
This was part of a general trend in recent weeks of anti-establishment narratives dominating Twitter trends, as opposed to earlier, when pro-government hashtags and trending terms often dominated the conversation. For example, soon after the BJP launched the online campaign #IndiaSupportsCAA with a video by Jaggi Vasudev, a counter-campaign #IndiaDoesNotSupportCAA gained traction and overtook the hashtag promoted by the right-wing brigade on the micro-blogging site.
One of the causes behind this change may be a sustained four-month campaign by a coder to get bot accounts that were part of co-ordinated political trends deleted, thereby creating a more level playing field. Considering the sensitive nature of this piece, the coder requested anonymity. Let's call him Rajesh.
As part of the campaign, Rajesh says he succeeded in getting at least 1.6 lakh bot accounts deleted, accounts that would otherwise have distorted trending topics on Twitter. To do this, he had to create bots himself to mass-report accounts which appeared to be linked to political parties.
A study titled A social media report for bot detection, published by him on 30 December, has thrown up some startling findings.
The fact that political parties have formal as well as informal IT cells has been widely documented in the past. These IT cells have often promoted fake news and hate speech, aided in no small measure by automated behaviour. For example, on 1 July 2019, there were reports of vandalism at a temple in old Delhi after a parking dispute. However, the next day, outrage exploded over the incident on Twitter, and a provocative hashtag #TempleTerrorAttack became the top trend on the website. A factually incorrect hashtag #TempleDestroyed also began trending. An analysis by Huffpost showed that these trends were driven by accounts with barely any followers who posted hundreds of tweets, suggesting "co-ordinated inauthentic behaviour."
However, it is not just hate speech that is a cause for worry, but also political propaganda disseminated through arguably unethical means. An example of this was in February 2019, when hashtags supporting and opposing Prime Minister Narendra Modi began trending on Twitter when he visited Tamil Nadu ahead of the Lok Sabha elections. According to a report by a US-based think tank, the Atlantic Council's Digital Forensic Research Lab (DFRLab), the posts driving both Twitter campaigns were heavily manipulated. It said that the pro-Modi hashtag was far more heavily manipulated than the anti-Modi one.
In an exclusive interview with Firstpost, Rajesh described the role of such bots in influencing the narrative on social media, and how he succeeded in getting many such accounts deleted. Here are the edited excerpts from the interview:
What was the motive behind this campaign?
An IT cell, whether of the BJP, Congress or any other party, is essentially a group of people who create content to favour a particular party. Much of this content is fake or misleading. However, creating such content is only one part of the strategy; the other part is using technology to propagate that content. (For the IT cells), there is a need to ensure that posts have different content, and that they are spaced out so that fact-checkers do not figure out that a large number of people are posting the same thing.
The sheer volume of content that is put out makes it clear that this is not merely a human activity, and there is a lot of technological intervention involved. So, I became interested in the accounts that were not run by actual human beings.
Essentially, if a group of human users are up against a group of bots, there is an imbalance of power. The starting point of the race is not the same. I sought to understand how compromised we are in terms of access to social media due to this artificial intelligence.
What methodology did you adopt to tackle bots and the effect that they have on Twitter trends?
First, I parsed through influential trends and used the Twitter API (application programming interface) to identify accounts that were likely to be bots (accounts which were not verified, had no profile picture, had random objects as profile pictures, or did not have accounts on other social media platforms).
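The profile signals Rajesh describes can be approximated with a simple scoring heuristic. The sketch below is illustrative only, not his actual code: the field names mirror the Twitter API v1.1 user object, while the weights, the threshold, and the `is_likely_bot` function are assumptions made for the example.

```python
# Illustrative bot-likeness scoring based on the profile signals
# described in the interview. Field names follow the Twitter API v1.1
# user object; the weights and threshold are assumed values.

def bot_score(user: dict) -> int:
    """Return a crude bot-likeness score for a Twitter user object."""
    score = 0
    if not user.get("verified", False):
        score += 1  # unverified account
    if user.get("default_profile_image", False):
        score += 2  # no profile picture at all
    if user.get("followers_count", 0) < 10:
        score += 2  # barely any followers
    if user.get("statuses_count", 0) > 60_000:
        score += 2  # extremely high tweet volume
    return score

def is_likely_bot(user: dict, threshold: int = 4) -> bool:
    """Flag accounts whose score crosses an assumed threshold."""
    return bot_score(user) >= threshold

# Example: an unverified, pictureless, hyperactive account with
# almost no followers scores high under these assumed weights.
suspect = {
    "verified": False,
    "default_profile_image": True,
    "followers_count": 3,
    "statuses_count": 72_000,
}
print(is_likely_bot(suspect))  # True
```

In practice, the profile data for such a function would come from the API's user-lookup endpoints, and a real classifier would weigh many more signals (account age, posting cadence, follower/following ratio) than this toy example does.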
In order to get bot accounts deleted, there was a need to report them on a large scale. I did not have the human resources to do that. So, I created bots for the sole purpose of reporting bots that were pushing out political content, and I succeeded in getting about 1.6 lakh such accounts deleted. Later, in compliance with Twitter policy, I deleted the bots that I had created.
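Twitter's v1.1 REST API does expose a spam-reporting endpoint (POST users/report_spam), and it is rate-limited, so any reporting effort at this scale has to be spread across time windows. The sketch below shows one way the scheduling side of such a pipeline could look; the per-window limit and the `report` callback are placeholders for illustration, not details from Rajesh's setup.

```python
# Hedged sketch: splitting suspect account IDs into batches that fit
# an assumed rate limit of `per_window` report calls per 15-minute
# window (the real limit depends on the API access tier).

from typing import Callable, Iterable

def batched_report(
    account_ids: Iterable[int],
    report: Callable[[int], None],
    per_window: int = 15,
) -> list[list[int]]:
    """Report each account and return the rate-limit-sized windows.

    Returning the windows makes the scheduling logic easy to
    inspect and test without touching the network.
    """
    windows: list[list[int]] = []
    current: list[int] = []
    for account_id in account_ids:
        current.append(account_id)
        report(account_id)  # e.g. a wrapper around POST users/report_spam
        if len(current) == per_window:
            windows.append(current)
            current = []
            # A real run would sleep here until the next
            # 15-minute window opens: time.sleep(15 * 60)
    if current:
        windows.append(current)
    return windows

# Example with a no-op reporter that just records what was "reported":
reported = []
windows = batched_report(range(40), reported.append, per_window=15)
print(len(windows))  # 3 windows: 15 + 15 + 10 accounts
```

The point of the batching is the one Rajesh implies: a single human cannot file 1.6 lakh reports by hand, but an automated reporter that respects the platform's rate limits can grind through them over weeks.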
My analysis has found that once such (bogus) accounts are reported, there is less than 1 percent chance of them raising a dispute. This is because they do not have verifiable personal information through which they can show themselves as accounts run by humans.
I started this process in September 2019, and on 30 December, 2019, I released a report laying down my findings. However, the process is an ongoing one. After I saw that a certain narrative was repeatedly being defeated on Twitter - in polls or otherwise - I realised that this strategy may be having an effect. Now, I am seeing that other people are reporting such accounts.
You mentioned an analysis that you conducted on this issue. What did it reveal about the behaviour of these bots?
During my analysis, I broadly found three kinds of activities that bot accounts engage in. The first kind is abusive tweets aimed at accounts that can be termed "influencers", that is, those who have a significant number of followers.
The second kind are posts that distort or hijack Twitter trends. This is important as many people get their news by looking at social media trends.
The third kind constitutes the biggest problem but is largely ignored: tweets that are posted in high volumes regularly even when nothing is happening, which creates a content overload on the platform. This interferes with the mechanism of searching for content on the platform, which is very important to win the battle of narratives.
One thing that came as a surprise was that 89 percent of the bot accounts that I identified were created during one very specific time period: June and July 2013. I had earlier believed that the bots were being created as part of a continuous activity, but I found that this was not the case. One possible reason for this is that purchasing bots is a monetary transaction, and if it happens continuously, the chances of an investigative journalist spotting it are higher.
Further, as many as 18 percent of these accounts had tweeted over 60,000 times, at a rate of one tweet every eight minutes. I cannot say whether the majority of these accounts belonged to the IT cell of the BJP or any other party. However, in the recent past, most of them had tweeted content in support of the Citizenship Amendment Act, Hindutva, etc. Unfortunately, Opposition parties like the Congress and Aam Aadmi Party did not take up the strategy of reporting bots. Instead, they created their own bots, which also contributed to the content overload.
Are you thinking of doing any similar work on other social media platforms?
At present, my work is restricted to Twitter. However, I am contemplating doing something similar on WhatsApp and YouTube, since these are also platforms through which a lot of misinformation is spread.