Since the furor surrounding Russia’s alleged use of Twitter bots to influence the 2016 presidential election in the United States, social media bots have been most commonly associated with carefully planned, long-term campaigns. However, we have observed a shift: automated bots are increasingly established to react opportunistically to events or individuals in very short, targeted campaigns. Advances in artificial intelligence will likely make it easier to create believable, throwaway bot networks with less investment and faster effect.

We recently worked on a fascinating Request for Information (RFI) from a client. Without disclosing too much, the organization suspected one of its employees had been targeted by Twitter bots. Our research suggested the client’s suspicions were correct: bots had been automatically spamming the employee’s Twitter page. Case closed and on to the next RFI.

However, as the dust settled from the task, we began to think that this case reflected a change in the way bots are used to spread disinformation. Bots and their many variants have been around for years and are used by a range of actors in many different ways: ISIS “ghost tweeting” messages to give the appearance of a wider worldwide following, fake Chinese social media posts on Weibo drowning out bad news and politically sensitive issues, or celebrities using fake followers to inflate their online influence. This particular case was interesting for two reasons:

  1. The focused targeting of an individual outside of a significant geopolitical event (albeit with crudely executed content)
  2. The short-term nature of the bots’ activity, which began in response to a specific event and ended once the campaign’s ostensible goal was achieved

From the Masses to the Individual

Mass targeted disinformation is a well-known phenomenon, given press coverage of the growing number of “troll farms” springing up globally. Since a troll farm is staffed by humans, the farm’s masters can target individual users and engage them in complex and intelligent dialog that appears authentic in its spontaneity. The Holy Grail for this type of malicious actor would be a bot that could engage millions of users with the authenticity of a human troll.

In the case of nation states, campaigns may be part of long-term projects to influence other countries’ public discourse, such as the bots used to influence British politics during the 2016 EU referendum and the subsequent general election in 2017. This case was different. The bot campaign we were investigating appeared to have been established soon after particular actions by the targeted individual and disbanded immediately after the bots achieved their purpose. The “pop-up” nature of this bot campaign has been reflected in recent media stories: a widespread story about a Muslim woman walking past and ignoring injured victims of the March 2017 terror attack in Westminster has been attributed to a “fake news” bot campaign, and bots were observed attempting to influence the discourse about gun control laws following the February 2018 school shooting in Parkland, Florida. This suggests actors are establishing bot networks to provide an immediate, opportunistic reaction to events.

Where Is This Trend Going?

Technically, the key factor to watch is the development of artificial intelligence (AI), specifically with regard to the Turing Test (a test of a computer’s ability to convince a human user that they are conversing with another human rather than a machine). Given the textual, non-real-time medium of many social media platforms, computers have a distinct advantage in this area, and as early as 2014 some researchers claimed to have programs that could pass the Turing Test (Google “Eugene Goostman”).

With this level of authenticity, mass targeted disinformation campaigns become a realistic possibility for the disinformation peddler. Authors such as Keir Giles (see: Handbook of Russian Information Warfare) have expanded on these ideas, proposing scenarios in which bots conduct mass targeted disinformation campaigns on the eve of a large-scale NATO troop mobilization. Such advances in AI also play into the hands of malicious actors creating bots for short-term purposes, because they make it possible to set up believable bots swiftly, without spending months teaching them what to say on a particular topic.

These ideas are not only interesting but important, given the influence that social media-driven news and propaganda currently have across the globe. This applies to nation states at election times, but it is also relevant to businesses. You can read more about disinformation campaigns affecting organizations (as well as how to combat them) in a recent research paper of ours, “The Business of Disinformation.”
