How To Win [Fake] Friends & Influence People with Political Botnets

Touching the Hand of Bot

Epolitics.com contributor Deepak Puri has held executive positions at Oracle, Netscape and VMware, and is the founder of SkilledAnalysts.com, an IoT consulting firm, and co-founder of Democracy Labs, a non-profit hub in San Francisco that connects technical volunteers with progressive causes. Considering the role of botnets in spreading “fake news” in the 2016 elections, we can’t ignore the dark side of this technology.

Dale Carnegie would be horrified. The concept of winning friends and influencing people has been perverted for online political manipulation…by robots.

How do botnets work? How are they used for mass deception? How do you recognize when you might be interacting with a political bot?

Background

Social media has evolved from personal communication to impersonal communication. Chatbots can be programmed to post and tweet automatically. They're versatile, their interactions appear human, and they even get smarter as they learn from interactions with people. In reality, they're programs that interact through a chat interface and respond based on preset rules and artificial intelligence. Chatbots are now responsible for much of the traffic on Facebook Messenger and Twitter. Botlist features hundreds of bots, for everything from checking stock prices to online dating.

Facebook reports that its Messenger and WhatsApp now process 60 billion messages per day, more than three times the volume of SMS (sending a message via Facebook Messenger also costs a fraction of what an SMS does).


Bots & Botnets

Most social media users interact with information through a web interface. Bots, however, interact through an application programming interface (API), which enables them to analyze posts and respond in real-time. Botnets are networks of bots. A botnet may consist of hundreds of accounts, all controlled by a single user. Social botnets are interconnected and usually programmed to follow and re-message each other.
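As a toy illustration of that structure, the sketch below simulates a botnet entirely in memory: one controller object drives hundreds of accounts that follow one another and post in unison. No real platform API is used, and all names and numbers are invented.

```python
import random

class BotAccount:
    """One fake account in the botnet."""
    def __init__(self, handle):
        self.handle = handle
        self.following = set()   # handles this bot follows
        self.timeline = []       # messages this bot has posted

    def post(self, text):
        self.timeline.append(text)

class BotnetController:
    """The single operator ("puppet master") steering every account."""
    def __init__(self, n_bots):
        self.bots = [BotAccount(f"user{random.randint(1000, 9999)}_{i}")
                     for i in range(n_bots)]
        # Interconnect the network: each bot follows a few of its peers.
        for bot in self.bots:
            peers = random.sample([b for b in self.bots if b is not bot], 3)
            bot.following.update(p.handle for p in peers)

    def broadcast(self, message):
        # Every account posts the same talking point at once.
        for bot in self.bots:
            bot.post(message)

controller = BotnetController(n_bots=200)
controller.broadcast("Candidate X is the only honest choice! #CandidateX")
print(len(controller.bots), "accounts posted in unison")
```

The point of the simulation is the asymmetry: one person, one loop, two hundred apparent "voices."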

Bot creation is now simpler than ever. ChatFuel even allows bots to be built without coding: conversational rules are defined on a dashboard, and the bot recognizes phrases from users and replies with predefined answers using Natural Language Processing (NLP). Jerry Wang explains how to develop bots for Facebook Messenger with Heroku and Node.

[Screenshot: ChatFuel dashboard, https://chatfuel.com/]
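Dashboard rules of this kind boil down to pattern-and-reply pairs. Here is a minimal sketch of that keyword-matching approach; the rule table is invented for illustration and is far cruder than what a real NLP pipeline does:

```python
import string

# Each rule maps trigger keywords to a canned reply (hypothetical examples).
RULES = {
    ("price", "cost", "stock"): "Quotes are delayed 15 minutes. Which ticker?",
    ("hello", "hi", "hey"):     "Hi there! How can I help you today?",
    ("bye", "goodbye"):         "Thanks for chatting -- goodbye!",
}
DEFAULT_REPLY = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message):
    """Return the canned answer for the first rule whose keyword appears."""
    words = {w.strip(string.punctuation) for w in message.lower().split()}
    for keywords, answer in RULES.items():
        if any(k in words for k in keywords):
            return answer
    return DEFAULT_REPLY

print(reply("Hello bot!"))
```

Swap the canned replies for political talking points and point the loop at a chat group, and the same trivial mechanism becomes the political bot described below.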

When Chatbots Go Bad

Chatbots can be used for both legitimate and malicious purposes. Maliciously, they’re used to suppress voices and promote hate speech, and increasingly for political purposes.

To simulate a big following, it helps to have many "virtual" users. But recruiting real followers and developing bots takes time. Online marketplaces offer both for a fee, operating from different IP addresses and further obfuscated with proxy servers to mask their true identities. These chatbots are a form of "sockpuppet," an online identity created for deception. Misleading online identities are used to praise, defend or attack a person or position to manipulate public opinion.

How Do You Spot a Bot?

No matter how well they masquerade, botnets can often reveal themselves through common traits.

  1. Response time: Bots are programmed to respond automatically, so when replies appear within a fraction of a second of the original tweet, it's a good bet they came from a bot.
  2. Volume: Political bots lurk in certain chat groups, waiting for particular topics or keywords to be mentioned, then unleash dozens of opposing, vicious posts and tweets at once. Some bot accounts work hard: one was reported to have sent 400–500 tweets around the clock, six days a week. Even puppet masters, it seems, take a day off.
  3. Novel words: Real human conversations use varied words and phrases. Bots, however, repeat the same "novel words" to emphasize one viewpoint and drown out others, suggesting the sentences were written by a single author or a group working from a shared messaging playbook. "Instead of many thousands of unique, individual voices, it was as if one voice became dominant," explains Jonathon Morgan, co-founder & CEO of NewKnowledge.io.
  4. Unusual names: Online merchants generate thousands of fake accounts for sale, so they're often not picky about the names. A would-be user named "@stanbieberfan" might be worth scrutinizing. "Bot or Not" is a free online service that estimates whether you might be talking to a bot.
  5. Number of followers: Bots tend to not have many followers themselves.
  6. Devices used: Most response tweets originate from iPhone, Android and Windows devices. Many bot responses seem to originate from a Windows phone.
  7. Bots are anti-social: Bots rarely retweet or mention Twitter users outside their own network.
  8. Mob behavior: Political botnets are usually controlled by a single person (the puppet master), so the volume of bot posts on an issue typically rises and stops in unison.
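The traits above can be combined into a rough bot-likelihood score. This sketch assumes hypothetical per-account features (reply latency, daily volume, and so on); real detectors such as "Bot or Not" use far richer signals and trained models:

```python
def bot_score(account):
    """Crude heuristic score in [0, 1]; higher means more bot-like.
    `account` is a dict of hypothetical observed features."""
    signals = [
        account["avg_reply_seconds"] < 1,      # near-instant responses
        account["tweets_per_day"] > 400,       # inhuman posting volume
        account["distinct_phrases"] < 10,      # same talking points repeated
        account["followers"] < 10,             # almost nobody follows back
        account["source"] == "Windows phone",  # unusual posting device
        not account["ever_mentions_others"],   # no genuine social behavior
    ]
    return sum(signals) / len(signals)

suspect = {
    "avg_reply_seconds": 0.4, "tweets_per_day": 450,
    "distinct_phrases": 3, "followers": 2,
    "source": "Windows phone", "ever_mentions_others": False,
}
print(f"bot score: {bot_score(suspect):.2f}")
```

No single signal is conclusive; a night-shift journalist also tweets at odd hours. It's the pile-up of traits that gives the game away.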

Political Games

How can an army of political bots influence political campaigns? A research paper on the "Star Wars Botnet" lists some chilling examples.

  1. Make someone more prominent: Bots become "fake" followers of a target account, raising its profile and attracting more real (human) followers.
  2. Fake trending topics: Bots can inflate how many followers and mentions a topic appears to have. Most social media platforms do not distinguish between human and bot activity, so topics the puppet master selects begin to "trend."
  3. Manipulate public opinion: Botnets can be programmed to make positive or negative posts in a coordinated manner. This information distorts the input used by researchers and pollsters to report on public sentiment.
  4. Astroturfing: An army of bots can be programmed to "agree" amongst themselves on a topic. Because the bots have different names and locations, it appears that an actual human community has reached consensus on its own.

Fighting the Good Fight

Is an army of bots seeding the web with negative information about The Enemy? The threat from political bots and automated speech is real. Today it's politics, but tomorrow bots could just as easily be used to slander a competitor's commercial product. What can we do?

  1. Understand the technology and trends in this field. Political Bots is perhaps the best resource for staying current on new developments; it reports on bot algorithms, computational propaganda, and digital politics.
  2. Support efforts to combat automated speech. Social media platforms have the technology to label automated speech generated through APIs as “Bot Generated”. They should use it!
  3. Donate to groups creating bots for transparency and civic activism. Some of the best are: ResistBot (turns your text messages to "50409" into daily letters to Congress), @StayWokeBot (helps answer tweets related to Black Lives Matter and other causes), Congress Edits (tweets about anonymous Wikipedia edits made from IP addresses in the US Congress), and Call With Jefferson (a friendly bot that uses your location to identify your three congressional representatives, then gives you their contact information and a script designed to give the switchboard operator the information they need to connect you in as few words as possible).
  4. Stay abreast by attending conferences such as the upcoming SuperBot Conference to learn more about bots and meet the experts.

“There is only one way… to get anybody to do anything. And that is by making the other person want to do it,” wrote Dale Carnegie. Bots can’t vote (yet), but botnet masters may achieve the same outcome if their bots shape public opinion and sway how people vote.


Editor’s note: my father is a huge fan of Dale Carnegie, and he had me read “How to Win Friends and Influence People” young. It sounds cheesy, but the book’s really just a series of practical rules of thumb for social interactions. Once in southern Armenia, I had a long conversation with a local guy in his 20s who said the book had changed his life. True fact!

Top image via Pixabay

Written by
Deepak Puri