Can’t spot the bot? In California, they have to reveal themselves
California Governor Jerry Brown drew plenty of headlines on Sunday when he signed the state’s net neutrality bill into law, a move that was immediately met with a lawsuit from the Department of Justice.
On Friday, he signed another bill that drew less attention — a new law that bans automated accounts, more commonly known as bots, from pretending to be real people in pursuit of selling products or influencing elections. Automated accounts can still interact with Californians, according to the law, but they will need to disclose that they are bots.
The law comes as concerns about social media manipulation remain elevated. With just more than a month to go before the 2018 U.S. midterm elections, social media companies have pledged to crack down on foreign interference.
A big part of that effort has been targeting bots that spread misinformation and divisive political rhetoric. Twitter said in May that it was taking down 9.9 million “potentially spammy or automated accounts per week,” and it has placed warnings on suspicious accounts. Twitter CEO Jack Dorsey has even publicly floated the idea that the company may try to identify bots and label them as such.
Bots are also not limited to social media. Google caught the attention of the tech industry in May when it rolled out Google Duplex, a new voice assistant that could talk over the phone with humans to schedule appointments or make restaurant reservations — complete with “ums,” “ahs” and pauses just like a human.
The demonstration sparked a discussion around the ethical issues of having people unknowingly interact with bots.
While “bots” at this point might be considered a pejorative term, automated programs are essential to the workings of the internet. Many operate behind the scenes, cataloguing websites for search engines, while others help people take a break, track earthquakes or simply see how far the world is through 2018.
Tony Veale, associate professor of computer science at University College Dublin, said the nuanced world of bots and their potential may not be getting the attention it deserves.
Veale is the co-author of “Twitterbots: Making Machines that Make Meaning,” a book released in September that explores the world of bots — and even offers a guide on how to make them.
“There are many, many more innocent people on Twitter and Facebook than there are malicious people,” Veale said. “A book like this is inevitably going to help many more good people.”
The book explores how machines can be used as an outlet for creativity, theorizing that it’s worth giving people yet another tool to experiment with.
Veale’s co-author, Mike Cook, a senior research fellow at Falmouth University in the U.K., said giving people a better idea of how bots are made can help the public guard against malicious ones.
“I think there’s an argument that mastering this technology helps us defend ourselves a little bit against it,” Cook said.
That notion was fully embraced by Robert Hertzberg, a state senator from California who pushed for the new law forcing bots to disclose their lack of humanity.
Hertzberg told The New York Times he was the subject of a bot attack over a bail reform bill. So he decided to fight bots with bots — launching @Bot_Hertzberg in January.
And, per California law, the account discloses its automated nature.
“*I AM A BOT.*” states the account’s Twitter profile. “Automated accounts like mine are made to misinform & exploit users. But unlike most bots, I’m transparent about being a bot.”