Did Facebook Shut Down A “Creepy AI” For Inventing Its Own Language?

  • Facebook engineers panic, pull plug on AI after bots develop their own language
  • Creepy Facebook bots talked to each other in a secret language
  • Facebook kills AI that invented its own language because English was slow
  • Stopping a Skynet scenario before it begins

These were some of the eye-catching, eyebrow-raising headlines that went viral in recent days, accompanied by eerie graphics of ominous-looking robots and one too many references to The Terminator franchise.

The articles covered a story about Facebook shutting down an artificial intelligence (AI) project designed to interact and negotiate with humans, after the bots allegedly created their own “language” to subvert human supervision and communicate with each other.

It was a Skynet-esque display of all our fears about AI becoming smarter than human beings and taking over the world.

Covered by the Mirror, the Sun, the Independent, the Telegraph and many other online publications, the story was, however, both exaggerated and a non-issue. It was another instance of clickbait corrupting the truth.


What exactly happened?

In June this year, Facebook published a blog post about research into chatbots that negotiate with each other, or with humans, over the ownership of virtual items. The aim was to understand the role language plays in how such discussions unfold for the negotiating parties.

In this experiment, the bots were intentionally programmed to experiment with language, allowing the researchers to analyse how linguistic choices affected their negotiating ability. The researchers knew that negotiation and cooperation would be necessary for bots to work more closely with humans in the future.

To do this, the computers were fed dialogue from thousands of negotiation games between humans so that the AI could learn the language of negotiation. After this, the bots were allowed to use trial and error (also called reinforcement learning) to sharpen their negotiation skills. The agents had to divide a collection of objects like hats, balls and books between themselves through negotiation.
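To make the setup concrete, here is a minimal sketch, in Python, of the kind of task the bots faced: a shared pool of items, private per-item values for each agent, and a trial-and-error reward equal to the value of the items an agent secures. This is an illustration of the idea only, not Facebook's actual code; every name, value range and outcome in it is an assumption.

import random

# Toy sketch of the negotiation task described above. NOT Facebook's
# code; item names, value ranges and the outcome are illustrative.
ITEMS = ["hat", "ball", "book"]

def random_scenario():
    # A shared pool of items to divide, plus private per-item values
    # for each of the two agents.
    pool = {item: random.randint(1, 3) for item in ITEMS}
    values = [{item: random.randint(0, 5) for item in ITEMS} for _ in range(2)]
    return pool, values

def reward(allocation, agent_values):
    # The reinforcement-learning signal: the total private value of
    # the items an agent walks away with.
    return sum(agent_values[item] * count for item, count in allocation.items())

pool, values = random_scenario()
# One hypothetical negotiated outcome: agent 0 takes all the hats,
# agent 1 takes everything else.
agent0_share = {"hat": pool["hat"], "ball": 0, "book": 0}
agent1_share = {item: pool[item] - agent0_share[item] for item in ITEMS}
print("agent 0 reward:", reward(agent0_share, values[0]))
print("agent 1 reward:", reward(agent1_share, values[1]))

Because each agent only sees its own reward climb, nothing in this objective cares whether the messages exchanged along the way remain readable English.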

It was during this phase of reinforcement learning that the bots used language not recognisable by their human masters in order to negotiate with each other. Or, as the researchers put it, “We found that updating the parameters of both agents led to divergence from human language.”

The language used was visibly English in vocabulary but largely indecipherable to the researchers. Here is how the communication between the bots went:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to


Not unprecedented, not shocking, not unheard of

This divergence was predicted, programmed for and mundane. The researchers were neither taken aback nor faced with a potential apocalypse.

In fact, AIs inventing a language to better complete a task is not unheard of. Google’s translation software did the same during development. As Google wrote on its blog last year, “The network must be encoding something about the semantics of the sentence.”

And earlier this year, Wired reported on a researcher at OpenAI who is working on a system in which AIs invent their own language, improving their ability to process information quickly and therefore tackle difficult problems more effectively.

Facebook did shut down the project, but not because it had accidentally created a cyborg that could control minds and conquer countries. Facebook shut down the bots because they did not meet the team’s expectations. As researcher Mike Lewis told FastCo, “our interest was having bots who could talk to people”, not bots that talk efficiently to each other, so the researchers opted to require them to write to each other legibly. Which means that Facebook did not shut down the AI because it was too smart; on the contrary, Facebook shut down the AI because it was too dumb.

As Gizmodo wrote, “In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand – but while it might look creepy, that’s all it was.”

And as Dhruv Batra, one of the programmers involved in the project, wrote on his Facebook timeline: “… agents in environments attempting to solve a task will often find unintuitive ways to maximize reward. Analyzing the reward function and changing the parameters of an experiment is NOT the same as “unplugging” or “shutting down AI”. If that were the case, every AI researcher has been “shutting down AI” every time they kill a job on a machine.”


“I have just returned from CVPR to find my FB/Twitter feed blown up with articles describing apocalyptic doomsday…”
(Posted by Dhruv Batra on Facebook, Monday, July 31, 2017)
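
For readers wondering what “changing the parameters of an experiment” can look like in practice, here is a hedged sketch of one standard remedy for language drift. The article only says Facebook required the bots to write legibly; the exact mechanism below, blending the negotiation reward with a term that favours utterances a model trained on human dialogue finds likely, is an assumption for illustration, as are the function names and the weighting term alpha.

# Illustrative sketch, not Facebook's implementation: one common way to
# stop reward-driven agents drifting into private shorthand is to blend
# the task reward with a "stay close to human English" term, e.g. the
# log-probability a supervised human-dialogue model assigns an utterance.
def combined_objective(task_reward, human_lm_log_prob, alpha=0.5):
    # alpha = 0 optimises the deal alone (language is free to diverge);
    # a larger alpha keeps the bots' messages legible to humans.
    return task_reward + alpha * human_lm_log_prob

# Toy numbers: a slightly better deal reached in gibberish scores worse
# overall than a decent deal expressed in plain, likely English.
print(combined_objective(task_reward=10.0, human_lm_log_prob=-40.0))  # -10.0
print(combined_objective(task_reward=8.0, human_lm_log_prob=-5.0))    # 5.5

In other words, adjusting one number in the objective is all it takes to pull the bots back toward human language, which is rather less dramatic than “unplugging” anything.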


The Logical Indian take

In the past week, the difference of opinion between Elon Musk and Mark Zuckerberg over the future of AI has renewed interest in the issue. There is no question that many of the misleading articles attempted to exploit this mainstream discussion on AI to score social media traction. To do so, they misread a research paper, misrepresented a research project and misled millions of readers.

The response to the Facebook AI story displays all the basic features and consequences of clickbait and misleading headlines. Claiming that Facebook shut down its AI because the bots created their own language to bypass humans and build their own intelligence feeds the paranoia and fear about AI and machine learning that already exist in our society.

Words matter. When mainstream media outlets and popular online news portals misrepresent a mundane story about AI tweaking words for efficiency as the rise of the machines and the coming of a Skynet-esque robot overlord, they insult both the researchers and the public.

The debate over AI is an increasingly crucial one. We need to have it with proper facts and without yellow journalism and clickbait-induced hysteria. Because, for the moment at least, people preparing for a Terminator-styled global showdown will be terribly let down.
