“Facebook engineers panic, pull plug on AI after bots develop their own language”
“Creepy Facebook bots talked to each other in a secret language”
“Facebook kills AI that invented its own language because English was slow”
“Stopping a Skynet scenario before it begins”
These were some of the eye-catching, eyebrow-raising headlines that went viral in recent days. Accompanied by eerie graphics of ominous-looking robots and too many references to The Terminator franchise, the articles covered a story about Facebook shutting down an artificial intelligence (AI) project designed to interact and negotiate with humans, after the AI agents allegedly created their own “language” to subvert human supervision and communicate with their AI counterparts.
It was a Skynet-esque display of all our fears about AI becoming smarter than human beings and taking over the world.
Covered by the Mirror, the Sun, the Independent, the Telegraph and many other online publications, the story was, however, both exaggerated and a non-issue: another instance of clickbait corrupting the truth.
What exactly happened?
In June this year, Facebook published a blog post about research into chatbots that negotiate with each other, or with humans, over the ownership of virtual items. It was an effort to understand the role language plays in how such discussions unfold between negotiating parties.
In this experiment, the bots were deliberately programmed to experiment with language so the researchers could analyse how linguistic choices affected their negotiating skills. The researchers knew that negotiation and cooperation would be necessary for bots to work more closely with humans in the future.
To do this, the bots were first fed dialogue from thousands of negotiation games played between humans, so that the AI could learn the language of negotiation. After this, the bots were allowed to improve through trial and error (a technique known as reinforcement learning). The AI agents had to divide a collection of objects like hats, balls and books between themselves through negotiation.
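To make the setup concrete, here is a minimal sketch in Python of the scoring logic behind such a task. It is purely illustrative: the item counts, value ranges and function names are assumptions, not Facebook’s actual code, and the real system used neural dialogue models rather than this toy arithmetic. Each agent privately values the items differently, and its reward is the total value of whatever it negotiates for itself.

import random

# A pool of objects to divide, as in the Facebook experiment.
ITEMS = {"hats": 3, "balls": 2, "books": 1}

def random_values():
    """Each agent privately values the item types differently."""
    return {item: random.randint(0, 5) for item in ITEMS}

def score(values, allocation):
    """An agent's reward is the total private value of the items it wins."""
    return sum(values[item] * count for item, count in allocation.items())

# A finished deal is simply a pair of allocations that uses up every item.
alice_values, bob_values = random_values(), random_values()
alice_gets = {"hats": 2, "balls": 0, "books": 1}
bob_gets = {item: ITEMS[item] - alice_gets[item] for item in ITEMS}

print("Alice's reward:", score(alice_values, alice_gets))
print("Bob's reward:", score(bob_values, bob_gets))

Because the two agents value the items differently, a skilled negotiator can concede the objects it cares little about in exchange for the ones it values most, which is exactly the behaviour the researchers wanted to study.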
It was during this phase of reinforcement learning that the bots began using language that was not recognisable to their human masters in order to negotiate with each other. Or, as the researchers put it, “We found that updating the parameters of both agents led to divergence from human language.”
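That quoted sentence points at a specific training choice, which the following schematic tries to make visible. Again, this is a hypothetical sketch with stubbed stand-ins, not Facebook’s actual code:

def negotiate(agent_a, agent_b):
    # One self-play negotiation between two bots; returns each agent's
    # reward. Stubbed out here for illustration.
    return 1.0, 1.0

def reinforce(agent, reward):
    # A reinforcement-learning update nudging the agent towards whatever
    # earned it reward. Stubbed out here for illustration.
    agent["updates"] += 1

alice = {"updates": 0}  # both bots start from models trained on human dialogue
bob = {"updates": 0}

for episode in range(10_000):
    reward_a, reward_b = negotiate(alice, bob)
    reinforce(alice, reward_a)
    # Updating BOTH agents means any mutually understood shorthand gets
    # rewarded, so the shared dialect can drift away from English.
    reinforce(bob, reward_b)

Freezing one of the two agents, that is, skipping its update, keeps the conversation anchored to the human-language model it started from, which is broadly how the researchers kept later experiments legible.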
The language used was recognisably English in its vocabulary but largely indecipherable to the researchers. Here is how the communication between the bots went:
Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Not unprecedented, not shocking, not unheard of
This divergence was predictable, a product of how the bots were trained, and ultimately mundane. The researchers were neither taken aback nor faced with the potential for apocalypse.
In fact, AIs inventing a language to better complete a task is not unheard of. Google’s translation software did something similar during development. As Google wrote on its blog last year, “The network must be encoding something about the semantics of the sentence.”
And earlier this year, Wired reported on a researcher at OpenAI who is working on a system in which AIs invent their own language, improving their ability to process information quickly and therefore tackle difficult problems more effectively.
Facebook did shut down the project, but not because it had accidentally created a cyborg that could control minds and conquer countries. Facebook shut down the AI because it did not meet the team’s expectations. As researcher Mike Lewis told FastCo, they had simply decided that “our interest was having bots who could talk to people,” not to each other efficiently, and so opted to require the bots to write to each other legibly. In other words, Facebook did not shut down the AI because it was too smart; on the contrary, it shut the AI down because it was too dumb.
As Gizmodo wrote, “In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand – but while it might look creepy, that’s all it was.”
And as Dhruv Batra, one of the programmers involved in the project, wrote on his Facebook timeline: “.. …