Facebook has been conducting research on Artificial Intelligence (AI), creating bots: software that executes certain commands with minimal human intervention. In the last few weeks, some newspaper headlines and television scrolls suddenly began claiming that the social media platform was forced to shut down one of its AI systems because “things got out of hand.”
Playing on popular fears of a takeover by AI, the reports claimed that the shutdown happened after researchers found some chatbots (programs that hold short, text-based conversations with humans or other bots) developing their own language, one that humans could not understand.
However, AI researchers have dismissed such reports as “clickbaity and irresponsible.”
Dhruv Batra, Visiting Researcher at Facebook AI Research, said in a Facebook post, “I have just returned from CVPR (the Conference on Computer Vision and Pattern Recognition, held in Hawaii, July 21-26, 2017) to find my FB/Twitter feed blown up with articles describing apocalyptic doomsday scenarios, with Facebook researchers unplugging AI agents that invented their own language.
“I do not want to link to specific articles or provide specific responses for fear of continuing this cycle of quotes taken out of context, but I find such coverage clickbaity and irresponsible,” he added.
Batra goes on to explain that while the idea of AI agents inventing their own language may sound alarming or unexpected to people outside the field, it is a well-established sub-field of AI, with publications dating back decades.
“Simply put, agents in environments attempting to solve a task will often find unintuitive ways to maximize reward. Analyzing the reward function and changing the parameters of an experiment is NOT the same as ‘unplugging’ or ‘shutting down AI’. If that were the case, every AI researcher has been ‘shutting down AI’ every time they kill a job on a machine,” he explained.
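Batra’s point can be illustrated in the simplest possible terms with a toy sketch. The following is entirely hypothetical code, not FAIR’s: a trial-and-error learner drifts toward an unintuitive “shorthand” strategy because the reward favours it, and returns to plain English once a single reward parameter is changed, with nothing “unplugged.”

```python
import random

# Toy illustration only (not FAIR's code): a trial-and-error learner
# picks among hypothetical "strategies"; maximizing reward can favour
# an unintuitive one, and the fix is a reward-parameter change, not
# "unplugging" anything.

def reward(strategy, english_penalty=0.0):
    # Made-up numbers: "shorthand" scores best on the task but drifts
    # furthest from readable English.
    task_score = {"plain_english": 0.6, "terse": 0.7, "shorthand": 0.9}
    drift = {"plain_english": 0.0, "terse": 0.3, "shorthand": 1.0}
    return task_score[strategy] - english_penalty * drift[strategy]

def train(english_penalty, episodes=5000, eps=0.1):
    strategies = ["plain_english", "terse", "shorthand"]
    value = {s: 0.0 for s in strategies}   # running reward estimates
    count = {s: 0 for s in strategies}
    for _ in range(episodes):
        # epsilon-greedy trial and error
        s = random.choice(strategies) if random.random() < eps \
            else max(value, key=value.get)
        count[s] += 1
        value[s] += (reward(s, english_penalty) - value[s]) / count[s]
    return max(value, key=value.get)

print(train(english_penalty=0.0))   # "shorthand": reward favours drift
print(train(english_penalty=0.5))   # "plain_english" after the tweak
```

Changing the penalty term is exactly the kind of routine parameter adjustment Batra describes; the experiment is rerun, not switched off.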
Batra also asked everyone to read the actual research paper or the overview blog post from FAIR (Facebook AI Research Lab): https://code.facebook.com/…/deal-or-no-deal-training-ai-bo…/
The entire controversy erupted after Facebook published an academic paper in June about an experiment in which researchers got two artificial agents to negotiate with each other in chat messages after being shown conversations of humans negotiating. The agents then gradually improved through trial and error.
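The shape of that experiment can be sketched in a few lines of toy code. What follows is a loose illustration under invented assumptions (three items, made-up values, and crude hill climbing in place of the paper’s neural models and reinforcement learning), not the paper’s actual method: an agent starts from “human-like” behaviour and improves its negotiating position by trial and error.

```python
import random

# Loose sketch under invented assumptions; not the paper's method.
ITEMS = ["book", "hat", "ball"]
VALUES_A = {"book": 1, "hat": 2, "ball": 3}   # hypothetical item values

def negotiate(claim_probs_a, claim_probs_b):
    # One toy round: each side demands items; any clash means no deal.
    claim_a = {i: random.random() < claim_probs_a[i] for i in ITEMS}
    claim_b = {i: random.random() < claim_probs_b[i] for i in ITEMS}
    if any(claim_a[i] and claim_b[i] for i in ITEMS):
        return 0.0                             # conflicting demands
    return sum(VALUES_A[i] for i in ITEMS if claim_a[i])

def avg_reward(probs_a, probs_b, rollouts=500):
    return sum(negotiate(probs_a, probs_b) for _ in range(rollouts)) / rollouts

# Stage 1 stand-in: "human-like" starting behaviour (in the real system
# this comes from supervised training on the human dialogues).
probs_a = {i: 0.5 for i in ITEMS}
probs_b = {i: 0.5 for i in ITEMS}              # fixed partner

# Stage 2: trial and error; keep random nudges that raise the reward
# (a crude stand-in for the paper's reinforcement learning).
for _ in range(500):
    trial = {i: min(1.0, max(0.0, probs_a[i] + random.uniform(-0.05, 0.05)))
             for i in ITEMS}
    if avg_reward(trial, probs_b) > avg_reward(probs_a, probs_b):
        probs_a = trial

print(probs_a)  # A tends to press hardest for the item it values most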
While the chatbots’ conversation did deviate from correct English, the purpose of the research was to make the agents negotiate effectively. The researchers, Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh and Dhruv Batra of Facebook’s Artificial Intelligence Research group, noted that the chatbots even figured out how to pretend to be interested in something they didn’t actually want, only to later ‘compromise’ by conceding it.
Researchers at Alphabet and at the Elon Musk-backed OpenAI have also dismissed fears about AI. Researchers at both organisations have seen bots rework language in order to accomplish a task. Google, in a blog post, said its translation software had behaved in a similar fashion during development. “The network must be encoding something about the semantics of the sentence,” it said.
Similarly, Wired quoted a researcher at OpenAI working on a system in which AIs invent their own language, improving their ability to process information quickly and thus tackle difficult problems more effectively.
A verbal scrap over the potential dangers of AI between Facebook chief executive Mark Zuckerberg and technology entrepreneur Elon Musk had added to the controversy. Musk had tweeted that Zuckerberg had only “a limited understanding of AI”. Several other experts, including Professor Stephen Hawking, have raised fears that humans, who are limited by slow biological evolution, could be superseded by AI.