
Facebook shuts down AI system after it invents own language

One British tabloid quoted a robotics professor saying the incident showed “the dangers of deferring to artificial intelligence” and that similar technology “could be lethal” if built into military robots.

The researchers had designed the bots to switch to more efficient call-and-response exchanges, but what emerged was gibberish to humans; the two AIs themselves had no trouble understanding each other.

The reality is somewhat more prosaic. Once the bots were assigned a task, they began using this language to communicate with each other and complete that specific task.

At OpenAI, the artificial intelligence lab co-founded by Elon Musk, an experiment has also succeeded in letting AI bots learn their own languages.

We aren’t there yet, of course, but Facebook’s bots conversing with one another in a language of their own making – one that humans cannot understand – certainly looks like an unsettling sign. “After learning to negotiate, the bots relied on machine learning and advanced strategies in an attempt to improve the outcome of these negotiations.”

Dhruv Batra, a researcher at Facebook AI Research (FAIR), said that humans sometimes use “shortcuts” that both parties easily understand in order to get things done quickly: “If I say ‘the’ five times, you interpret that to mean I want five copies of this item.” If AI agents are allowed to speak in a language they created themselves, they might no longer need human intervention at all.

Facebook did indeed shut down the conversation, but not because the researchers were panicked that they had untethered a potential Skynet. Mike Lewis, a researcher on the FAIR team, confirmed that Bob and Alice had been disabled, but said the team was not afraid the bots were plotting a coup to take over the world; the program was shut down so they could fix the glitch and make sure the bots could communicate with people.

As the researchers explained in their June post, they could decode the new language with fairly little trouble, since it was still English-based, but they could never be certain their translations were 100 percent correct. It is worth noting that once the bots’ shorthand is explained, the resulting conversation is both understandable and not nearly as creepy as it first seemed. Digital Journal reports that the system was trained in English but decided this was an inefficient and illogical way of communicating.
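To make the “shortcut” Batra describes concrete, here is a minimal, purely illustrative sketch (in Python, and not Facebook’s actual negotiation code) of how repeating a word could be read as a quantity – saying an item’s name five times is interpreted as asking for five of that item:

```python
# Toy illustration of the repetition shorthand described above:
# repeating an item's name N times is read as "I want N of that item".
# This is a hypothetical sketch, not the FAIR bots' real protocol.
from collections import Counter

def decode_shorthand(utterance: str) -> dict:
    """Count how many times each word is repeated and treat the count
    as the requested quantity of that item."""
    return dict(Counter(utterance.split()))

# "ball ball ball hat" would be read as: 3 balls, 1 hat
print(decode_shorthand("ball ball ball hat"))  # {'ball': 3, 'hat': 1}
```

Read this way, an exchange that looks like babble to a human observer can still carry an unambiguous meaning for the two bots.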