Ken Ecott

Facebook's new AI was allowed to negotiate and soon adopted a very human tactic: lying


Facebook AI Learns to Lie

Facebook’s Artificial Intelligence Research division (FAIR) released a fascinating new study aimed at giving A.I. chatbots some of the basic social skills that make human beings as versatile as we are. The goal was to teach the chatbots how to negotiate: to get what they want in situations where their own success requires cooperation from someone else.

The aim of this is to help A.I. get around the sorts of unforeseen barriers that crop up in everyday human life, and it could make consumer A.I. products far more useful. What the researchers found was that when even simple A.I. actors were given the means and the motivation to negotiate the terms of their interactions, they quickly invented some very (and perhaps disturbingly) human strategies.

“You can imagine a world where this accelerates and eases interactions,” FAIR research scientist Mike Lewis told Inverse via email. “For instance, future scenarios where people might use chatbots for retail customer support or scheduling, where bots can engage in seamless conversations and back-and-forth, human-like negotiations with other bots or people on our behalf.”

The Facebook team presented their A.I. agents with a common pool of different objects (balls, hats, and books, for instance) and gave each agent its own understanding of which items were worth more or less. The agents communicate purely through text chat and, as in life, these simple statements aren’t enough to prove how the other actor values the different items.
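To give a rough sense of what that setup looks like in code, here is a minimal, hypothetical Python sketch of the scenario: a random pool of books, hats, and balls, plus a private valuation for one agent. The item names, numbers, and function names are illustrative assumptions, not FAIR's released code.

import random

# A minimal, hypothetical sketch of the setup described above: a shared pool
# of items, with each agent assigning its own hidden value to every item type.
# Item names and numbers are illustrative; this is not FAIR's actual code.

ITEM_TYPES = ["book", "hat", "ball"]

def make_scenario(total_value=10):
    """Create a random item pool and one agent's private valuation.

    Each agent's values are drawn independently, so neither side knows what
    the items are worth to the other; that uncertainty is what makes
    negotiation (and bluffing) possible.
    """
    pool = {item: random.randint(1, 4) for item in ITEM_TYPES}
    # Random per-item values, roughly scaled so the whole pool is worth total_value.
    raw = {item: random.random() for item in ITEM_TYPES}
    scale = total_value / sum(raw[i] * pool[i] for i in ITEM_TYPES)
    values = {item: raw[item] * scale for item in ITEM_TYPES}
    return pool, values

def score(allocation, values):
    """An agent's reward: the summed private value of the items it receives."""
    return sum(values[item] * count for item, count in allocation.items())

if __name__ == "__main__":
    pool, my_values = make_scenario()
    print("Items on the table:", pool)
    print("My private values:", {k: round(v, 2) for k, v in my_values.items()})
    # Example split: I take all the hats and nothing else.
    deal = {"book": 0, "hat": pool["hat"], "ball": 0}
    print("My reward for that split:", round(score(deal, my_values), 2))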

This means that even when the two sides’ interests complement each other perfectly, say one wants only books and the other doesn’t want any books at all, there can still be haggling over the books, because each side is playing the odds about the other’s true desires. The overall task amounts to a sort of feint-counter-feint charade that requires long-term planning.

The system can be set up so that two A.I. agents negotiate with each other, or so that one of them negotiates with a human being, which is possible only because the chatbots communicate solely in written English. Since each English statement represents a move in the game of negotiation, Facebook created a concept called a “dialog rollout.” This is the graph the computer builds of the possible negotiating paths forward, and the A.I. simply chooses the best from among these simulated conversational options.
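A toy Python sketch of that idea might look like the following, assuming hypothetical helper functions for proposing candidate messages, simulating a conversation to its end, and scoring the resulting deal; none of these names come from Facebook's system.

# A toy illustration of the "dialog rollout" idea: for each candidate message
# the agent could send next, simulate several complete continuations of the
# conversation and keep the message with the best average final deal. The
# helpers passed in (candidate_messages, simulate_to_end, final_reward) are
# placeholders standing in for learned models, not Facebook's API.

def choose_message(state, candidate_messages, simulate_to_end, final_reward,
                   n_rollouts=10):
    """Pick the next utterance by looking ahead, chess-engine style."""
    best_msg, best_value = None, float("-inf")
    for msg in candidate_messages(state):
        total = 0.0
        for _ in range(n_rollouts):
            # Play the rest of the negotiation forward, then score the deal
            # the simulated conversation ends with.
            outcome = simulate_to_end(state, first_message=msg)
            total += final_reward(outcome)
        average = total / n_rollouts
        if average > best_value:
            best_msg, best_value = msg, average
    return best_msg

Averaging over several simulated continuations is one simple way to estimate how well a given message is likely to turn out before committing to it, which is the lookahead behavior the article describes.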

It’s not unlike how Deep Blue played chess: by looking at the possible games that could arise from the current arrangement of pieces and always making the move that should lead to the best possible endgame. Facebook’s A.I. does the same thing, but for the verbal trading of hats and balls rather than the movement of chess pieces on a board.

The A.I. came up with one truly fascinating strategy all on its own: pretending to care about an item that it actually values little or not at all, just so it can later give that item up and seem to have made a compromise. This novel A.I. invention is, of course, a common human tactic as well, arrived at for precisely the same reasons.

“The models weren’t trained to use any specific negotiating strategy,” writes Lewis. “What we did was to give them the ability to learn from experience and plan ahead, which enabled them to invent their own strategies to achieve their goals.”

These sorts of robust negotiation skills could let A.I. start to find solutions to conflicts without compromising on its own core goals. By teaching this first in the medium of written human language, Facebook is ensuring that these abilities can roll out directly to social media users.

These are the sorts of abilities that might let users trust a software service to intelligently schedule a meeting for them, taking the other participant’s scheduling needs into account and finding the best compromise. You could have the A.I. transparently share those calendars with one another and find a workable slot, but this cooperatively agreed-upon time might not be as good as the one you could have gotten with a more ruthless A.I. negotiating for the time that best fits your schedule.

Modern A.I. research is progressing to the point that we now legitimately have to ask: Do we want our future A.I. to be good citizens of the world or ruthless actors on our behalf?
