I've been researching AI for an upcoming keynote address I'm delivering. I've now read several articles about Libratus, an AI system developed at Carnegie Mellon. In January, Libratus beat four top poker players over a twenty-day contest.
Once it figured out the patterns in the play of its human competitors, it went on a winning streak to the tune of $1.7 million in (theoretical) winnings. The article contains a very interesting quote from Noam Brown, one of Libratus' co-creators.
“When I see the bot bluff the humans, I’m like, ‘I didn’t tell it to do that. I had no idea it was even capable of doing that.’ It’s satisfying to know I created something that can do that.”
I get why he's satisfied. That's quite an accomplishment, but there's something else here. Bluffing isn't lying. Lying would be claiming you have a pair of kings when you actually have a pair of twos. Bluffing is deception. Brown apparently didn't program Libratus to deceive. He programmed it to learn the game on its own.
On its own, it determined that winning required deceiving its opponents. And then, also on its own, it got really good at figuring out when they were trying to deceive it.
Very cool, but what else is AI going to do that its creators didn't anticipate?
You can read more here.