Tuesday, June 10, 2014

Imitation Of Life

You may have seen news this week that a program passed the Turing Test for the first time. If you're not familiar with it: computing pioneer Alan Turing proposed this test as a way to determine whether a computer program could be considered "intelligent". He suggested that if a person typing a conversation with the program could be convinced they were actually talking to a person typing responses in another room, then the program could be considered intelligent. This became known as the Turing Test. It should be noted that the researchers weren't trying to create a program that all people find indistinguishable from a person; they were trying to fulfil Turing's prediction that within fifty years a program would succeed 30% of the time. This program succeeded 33% of the time, though it's fourteen years late for Turing's deadline.

Anyway, I was sceptical, because I've seen many claims over the years of programs that could come close to passing the test (say, by getting nearly 30%, by passing based on very short conversations, or by limiting conversations to a predetermined subject). In each case, I'd see transcripts of the conversations, or even online demonstrations, only to find that the conversation was unconvincing gibberish, little better than that of early Artificial Intelligence attempts like ELIZA.

Sure enough, it turns out that this program cheats a little. It isn't pretending to be any old human being; it's trying to convince you that it is a 13-year-old Ukrainian boy. That gives it the advantage of imitating an immature person using a second language. The researchers aren't releasing transcripts of the conversations, but judging from the screen capture in the BBC report, it leans heavily on that advantage. People must have a very low opinion of either 13-year-old boys or Ukrainians if they found that convincing.

And that brings up a question I've always had about the Turing Test: should the judges be A.I. experts? Offhand, it would seem unnecessary: surely any ordinary person can judge whether a conversant is human or not. But it's really looking like publicity-hungry researchers are taking advantage of naive judges. I think judges who were at least acquainted with the A.I. tricks of the trade would make more informed decisions.

Take, for instance, the aforementioned ELIZA. It doesn't imitate an Eastern-European teen, but rather a psychotherapist in the style of psychologist Carl Rogers. That's because Rogerian therapists concentrate on turning the patient's statements into questions to guide the patient to their own conclusions. That's relatively easy for a program to fake: just quote the user's words back as a question, relying on the user to carry the conversation.
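Just to illustrate, here's a minimal sketch in Python of that reflection trick. The function names and the tiny pronoun table are my own invention; the real ELIZA worked from a much richer script of ranked keyword rules.

    # A stripped-down sketch of the ELIZA trick: swap the pronouns
    # and hand the user's own statement back as a question.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "am": "are",
        "you": "I", "your": "my", "yours": "mine",
    }

    def reflect(statement):
        # Lowercase, drop trailing punctuation, and swap each
        # pronoun for its counterpart.
        words = statement.lower().rstrip(".!?").split()
        return " ".join(REFLECTIONS.get(word, word) for word in words)

    def respond(statement):
        # Quote the user's words back as a question.
        return "Why do you say that " + reflect(statement) + "?"

    print(respond("I am unhappy with my job."))
    # Why do you say that you are unhappy with your job?

No understanding required: the user supplies all the content, and the program just mirrors it back.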

Another approach was PARRY, a program impersonating someone with severe paranoid schizophrenia. Such people are prone to going off topic, ignoring others, and making non-sequitur statements. Again, that's easy to fake: if the program can't interpret what the user says, it just blurts out something random.
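That fallback strategy fits in a few lines of Python. The keyword table and canned lines here are hypothetical stand-ins of my own, not PARRY's actual script, which modelled the patient's fears and hostility in far more detail.

    import random

    # A sketch of the PARRY fallback, assuming a simple keyword
    # matcher in front of a pile of canned paranoid statements.
    KEYWORD_REPLIES = {
        "horse": "I know the mob controls the big rackets.",
        "police": "The police don't do their job.",
    }

    NON_SEQUITURS = [
        "I went to the races last week.",
        "You shouldn't be asking me that.",
        "People keep watching me.",
    ]

    def respond(user_input):
        text = user_input.lower()
        for keyword, reply in KEYWORD_REPLIES.items():
            if keyword in text:
                return reply
        # Can't interpret the input, so change the subject at random.
        return random.choice(NON_SEQUITURS)

    print(respond("Tell me about your family."))
    # Prints one of the random non-sequiturs.

Where ELIZA hides behind the user's words, PARRY hides behind the expectation that its persona won't make sense anyway.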

I always found it hilarious that someone had this program talk to ELIZA. They then had experts read the transcripts of these conversations and compare them to transcripts of actual Rogerian therapists talking to actual paranoid schizophrenics. The experts couldn't tell which transcripts were real. I still think that's the most impressive attempt to pass the Turing Test.

But there is one way in which the media hype about this test may be warranted. A few stories have launched into fear-stoking discussions of how we can't know whether anyone we encounter online is human. Actually, that could be a problem. While this cheated Turing Test may not be a good indication of the program's intelligence, the simple fact that it can fool people - however it accomplishes the task - could prove useful in online scams. As several people have joked on Twitter, if your program can convince people it is a Ukrainian boy, then pretending to be a Nigerian prince shouldn't be much harder. It turns out that's already happening.
