By Dr Llewellyn Cox, Principal, LieuLabs
On an otherwise unremarkable Saturday in June 2014, a group of computer scientists, public figures, and celebrities gathered at London’s Royal Society. They were all there for one reason — to engage in a text-based chat game to determine if a computer could pass the “iconic” Turing test.
A few hours later, the results were in. Professor Warwick of Reading University announced that a chatbot had successfully tricked 33% of the judges into thinking it was a real boy, and had therefore become the first computer to have passed the Turing test:
It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting. — Prof. Kevin Warwick
Within hours, breathless tweets, likes, and pins swept across the internet, announcing this amazing result to the world, or at least to the subculture that apparently really f***ing loves science but doesn’t seem to have much time or inclination for actual critical analysis. A day or so later came the rebuttals and debunkings from the more inquisitive corners of the online universe. So what really happened, and what does a machine passing a Turing test mean for society?
At the Royal Society, “Eugene Goostman”, a 13-year-old boy from Ukraine, had been busy. As a participant in the test, he was engaged in 30 different five-minute text conversations with the celebrities and scientists who made up the panel of event judges. At the end of the day, 11 of the 30 judges who talked to Eugene thought he was a real young man from Eastern Europe. The 19 interrogators in the majority, who realized that Eugene was in fact a very well-programmed computer, were probably alerted by his evasive answers, awkward linguistic phrasing, and very limited social knowledge.
Nevertheless, the organizers declared that Eugene had passed the Turing test by deceiving more than 1 in 3 of the judges, a hypothetical threshold Turing had proposed in his thought experiment (and nowhere near the 95% confidence level by which most scientific studies are judged). Turing’s experiment had also required that the interrogator determine which of two subjects, one human and one a computer, was the human; at the Royal Society, Eugene Goostman was interrogated in isolation, with each judge making a yes/no determination as to whether he was human or not. Even if Turing’s 1/3 success rate could be validated, it would not apply to this experiment, because it recorded a fundamentally different output from the judges.
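The sample-size problem can be made concrete with a quick back-of-the-envelope calculation. The sketch below (assuming the reported figure of 11 judges fooled out of 30) uses a standard normal-approximation 95% confidence interval for a proportion; the exact interval construction is an illustrative choice, not something from the event itself:

```python
import math

# Reported result: 11 of 30 judges were fooled.
successes, n = 11, 30
p_hat = successes / n                      # observed deception rate, ~36.7%

# Normal-approximation 95% confidence interval for a proportion:
# p_hat +/- 1.96 * sqrt(p_hat * (1 - p_hat) / n)
se = math.sqrt(p_hat * (1 - p_hat) / n)    # standard error of the proportion
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"observed rate: {p_hat:.1%}")
print(f"95% CI: [{lo:.1%}, {hi:.1%}]")     # roughly 19% to 54%
```

With only 30 trials, the interval stretches from under 20% to over 50%, comfortably straddling the 1/3 mark on both sides, so the data cannot even establish that the true deception rate exceeds Turing’s hypothetical threshold, let alone support a landmark claim.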
Ultimately, Eugene Goostman was able to beat the Turing test because Eugene Goostman was specifically designed to beat the Turing test. The persona of a teenage subject, a non-native English speaker, gave cover to its limitations in conducting a live conversation; enough judges were tricked into ignoring the machine’s shortcomings by the illusion that it was a linguistically challenged young person with limited social skills and knowledge.
Turing proposed this experiment as a way to test whether a machine could think for itself. The purpose of the Turing test is to assess creative, integrated strategic thinking that is the hallmark of human consciousness. By gaming the way that the test was assessed, reducing it to textual trickery, the experiment at the Royal Society was a superb demonstration of programming finesse, but did not shed any new light on the feasibility of genuine artificial consciousness.
In many ways, the experiment at the Royal Society epitomizes our civilization’s schizophrenic relationship with science and technology. It is far from the only example of a “scientific” event more focused on grabbing headlines and publicity for a technology than on rigorously examining its claims. In a society that seeks heroes and superstars to worship, we elevate figures like Turing, Einstein, and Tesla to almost superhuman status, and Turing’s very real suffering at the hands of primitively-minded authorities only exacerbates his status as a martyr to the cause. Our understanding of science as a deliberate, exact examination of the Universe has been supplanted by an Instagram-ready hunger for stunning imagery and witty non sequiturs to be deployed against unbelievers.
Our obsession with data, but not with its rigorous scientific examination, has led us to a world where it is too easy to lose sight of our real goals. As companies and governments find ever more effective ways to track progress against benchmark metrics, projects drift toward maximizing the metric scores rather than the outcomes the metrics were meant to capture, leading to schools “teaching to the test”, or hospitals minimizing [reported] waiting times at the expense of delivering effective care, among many other sad examples. Similarly, by creating a machine designed to “win” at the Turing test through misdirection and obfuscation, we lose sight of the diligent work of thousands of people who seek to make machines smarter, more responsive, and better able to meet people’s needs.
As artificial intelligence becomes more and more a part of our lives, it is quite conceivable that we will someday need an effective Turing test to tell humans apart from machines. If computers were to develop real consciousness, self-awareness, and agency, then we would have to answer serious questions about our relationship with them. Does real consciousness bestow rights, such as the right to live? If so, could you then “murder” a computer? What constitutes the essence of a conscious computer, the hardware or the software? Would synthetic beings be treated as a special class, akin to apes or whales, not equal with humans but legally protected from unnecessary killing and experimental treatments? In this scenario, would a “robot rights” movement emerge to challenge their second-class treatment by humans?
The world is changing rapidly. Disparate societies are more interconnected than ever, and systems are ever more autonomous. As our machinery develops higher levels of consciousness, our relationship with it changes. In order to remain masters of our own destiny, humans must ask the hard questions about our relationships with technology — and ask them scientifically — in order to stay ahead of ourselves. To do otherwise is to surrender to assumption, superstition, and decline.