Communication, interaction and the Turing Test

The kind of machines to which people are often tempted to attribute consciousness are ones which just sit somewhere handling data, and which are not animate in any real sense. It is animation which gives us the clue to when a human being is conscious and when not. However, it would be an easy matter to build a machine which jumped around and suchlike, but to which no one would have the least temptation to attribute consciousness. For there is more to genuine animation than simply doing what a jack-in-the-box or ‘nodding dog’ does. It is the ability to communicate: to enter into a dialogue, or into a shared practice with other creatures.

One thing, then, which might quite rightly lead us to suspect that there was some real thinking going on in a machine would be if we found ourselves reacting to the machine as though it were a conscious being, because its responses were so like those of a human person. If, that is, we found ourselves entering into what seemed like real communication with it. This tendency would be at its strongest if we found it impossible to distinguish between communicating with the machine and communicating with a human being. This was, in fact, the test suggested by the pioneer of computing, Alan Turing, in 1950. Turing argued that there must be some test for whether a thing is really thinking or not – a test which most human beings pass, and which most, if not all, machines fail.

The test which Turing came up with was precisely that of finding out whether communication with the machine was indistinguishable from communication with a human being – whether a person could be ‘fooled’ by a machine.

Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain – that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants. (Jefferson, 1949, quoted by Turing)

Turing goes on to give an exchange between an ‘interrogator’ and a ‘witness’, in which the interrogator is attempting to establish whether some lines written by the witness were written with genuine understanding or not. The gist of the argument is that if, in a situation like this, the interrogator cannot distinguish between a machine as ‘witness’ and a human being, then there are no grounds for holding the machine’s intelligence to be less a case of genuine thought than that of a human person.

Inevitably, attempts have been made to create computer programs which are capable of passing a test such as the Turing Test: programs which will cause a machine to converse with someone in a believable human way. One of these became well known as a just credible human substitute. It has actually been known to fool an employee of the company where it was developed. The program is called ELIZA, after Eliza Doolittle, the heroine of Shaw’s Pygmalion, who was taught to speak ‘good’ English…It will be noticed that ELIZA’s conversation resembles that of a psychiatrist talking to a patient. Such an interview also tends to keep off ‘hard facts’ and revolve more around feelings and reactions, so that a detailed knowledge of actual things is unnecessary.

As Professor Margaret Boden has put it:

Such ‘speaking machines’ do not behave like someone conversing in her native language. Rather, they resemble a person resorting to trickery and semantic sleight-of-hand in order to hide their own lack of understanding of a foreign tongue.

There is, of course, no reason in principle why a combination of this sort of interrogation of the user, with a phenomenally large database of everyday facts, should not produce something realistic. But ‘in principle’ is a very big qualification to make.
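The ‘semantic sleight-of-hand’ Boden describes can be made concrete with a minimal Python sketch of an ELIZA-style responder. The rules and replies below are invented for illustration – the real ELIZA used a much richer script – but the mechanism is the same: match a keyword pattern, reflect the pronouns, and echo the user’s own words back, attaching no meaning to them at any point.

```python
import re

# "Reflections" swap first- and second-person words so the echoed fragment
# reads as if the program had understood the remark.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Hypothetical ELIZA-style rules: a pattern to match against the input,
# and a reply template that reuses the captured fragment.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap person-words in the captured fragment (I -> you, my -> your)."""
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned reply built from the user's own words."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    # Content-free fallback keeps the conversation away from 'hard facts'.
    return "Please go on."

print(respond("I feel nobody listens to me"))  # -> Why do you feel nobody listens to you?
print(respond("My boss"))                      # -> Tell me more about your boss.
print(respond("The weather is lovely today"))  # -> Please go on.
```

Nothing in this program stands for anything: the ‘meanings’ of its replies are, as the passage below puts it, imposed entirely by us.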

…it ought now to be obvious that one aspect of this will lie in the question of what it is for a creature to be actually understanding what it is doing, and in particular understanding the language it uses. The kinds of systems we glanced at above are not such that one would suspect them of genuinely attaching meaning to the utterances they produce, however sophisticated these utterances might look on the surface. To put this another way, systems of this type have no real semantics: the meanings of their utterances are only meanings imposed by us, the creators and users of them, and not meanings to the machine or system itself. And one reason why the vast majority of artifacts cannot seriously be suspected of possessing any semantics is that they do not interact in the relevant ways with the ‘outside’ world. They cannot pin meanings on words in the way that we can, because they do not have genuine access to the things which the words stand for in the way that we do. (pp. 53–61)

Brown, G. (1989) Minds, Brains and Machines. Bristol: Bristol Classical Press.

