The Mind and the Computing Machine

Alan Turing and others
(edited by Jack Copeland)


These notes, hitherto unpublished, were taken during a discussion at the University of Manchester involving Turing, Emmet, Jefferson, Newman, Polanyi, Young, and others. The notes are headed 'Rough draft of the Discussion on the Mind and the Computing Machine, held on Thursday, 27th October, 1949, in the Philosophy Seminar.' The editor is grateful to Wolfe Mays for providing a copy of the original typescript.

NEWMAN TO POLANYI: The Gödel extra-system instances are produced according to a definite rule, and so can be produced by a machine. The mind/machine problem cannot be solved logically; it must rest on a belief that a machine cannot do anything radically new, to be worked on experimentally. The interesting thing to ask is whether a machine could produce the original Gödel paper, which seems to require an original set of syntheses.

TURING: emphasises the importance of the universal machine, capable of turning itself into any other machine.

POLANYI: emphasises the Semantic Function, as outside the formalisable system.

JEFFERSON: will admit that the respiratory system is mechanical, but not 'Mind'.

TURING: one may 'play about' with a machine and get the desired result without knowing the reason; an element of this kind enters into both engineering a machine and operating it.

Resumption at 6.40 p.m.

EMMET: Questions to be considered –

  1. Machine brain analogy;

  2. Physiological aspects;

  3. Are there any limitations to the kind of operations which a machine can do?

Questions asked: Is it possible to give a purpose to a machine? Can you 'put a purpose into a machine'?

TURING: This kind of thing can be done by 'trial and error' methods: purpose is 'use of previous combinations plus trial and error'.

BARTLETT: Even in the ideal calculating machine you have a small statistical error; the latter we find also in the brain.

YOUNG: Speaking from the physiological point of view: he is looking at the matter from a purely practical point of view. He feels that none of the collaboration which has so far taken place has enabled him to ask the right questions. Neuro-physiology is not progressing: Cybernetics is based on 'Models' by which we work. Another point: do agglomerations of brain-cells act in the same way as individual ones do in other parts of the body? There seem to be chains which are not functioning as conductors only, as has been hitherto thought. But if this is so – how should we proceed? No one knows just what to measure, and it is in the ordering of the attack that some collaboration might result. The physiologist starts with a system not made by himself, but in the case of the mechanical brain we start with something which has been made by us. Is the approach then identical? If not, can the right approach be suggested? The physiologist can stimulate points and see what happens – do the 'mechanicians' do the same?

NEWMAN: Possible approach: it might be asked how the calculating machine was designed – approaching the thing from the outside as it were. Could methods used in answering this question be applied to the other?

....?: replies that impulses could also be applied to the machine to see what happens in this case.

JEFFERSON: claimed that histological investigations which give visual data are easier.

YOUNG: The E.E.G. gives interesting elements – they do not depend on 'circuit' conceptions.

JEFFERSON: E.E.G. gives a very general result (produced diagrams). The exposed brain gives us no better trace than one still in the skull. The E.E.G. has therefore a certain use, but it is not a fine enough method.

YOUNG: disagrees, since we are dealing with cells in large groups.

JEFFERSON: points out that if a man is set a problem, e.g. to multiply 13 by 17, then the trace stops.

YOUNG: suggests that it might be of use in scanning large areas of the brain, with the aim of then finding an assumption applicable to each individual cell.

HEWELL (?) (I.C.I. Research): dealt with the analysis of E.E.G. traces; pointed out that it is possible to tell in which stage of a fit a patient finds himself by analysing the E.E.G. records. The Electrocardiogram gives a typical wave-form from the heart, and if the latter is reduced to one small piece of fibre, the trace continues even when there is no beating motion. Hence we have something which is passing over the organism, and not something in it. (?)

JEFFERSON: thinks that E.E.G. is not much use.

YOUNG: disagrees.

JEFFERSON: E.E.G. trace-vibration depends on many factors (blood-sugar, etc.); hence we are never sure when we are dealing with a 'normal' subject.

YOUNG: thinks comparative method, either between people, or between animals, would be useful.

HEWELL: if certain areas (two of them) of the hypothalamus are stimulated, we have then a certain reaction ('petit mal') which leads us to the supposition that 'scanning' is going on in the brain.

JEFFERSON: spoke of the reduction of the importance of considerations regarding the cerebral cortex in neuro-physiology. Regarding 'petit mal' – in the case of traces taken from cats – we have a 'spike' in the wave-trace which is very similar to what we get from human traces under similar conditions. This is associated with the optic thalamus.

HEWELL: regarding 'scanning'; the idea is that the networks are being scanned by a discharging system. If a piece of the cortex is removed it continues to discharge in the way that it did before removal.

YOUNG: told of the experiment of cutting into a frog's brain, in which case the pulses resume when the two halves are put back together. Hence 'connexion' does not seem to be essential to the brain-functions as far as the traces show us them. In Octopus there is a centre of cells which, if cut out, result in the animal's being unable to retain the memory of a very simple trick – it retains it now for 5-6 minutes only, whereas the normal retention time is 8-10 days. Hence this seems to be a reinforcement of a cell-theory for memory at least.

NEWMAN: does this support a theory of electrical-charge stores, like we have in the machine?

....?: said that there was an analogical process in the machine.

HEWELL: this is the difference between the machine and the brain; the brain is 'reminded' by a 'leak' into established circuits: in the meantime the current flows around them. (This is McCulloch's memory-store-cells theory.)

YOUNG: disagrees on clinical grounds.

JEFFERSON: we have no idea where these 'stores' are, but there is simply no better suggestion. But if part of the brain is removed, memory remains.

YOUNG: 'limited space' conception in this matter seems to be false.

NEWMAN: there are some limits, surely, to the possible 'cutting away' – some 'compression' does take place, but there is after all a limit.

JEFFERSON: yes, but what were the last remaining cells doing in the meantime?

YOUNG: storage seems to be in the whole, and not in any particular part. Gestalts seem to be involved.

NEWMAN: what we ought to do is to start like the atomists with a 'billiard ball' hypothesis – a hypothesis which is obviously wrong, yet which is after all a point of departure.


YOUNG: Logic might help to ask the right questions, and to set up hypotheses.

NEUMANN: spoke of attempted consistency proofs as regards the theory of neural networks. (H. Copeland.)

NEWMAN: crude models can at any rate be eliminated.

POLANYI: how can e.g. 'seeing stereoscopically' be made the subject of a 'model'? What is the connection?

....?: replied that the use lay in guiding advancing hypotheses.

....?: the question is not one of a 'reality' relation, but of the use to which a model can be put.

NEWMAN: spoke of 'logical similarity' between e.g. animals and mendelian heredity tables. The model is to be distinguished from the explanation.

JEFFERSON: said that many of these 'models' are not worth making because you already know what is going to result from them.

YOUNG: agrees logically, but says that 'intuitively' you learn a great deal.

NEWMAN: agrees.

....?: says that before you get results there must be correspondence between the model and the reality; e.g. the neurological model lacks certain correspondences.

POLANYI: what meaning have such models? Can we derive from the model the conception of 'seeing in depth'?

NEWMAN: In making models we assume that some quantitative solution is possible, and the rest is left out.

HEWELL: regarding 'choice': implies two or more potential incompatibles, hence there must be an element of choice here, i.e. inhibitory power must be exerted. In the animal a path is established such that the preferred action results.

TURING: yes – random operation can be made to become regular after a certain prevailing tendency has shown itself.

HEWELL: respiratory centres have movements which cannot be inhibited, but in choice the incompatible can be accepted and the normal rejected.

TURING: machine may be bad with incompatibles, but when it gets 'contradiction' as a result, there is then a mechanism to go back and look at things which led to the contradiction.

JEFFERSON: but this is an argument against the machine: do human beings do this kind of thing?

TURING: yes – mathematicians.

(Murmur – are mathematicians human beings?)

(Details of this 'going back' process asked for.)

NEWMAN: suggested that this kind of thing was more on the subject of lines of conduct, and was not covering the logical aspect only.

TURING: declares he will try to get back to the point: he was thinking of the kind of machine which takes problems as objectives, and the rules by which it deals with the problems are different from the objective. Cf. Polanyi's distinction between mechanically following rules about which you know nothing, and rules about which you know.

POLANYI: tries to identify rules of the logical system with the rules which determine our own behaviour, and these are quite different things.

EMMET: the vital difference seems to be that a machine is not conscious.

TURING: a machine may act according to two different sets of rules, e.g. if I do an addition sum on the blackboard in two different ways:

  1. by a conscious working towards the solution

  2. by a routine, habitual method

then the operation involves in the first place the particular method by which I perform the addition – this is conscious: and in the second place the neural mechanism is in operation unconsciously all the while. These are two different things, and should be kept separate.

POLANYI: interprets this as suggestion that the semantic function can ultimately be specified; whereas in point of fact a machine is fully specifiable, while a mind is not.

TURING: replies that the mind is only said to be unspecifiable because it has not yet been specified; but it is a fact that it would be impossible to find the programme inserted into quite a simple machine – and we are in the same position as regards the brain. The conclusion that the mind is unspecifiable does not follow.

POLANYI: says that this should mean that you cannot decide logical problems by empirical methods. The terms by which we specify the operations of the mind are such that they cannot be said to have specified the mind. The specification of the mind implies the presence of unspecified and pro tanto unspecifiable elements.

TURING: feels that this means that my mind as I know it cannot be compared to a machine.

POLANYI: says that acceptance as a person implies the acceptance of unspecified functions.

....?: re-raises the point regarding the undiscoverability of a programme inserted into machines. Could this be clarified?

Next came a return to the 'model' question as regards memory storage.

YOUNG: was unable to see any possible 'picture' of memory storage.

TURING: suggested that a machine containing neuron-models might help.

YOUNG: gave technical details.

TURING: asked what could be taken as model cells?

YOUNG: gave suggested diagrams of nerve cells in star-shaped arrangements, and much discussion with TURING ensued.

On this note the meeting closed.


Alan Turing (copyright Beryl Turing)

Max Newman