Highlighted Selections from:

Ascribing Mental Qualities to Machines


McCarthy, John. "Ascribing Mental Qualities to Machines." (1979) http://www-formal.stanford.edu/jmc/ascribing.html

p.1: To ascribe certain beliefs, knowledge, free will, intentions, consciousness, abilities or wants to a machine or computer program is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behavior, or how to repair or improve it. -- Highlighted mar 22, 2014

p.2: While we will be quite liberal in ascribing some mental qualities even to rather primitive machines, we will try to be conservative in our criteria for ascribing any particular quality. -- Highlighted mar 22, 2014

p.17: Conway and Gosper have shown that self-reproducing universal computers could be built up as Life configurations. Poundstone (1984) gives a full description of the Life automaton including the universal computers and self-reproducing systems.

Consider a number of such self-reproducing universal computers operating in the Life plane, and suppose that they have been programmed to study the properties of their world and to communicate among themselves about it and pursue various goals co-operatively and competitively. Call these configurations Life robots. In some respects their intellectual and scientific problems will be like ours, but in one major respect they live in a simpler world than ours seems to be. Namely, the fundamental physics of their world is that of the Life automaton, and there is no obstacle to each robot knowing this physics, and being able to simulate the evolution of a Life configuration given the initial state. Moreover, if the initial state of the robot world is finite, it can have been recorded in each robot in the beginning or else recorded on a strip of cells that the robots can read.
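[Note: The "fundamental physics" here is just the Life rule: a cell is alive in the next generation exactly when it has three live neighbours, or two and is already alive. A minimal sketch of one generation in Python, representing a configuration as a set of live-cell coordinates; the representation and names are illustrative, not from the paper.]

    from collections import Counter

    def life_step(live):
        """Advance a Life configuration one generation.
        `live` is a set of (x, y) coordinates of live cells."""
        # Count the live neighbours of every cell adjacent to a live cell.
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 live neighbours; survival on 2 or 3.
        return {cell for cell, n in neighbour_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A glider: one of the configurations whose distant future is easy to predict.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    print(life_step(glider))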

Since these robots know the initial state of their world and its laws of motion, they can simulate as much of its history as they want, assuming that each can grow into unoccupied space so as to have memory to store the states of the world being simulated. This simulation is necessarily slower than real time, so they can never catch up with the present—let alone predict the future. This is obvious if the simulation is carried out straightforwardly by updating a list of currently active cells in the simulated world according to the Life rule, but it also applies to any clever mathematical method that might predict millions of steps ahead so long as it is supposed to be applicable to all Life configurations. (Some Life configurations, e.g. static ones or ones containing single gliders or cannon, can have their distant futures predicted with little computing.) Namely, if there were an algorithm for such prediction, a robot could be made that would predict its own future and then disobey the prediction. The detailed proof would be analogous to the proof of unsolvability of the halting problem for Turing machines.
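[Note: The diagonal argument can be sketched concretely. Suppose, for contradiction, a hypothetical function predict(robot) that returns any robot's next action ahead of real time. Nothing like it appears in the paper; the names below are assumptions for illustration only.]

    def contrary_robot(predict):
        """A robot that asks the supposed predictor about itself,
        then does the opposite of whatever was predicted."""
        predicted = predict(contrary_robot)   # what am I predicted to do?
        return not predicted                  # disobey the prediction

    # Whatever a candidate predictor answers, the robot falsifies it,
    # so no predictor correct on all Life configurations can exist.
    assert contrary_robot(lambda robot: True) != True
    assert contrary_robot(lambda robot: False) != False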

Now we come to the point of this long disquisition. Suppose we wish to program a robot to be successful in the Life world in competition or cooperation with the others. Without any idea of how to give a mathematical proof, I will claim that our robot will need programs that ascribe purposes and beliefs to its fellow robots and predict how they will react to its own actions by assuming that they will act in ways that they believe will achieve their goals. Our robot might acquire these mental theories in several ways: First, we might design the universal machine so that they are present in the initial configuration of the world. Second, we might program it to acquire these ideas by induction from its experience and even transmit them to others through an “educational system”. Third, it might derive the psychological laws from the fundamental physics of the world and its knowledge of the initial configuration. Finally, it might discover how robots are built from Life cells by doing experimental “biology”.
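[Note: McCarthy gives no algorithm here, but the prediction scheme he describes, assuming the other robot takes the action it believes will best achieve its goals, can be sketched. The function names and the utility-maximizing decision rule are illustrative assumptions, not the paper's construction.]

    def predict_reaction(possible_actions, believed_outcome, goal_utility):
        """Intentional-stance prediction: ascribe beliefs and purposes to
        a fellow robot, then assume it picks the action whose believed
        outcome it values most.

        possible_actions -- actions we think the other robot can take
        believed_outcome -- our model of what IT believes each action yields
        goal_utility     -- our model of how much IT values each outcome
        """
        return max(possible_actions,
                   key=lambda action: goal_utility(believed_outcome(action)))

    # Example: we ascribe to a neighbouring robot the goal of acquiring
    # free cells and the belief that expanding east yields more than waiting.
    believed_outcome = {"expand_east": 5, "wait": 0}.get
    print(predict_reaction(["expand_east", "wait"], believed_outcome,
                           goal_utility=lambda cells: cells))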

Knowing the Life physics without some information about the initial configuration is insufficient to derive the psychological laws, because robots can be constructed in the Life world in an infinity of ways. This follows from the “folk theorem” that the Life automaton is universal in the sense that any cellular automaton can be constructed by taking sufficiently large squares of Life cells as the basic cell of the other automaton.

Men are in a more difficult intellectual position than Life robots. We don’t know the fundamental physics of our world, and we can’t even be sure that its fundamental physics is describable in finite terms. Even if we knew the physical laws, they seem to preclude precise knowledge of an initial state and precise calculation of its future both for quantum mechanical reasons and because the continuous functions needed to represent fields seem to involve an infinite amount of information. This example suggests that much of human mental structure is not an accident of evolution or even of the physics of our world, but is required for successful problem solving behavior and must be designed into or evolved by any system that exhibits such behavior. -- Highlighted mar 22, 2014

p.23: The second level of self-consciousness requires a term I in the language denoting the self. I should belong to the class of persistent objects and some of the same predicates should be applicable to it as are applicable to other objects. For example, like other objects I has a location that can change in time. I is also visible and impenetrable like other objects. However, we don’t want to get carried away in regarding a physical body as a necessary condition for self-consciousness. Imagine a distributed computer whose sense and motor organs could also be in a variety of places. We don’t want to exclude it from self-consciousness by definition. -- Highlighted mar 22, 2014
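[Note: One concrete reading of this requirement is that the term I denotes an ordinary individual in the world model, and the same predicates (location, visibility, impenetrability) apply to it as to anything else. A sketch under that assumption; the class and fields are invented for illustration.]

    from dataclasses import dataclass

    @dataclass
    class PersistentObject:
        """An object the robot's world model tracks through time."""
        name: str
        location: tuple        # changes in time, like any object's
        visible: bool = True
        impenetrable: bool = True

    # The self is denoted by an ordinary term I, and the same predicates
    # apply to it as to other objects. Nothing here requires I to be a
    # single body; a distributed robot might carry several locations.
    I = PersistentObject(name="I", location=(3, 7))
    rock = PersistentObject(name="rock", location=(4, 7))
    print(I.location, rock.location)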

p.27: This paper is partly an attempt to do what Ryle (1949) says can’t be done and shouldn’t be attempted—namely to define mental qualities in terms of states of a machine. The attempt is based on methods of which he would not approve; he implicitly requires first order definitions, and he implicitly requires that definitions be made in terms of the state of the world and not in terms of approximate theories. -- Highlighted mar 22, 2014

p.29: Philosophy and artificial intelligence. These fields overlap in the following way: In order to make a computer program behave intelligently, its designer must build into it a view of the world in general, apart from what he includes about particular sciences. (The skeptic who doubts whether there is anything to say about the world apart from the particular sciences should try to write a computer program that can figure out how to get to Timbuktoo, taking into account not only the facts about travel in general but also facts about what people and documents have what information, and what information will be required at different stages of the trip and when and how it is to be obtained. He will rapidly discover that he is lacking a science of common sense, i.e. he will be unable to formally express and build into his program “what everybody knows”. Maybe philosophy could be defined as an attempted science of common sense, or else the science of common sense should be a definite part of philosophy.) -- Highlighted mar 22, 2014