The Function of Conscious Experience: An Analogical Paradigm of Perception and Behavior

Steven Lehar

slehar@vision.eri.harvard.edu

Submitted to Consciousness and Cognition July 2000

Abstract

The question of whether conscious experience has any functional purpose depends on a more fundamental issue concerning the nature of conscious experience. In particular, it depends on whether the world of experience is the external world itself, as suggested by direct realism, or merely a virtual-reality replica of that world in an internal representation, as in indirect realism, or representationalism. There is an epistemological problem with the notion of direct realism, for we cannot be consciously aware of objects beyond the sensory surface. Therefore the world of experience can only be an internal replica of the external world. This in turn validates a phenomenological approach to studying the nature of the perceptual representation in the brain. Phenomenology reveals that the representational strategy employed in the brain is an analogical one, in which objects are represented in the brain by constructing full spatial replicas of those objects in an internal representation.

Introduction

The question of the functional role of conscious experience is currently an active area of debate. On the one side Dennett (1988, 1991) argues that consciousness is an epiphenomenon, with no direct functional value. Humphrey (1999 p. 250) on the other hand argues that consciousness must have some adaptive value on evolutionary grounds, for nothing can evolve by natural selection unless it has some effect on behavior. The question is a paradigmatic one in the Kuhnian sense (Kuhn 1970), because the differences of opinion on the function of conscious experience reflect deeper differences on the more fundamental question of what consciousness itself actually is. I propose that the debate over the ontological status of conscious experience in turn rests on the epistemological question of whether the world we see around us is the real world itself, or whether it is merely a virtual-reality replica of that external world in an internal representation. Until this central issue is resolved on sound logical grounds, the sciences of psychology and consciousness studies are condemned to remain in a pre-paradigmatic state, with opposing camps arguing at cross-purposes due to lack of consensus on the foundational issues of the science.

In this paper I argue that the epistemological question is not open, but that there is only one reasonable interpretation of the ontology of conscious experience, i.e. that consciousness is in fact an internal replica of the external world rather than the world itself. This in turn validates a phenomenological approach to the study of conscious experience, i.e. to examine the world around us not as a scientist examining an objective external world, but as a perceptual scientist examining a rich and complex internal representation. I will show how the phenomenological approach can be employed to examine both the structure of conscious experience, and also the detailed workings of the computational strategy or algorithm that guides behavior. This approach to the study of conscious experience clearly demonstrates that consciousness is not an epiphenomenon, but serves an essential functional role, which is to provide an analogical representation of the external world that operates in conjunction with an analogical computational strategy to guide behavior. In other words, perception and behavior are intimately coupled through the agency of conscious experience, and careful examination of the properties of that experience offers insights into the nature of both perception and behavior.

The Epistemological Divide

The debate over the nature of conscious experience is confounded by the deeper epistemological question of whether the world we see around us is the real world itself, or merely an internal perceptual copy of that world generated by neural processes in our brain. In other words this is the question of direct realism, also known as naive realism, as opposed to indirect realism, or representationalism. Although this issue is not much discussed in contemporary psychology, it is an old debate that has resurfaced several times, and the persistent failure to reach consensus on it continues to bedevil the debate on the functional role of conscious experience. The reason for the continued confusion is that both direct and indirect realism are frankly incredible, although each is incredible for different reasons.

Problems with Direct Realism

The direct realist view (Gibson 1972) is incredible because it suggests that we can have experience of objects out in the world directly, beyond the sensory surface, as if bypassing the chain of sensory processing. For example if light from this paper is transduced by your retina into a neural signal which is transmitted from your eye to your brain, then the very first aspect of the paper that you can possibly experience is the information at the retinal surface, or the perceptual representation that it stimulates in your brain. The physical paper itself lies beyond the sensory surface and therefore must be beyond your direct experience. But the perceptual experience of the page stubbornly appears out in the world itself instead of in your brain, in apparent violation of everything we know about the causal chain of vision. The difficulty with the concept of direct perception is most clearly seen when considering how an artificial vision system could be endowed with such external perception. Although a sensor may record an external quantity in an internal register or variable in a computer, from the internal perspective of the software running on that computer, only the internal value of that variable can be "seen", or can possibly influence the operation of that software. In exactly analogous manner the pattern of electrochemical activity that corresponds to our conscious experience can take a form that reflects the properties of external objects, but our consciousness is necessarily confined to the experience of those internal effigies of external objects, rather than of external objects themselves. Unless the principle of direct perception can be demonstrated in a simple artificial sensory system, this explanation remains as mysterious as the property of consciousness it is supposed to explain.

Problems with Indirect Realism

The indirect realist view is also incredible, for it suggests that the solid stable structure of the world that we perceive to surround us is merely a pattern of energy in the physical brain, i.e. that the world that appears to be external to our head is actually inside our head. This could only mean that the head we have come to know as our own is not our true physical head, but is merely a miniature perceptual copy of our head inside a perceptual copy of the world, all of which is completely contained within our true physical skull. Stated from the internal phenomenal perspective, out beyond the farthest things you can perceive in all directions, i.e. above the dome of the sky and below the earth under your feet, or beyond the walls, floor, and ceiling of the room you perceive around you, beyond those perceived surfaces is the inner surface of your true physical skull encompassing all that you perceive, and beyond that skull is an unimaginably immense external world, of which the world you see around you is merely a miniature virtual-reality replica. The external world and its phenomenal replica cannot be spatially superimposed, for one is inside your physical head, and the other is outside. Therefore the vivid spatial structure of this page that you perceive here in your hands is itself a pattern of activation within your physical brain, and the real paper of which it is a copy is out beyond your direct experience. Although this statement can only be true in a topological, rather than a strict topographical sense, this insight emphasizes the indisputable fact that no aspect of the external world can possibly appear in consciousness except by being represented explicitly in the brain. The existential vertigo occasioned by this concept of perception is so disorienting that only a handful of researchers have seriously entertained this notion or pursued its implications to its logical conclusion. (Kant 1781/1991, Koffka 1935, Köhler 1971 p. 125, Russell 1927 pp. 137-143, Smythies 1989, 1994, Harrison 1989, Hoffman 1998)

Another reason why the indirect realist view is incredible is that the observed properties of the world of experience, when viewed from the indirect realist perspective, are difficult to reconcile with contemporary concepts of neurocomputation. For the world we perceive around us appears as a solid spatial structure that maintains its structural integrity as we turn around and move about in the world. Perceived objects within that world maintain their structural integrity and recognized identity as they rotate, translate, and scale by perspective in their motions through the world. These properties of conscious experience fly in the face of everything we know about neurophysiology, for they suggest some kind of three-dimensional imaging mechanism in the brain, capable of generating three-dimensional volumetric percepts of the degree of detail and complexity observed in the world around us, percepts which rotate and translate freely relative to the space in which they appear. No plausible mechanism has ever been identified neurophysiologically that exhibits this incredible property. The properties of the phenomenal world are therefore inconsistent with contemporary concepts of neural processing, which is exactly why these properties have been so long ignored.

Problems with Projection Theory

There is a third alternative besides the direct and indirect realist views, and that is a projection theory, whereby the brain does indeed process sensory input, but that the results of that processing get somehow projected back out of the brain to be superimposed back on the external world (Ruch 1950 quoted in Smythies 1954, O'Shaughnessy 1980 pp 168-192, Velmans 1990, Baldwin 1992). According to this view, the world around us is part real, and part perceptual construction, and the two are spatially superimposed. However no physical mechanism has ever been proposed to account for this external projection. The problem with this notion becomes clear when considering how an artificial intelligence could possibly be endowed with this kind of external projection. Although a sensor may record an external quantity in an internal register or variable in a computer, there is no sense in which that internal value can be considered to be external to that register or to the physical machine itself, whether detected externally with an electrical probe, or examined internally by software data access. Unless the principle of external projection can be demonstrated in a simple artificial sensory system, this explanation too remains as mysterious as the property of consciousness it is supposed to explain.

Selection from Incredible Alternatives

We are left therefore with a choice between three alternatives, each of which appears to be absolutely incredible. Contemporary neuroscience seems to take something of an equivocal position on this issue, recognizing the epistemological limitations of the direct realist view and of the projection hypothesis, while being unable to account for the incredible properties suggested by the indirect realist view. However one of these three alternatives simply must be true, to the exclusion of the other two. And the issue is by no means inconsequential, for these opposing views suggest very different ideas of the function of visual processing, or what all that neural wetware is supposed to actually do. Therefore it is of central importance for psychology to address this issue head-on, and to determine which of these competing hypotheses reflects the truth of visual processing.

The problem with the direct realist view is of an epistemological nature, and is therefore a more fundamental objection, for direct realism is nothing short of magical in its suggestion that we can see the world out beyond the sensory surface. The projection theory has a similar epistemological problem, and is equally magical and mysterious, suggesting that neural processes in our brain are somehow also out in the world. Both of these paradigms have difficulty with the phenomena of dreams and hallucinations (Revonsuo 1995), which present the same kind of phenomenal experience as spatial vision, except independently of the external world in which that perception is supposed to occur in normal vision. It is the implicit or explicit acceptance of this naive concept of perception that has led many to conclude that consciousness is deeply mysterious and forever beyond human comprehension. For example Searle (1992) contends that consciousness is impossible to observe, for when we attempt to observe consciousness we see nothing but whatever it is that we are conscious of; that there is no distinction between the observation and the thing observed.

The problem with the indirect realist view on the other hand is more of a technological or computational limitation, for we cannot imagine how contemporary concepts of neurocomputation, or even artificial computation for that matter, can account for the properties of perception as observed in visual consciousness. It is clear however that we have yet to discover the most fundamental principles of neural computation and representation, and therefore we cannot allow our currently limited notions of neurocomputation to constrain our observations of the nature of visual consciousness. The phenomena of dreams and hallucinations clearly demonstrate that the brain is capable of generating vivid spatial percepts of a surrounding world independent of that external world, and that capacity must be a property of the physical mechanism of the brain. Normal conscious perception can therefore be characterized as a guided hallucination (Revonsuo 1995), which is as much a matter of active construction as it is of passive detection. If we accept the truth of indirect realism, this immediately disposes of at least one mysterious or miraculous component of consciousness, which is its unobservability. For in that case consciousness is indeed observable, contrary to Searle's contention, because the objects of experience are first and foremost the product or "output" of consciousness, and only in secondary fashion are they also representative of objects in the external world. Searle's difficulty in observing consciousness is analogous to saying that you cannot see the moving patterns of glowing phosphor on your television screen, all you see is the ball game that is showing on that screen. The indirect realist view of television is that what you are seeing is first and foremost glowing phosphor patterns on a glass screen, and only in secondary fashion are those moving images also representative of the remote ball game.

The choice therefore is that either we accept a magical, mysterious account of perception and consciousness that seems impossible in principle to implement in any artificial vision system, or we have to face the seemingly incredible truth that the world we perceive around us is indeed an internal data structure within our physical brain. If science is to triumph over mysticism therefore, we are compelled to accept the latter view, and accept the reality of conscious experience as a direct manifestation of neurophysiological processes within our physical brain. This in turn validates a phenomenological approach to the study of conscious experience, i.e. to examine the world around us not as a scientist examining an objective external world, but as a perceptual scientist examining a rich and complex internal representation. I will show how phenomenological observation can be used to determine the dimensions of conscious experience, and what the structural form of conscious experience tells us about the representational strategy used in the brain. I will then show how the phenomenological technique can also be employed to determine the functional properties of the conscious experience, or how the information encoded in perception is used to elicit behavior.

The Dimensions of Conscious Experience

The phenomenal world is composed of solid volumes, bounded by colored surfaces, embedded in a spatial void. Every point on every visible surface is perceived at an explicit spatial location in three dimensions (Clark 1993), and all of the visible points on a perceived object like a cube or a sphere, or this page, are perceived simultaneously in the form of continuous surfaces in depth. The perception of multiple transparent surfaces, as well as the experience of empty space between the observer and a visible surface, reveals that multiple depth values can be perceived at any spatial location. The information content of perception can therefore be characterized as a three-dimensional volumetric data structure in which every point can encode either the experience of transparency, or the experience of a perceived color at that location. Since perceived color is expressed in the three dimensions of hue, intensity, and saturation, the perceived world can be expressed as a six-dimensional manifold (Clark 1993), with three spatial and three color dimensions.
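
As a concrete illustration, here is a minimal sketch (my own, not from the paper) of such a six-dimensional manifold as a data structure: a three-dimensional voxel grid in which every point either encodes transparency or a perceived color in the three dimensions of hue, intensity, and saturation.

```python
# A minimal sketch of the six-dimensional manifold described above:
# a 3-D voxel grid in which every point either encodes transparency
# or a color in three dimensions (hue, intensity, saturation).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Color:
    hue: float          # each color dimension in [0, 1]
    intensity: float
    saturation: float

SIZE = 32
# Three spatial dimensions; None encodes the experience of transparency.
percept: list[list[list[Optional[Color]]]] = [
    [[None] * SIZE for _ in range(SIZE)] for _ in range(SIZE)
]

# A perceived opaque surface point at spatial location (x, y, z):
percept[10][12][5] = Color(hue=0.6, intensity=0.5, saturation=0.8)
```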

The Cartesian Theatre and the Homunculus Problem

This "picture-in-the-head" or "Cartesian theatre" concept of visual representation has been criticized on the grounds that there would have to be a miniature observer to view this miniature internal scene, resulting in an infinite regress of observers within observers. However this argument is invalid, for there is no need for an internal observer of the scene, since the internal representation is simply a data structure like any other data in a computer, except that this data is expressed in spatial form. If the existence of a spatial data structure required a homunculus to view it, the same objection would also apply to symbolic or verbal information in the brain, which would also require a homunculus to read or interpret that data. In fact any information encoded in the brain needs only to be available to other internal processes rather than to a miniature copy of the whole brain. To deny the spatial nature of the perceptual representation is to deny the spatial nature so clearly evident in the world we perceive around us. To paraphrase Descartes, it is not only the existence of myself that is verified by the fact that I think, but when I experience the vivid spatial presence of objects in the phenomenal world, those objects are certain to exist, at least in the form of a subjective experience, with properties as I experience them to have, i.e. location, spatial extension, color, and shape. I think them, therefore they exist. All that remains uncertain is whether those percepts exist also as objective external objects as well as internal perceptual ones, and whether their perceived properties correspond to objective properties. But their existence in my internal perceptual world is beyond question if I experience them, even if only as a hallucination.

The Neuroreductionist Objection

A number of theorists have proposed (Dennett 1991, 1992, O'Regan 1992, Pessoa et al. 1998) that consciousness is an illusion, and that in fact the conscious experience is considerably more impoverished than it appears subjectively. For example the loss of resolution in peripheral vision is not immediately apparent to the naïve observer. However the objective of perceptual modeling is not to quantify the casual experience of the naïve observer, but to capture the careful observation of the critical observer. For the loss of acuity in peripheral vision is plainly evident under phenomenological observation, and can be easily verified psychophysically, and therefore this should also be reflected in the perceptual model.

Dennett argues that visual information need not be encoded explicitly in the brain, but merely implicitly in some kind of compressed representation. For example the percept of a surface with uniform color could be abbreviated to a kind of edge image, with a single value to encode the color of the whole surface, as is the practice in image compression algorithms. This notion appears to be supported by neurophysiological studies of the retina, which show that ganglion cells respond only to spatial or temporal discontinuities of the brightness profile, with no response within regions of uniform color or brightness. Dennett argues that the experience of a filled-in field of color in uniform fields, and in the blind spot, does not suggest an explicit filling-in mechanism in the brain, but that the color experience is encoded by "ignoring an absence" (Dennett 1991, 1992). However an absence can only be ignored from a representation that already contains something in the place of the ignored item, otherwise one would experience nothing at all, rather than a spatially continuous field of color. In fact the experience of the retinal blind spot, or of a uniformly colored surface, produces a distinct color experience at every point throughout the colored region, as a spatial continuum down to a particular spatial resolution, and the informational content of that experience is greater than that in a compressed representation. If it is true that the retinal image encodes only brightness transitions at visual boundaries, then some other mechanism higher up in the processing stream must perform an explicit filling-in to account for the subjective experience of the filled-in surface. In fact the many illusory filling-in phenomena such as the Kanizsa illusion implicate exactly this kind of mechanism in perception.

If it were sufficient for the brain to encode visual information only implicitly in some kind of compressed code, then there would be no need to posit any perceptual processing beyond the retina, because the retina already contains an implicit representation of all of the information in the visual scene. If visual information were indeed expressed in a compressed neurophysiological code, then our subjective experience of that information would have to be correspondingly compressed or abstracted, as is the case for example with the experience of a remembered or imagined scene. The fact that our phenomenal experience is of a filled-in volumetric world is direct and concrete evidence for a volumetric filling-in mechanism in the brain.
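
The filling-in mechanism implicated here can be illustrated with a toy sketch, under the assumption (mine, for illustration only) that filling-in operates like diffusion from edge signals: boundary values are clamped and spread into the interior until every point carries an explicit value, in contrast to a compressed edge-only code.

```python
# A toy sketch of explicit filling-in by diffusion in one dimension.
# Edge signals carry color values; interior points iteratively relax
# toward the mean of their neighbors until the region between the
# edges is filled with an explicit value at every point.
import numpy as np

def fill_in(edges: dict[int, float], size: int, iters: int = 500) -> np.ndarray:
    surface = np.zeros(size)
    for _ in range(iters):
        # Diffusion step: each interior point averages its neighbors.
        surface[1:-1] = 0.5 * (surface[:-2] + surface[2:])
        # Edge signals are clamped, acting as sources for the diffusion.
        for pos, value in edges.items():
            surface[pos] = value
    return surface

# Two edge signals bracketing a region of uniform brightness 1.0:
filled = fill_in({10: 1.0, 40: 1.0}, size=50)
# Every point between the edges now holds an explicit value,
# not merely an implicitly "ignored absence".
```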

An Analogical Paradigm of Representation

Once we recognize the world of experience for what it really is, it becomes clearly evident that the representational strategy used by the brain is an analogical one. In other words, objects and surfaces are represented in the brain not by an abstract symbolic code, as suggested in the propositional paradigm, nor are they encoded by the activation of individual cells or groups of cells representing particular features detected in the scene, as suggested in the neural network or feature detection paradigm. Instead, objects are represented in the brain by constructing full spatial effigies of them that appear to us for all the world like the objects themselves, though they seem so only because we have never seen those objects in their raw form, but only through our perceptual representations of them. Indeed the only reason why this very obvious fact of perception has been so often overlooked is because the illusion is so compelling that we tend to mistake the world of perception for the real world of which it is merely a copy. This is a classic case of not seeing the forest for the trees, for the evidence for the nature of perceptual representation in the brain has been right before us all along, cleverly disguised as objects and surfaces in a virtual world that we take to be reality. So for example when I stand before a table, the light reflected from that table into my eye produces an image on my retina, but my conscious experience of that table is not of a flat two-dimensional image, but rather my brain fabricates a three-dimensional replica of that table carefully tailored to exactly match the retinal image, and presents that replica in an internal perceptual space that includes a model of my environment around me, and a copy of my own body at the center of that environment. The model table is located in the same relation to the model of my body as the real table is to my real body in external space. The perception or consciousness of the table therefore is identically equal to the appearance of the effigy of the table in my perceptual representation, and the experience of that internal effigy is the closest I can ever come to having the experience of the physical table itself.

The Function of Conscious Experience

There is much discussion in philosophy about the possible function of conscious experience, and whether it is an epiphenomenon that has no direct functional value. The issue is highlighted by the notion of the hypothetical "zombie", whose behavior as observed externally is identical to that of normal people, except that this zombie supposedly lacks all conscious experience. This notion sounds very peculiar from the indirect realist perspective. For once we accept that the world which appears to be external to our bodies is in fact an internal data structure in our physical brain, the notion of the zombie as proposed becomes a contradiction in terms. For a zombie that does not possess an internal picture of the world around it could not possibly walk about in the world avoiding obstacles as we do. Without a conscious memory of where it had just been, and a conscious intent of where it would like to go next, the zombie would behave much as we do when we are in an unconscious state, i.e. it would lie inert and immobile, with neither the incentive nor the capacity for action.

The notion of this kind of zombie presupposes a distinction between the structural aspects of the perceived world, which are supposedly a reflection of the objective spatial properties of the world, and the subjective qualia with which those perceptual structures are somehow painted or clothed. This harks back to an old distinction in psychology between the primary and secondary qualities of perception. Immanuel Kant (1781/1991) argued however that the perception of space and time are themselves a priori intuitions, i.e. they are a kind of quale used by the mind to express the structure of external reality. Therefore the fact that the world of experience appears as a volumetric spatial structure is itself an aspect of conscious experience, rather than a veridical manifestation of the true nature of the external world. The phenomenon of hemi-neglect demonstrates that portions of perceived space can completely disappear from consciousness, making it impossible to form either mental or perceptual imagery in that portion of space. It is not just the objects in that space that become invisible, but the very space itself as a potential holder of objects that ceases to exist. This condition clearly indicates the reality of an explicit spatial representation in the brain.

The notion of the hypothetical zombie therefore is impossible in principle, because it is impossible to have any perceptual experience in the absence of some subjective qualia by which that experience is expressed. For qualia are the carriers of the information experienced in perception (Rosenberg 1999), just as electromagnetic waves are the carriers of radio and television signals. Again, information theory can help clarify the central role of qualia in perceptual representation. For information is defined independently of the physical medium by which it is carried, whether it be electromagnetic radiation, electrical voltages on a wire, or characters on a printed page, etc. However in every case there must be some physical medium to carry that information, for it is impossible for information to exist without a physical carrier of some kind. A similar principle holds on the subjective side of the mind/brain barrier, where the information encoded in perceptual experience is carried by modulations of some subjective quale, whether it be variations of hue, brightness, saturation, pitch, heat or cold, pleasure or pain, etc. The notion of experience without qualia to support it is as impossible as the notion of information without any physical medium or mechanism to carry that information. The zombie argument therefore is circular, for it presupposes the possibility of behavior in the absence of experience in order to demonstrate that behavior and experience are theoretically separable.

The functional purpose of conscious experience therefore is to provide an internal replica of the external world in order to guide our behavior through the world, for otherwise we would have no knowledge of the structure of the world, or of our location within it. Exactly how behavior is guided by conscious experience can also be determined by phenomenological observation. What that observation reveals is an analogical paradigm of behavioral computation that is quite unlike the analytical symbolic paradigm of computation embodied in the digital computer. In order to illustrate the functional principle behind this unique computational strategy I will present a spatial analogy that operates on the same essential principle as human behavioral computation, although in a much simplified form. I will then present the phenomenological evidence that implicates that same principle of spatial computation in human behavior.

The Plotting Room Analogy

During the Battle of Britain in the Second World War, Britain's Fighter Command used a plotting room as a central clearing house for assembling information on both incoming German bombers, and defending British fighters, gathered from a variety of diverse sources. A chain of radar stations set up along the coast would detect the range, bearing, and altitude of invading bomber formations, and this information was continually communicated to the Fighter Command plotting room. British fighter squadrons sent up to attack the bombers reported their own position and altitude by radio, and squadrons on the ground telephoned in their strength and state of readiness. Additional information was provided by the Observer Corps, from positions throughout the British Isles. The Observer Corps would report friendly or hostile aircraft in their area that were either observed visually, or detected by sound with the aid of large acoustical dishes. Additional information was gathered by triangulating the radio transmissions from friendly and hostile aircraft, using radio direction finding equipment. All of this information was transmitted to the central plotting room, where it was collated, verified, and cross-checked, before being presented to controllers to help them organize the defense. The information was presented in the plotting room in graphical form, on a large table map viewed by controllers from a balcony above. Symbolic tokens representing the position, strength, and altitudes of friendly and hostile formations were moved about on the map by WAAFs (Women's Auxiliary Air Force personnel) equipped with croupier's rakes, in order to maintain an up-to-date graphical depiction of the battle as it unfolded.

The symbols representing aircraft on the plotting room map did not distinguish between aircraft detected by radar as opposed to those sighted visually or detected acoustically, because the sensory source of the data was irrelevant to the function of the plotting room. The same token was therefore used to represent a formation of bombers as it was detected initially by radar, then tracked by visual and acoustical observation, and finally confirmed by radio reports from the fighter squadrons sent out to intercept it. The functional principle behind this concept of plotting information is directly analogous to the strategy used for perceptual representation in the brain.

From Perception to Behavior

Now the plotting room analogy diverges from perception in that the plotting room does indeed have a "homunculus" or homunculi, in the form of the plotting room controllers, who issue orders to their fighter squadrons based on their observations of the plotting room map. However the idea of a central clearinghouse for assembling sensory information from a diverse array of sensory sources in a unified representation is just as useful for an automated system as it is for one designed for human operators. The automated system need only be equipped with the appropriate spatial algorithms to make use of that spatial data. The fact that a spatial or analogical form was chosen for the plotting room suggests that this form of information is more easily processed by the human mind than a more symbolic or abstracted representation, which in turn suggests that the mind employs a spatial computational strategy. In order to clarify the meaning of a spatial algorithm that operates on spatial data, I will describe a hypothetical mechanism designed to replace the human controllers in the Fighter Command plotting room. The general principle of operation of that mechanism, I propose, reflects the principle behind human perception and how it relates to behavior.

Let us consider first a mechanism to command the fighter squadrons to take off when the enemy bombers approach the outer limits of their operational range. To achieve this, every fighter squadron token on the plotting room map could be equipped with a circular field of interest centered on its current location, like a large circular plate wired to respond to the presence of enemy bomber tokens within the circumference of that circle. If an enemy formation enters this circular field, the squadron is automatically issued orders to take off. Once airborne, the squadron should be directed to close with the enemy formation. This objective could be expressed in the plotting room model as a force of attraction, like a magnetic or electrostatic force, that pulls the fighter squadron token in the direction of the approaching bomber formation token on the plotting room map. However the token cannot move directly in response to that force. Instead, that attractive force is automatically translated into instructions for the squadron to fly in the direction indicated by that attractive force, and the force is only relieved or satisfied as the radio, radar, and Observer Corps reports confirm the actual movement of the squadron in the desired direction. That movement is then reflected in the movement of its token on the plotting room map. The force of attraction between the squadron token and that of the bomber formation in the plotting room model represents an analogical computational strategy or algorithm, designed to convert a perceptual representation, the spatial model, into a behavioral response, represented by the command for the squadron to fly in the direction indicated by the force of attraction. The feedback loop between the perceived environment and the behavioral response that it provokes is mediated through actual behavior in the external world, as reflected in sensory or "somatosensory" confirmation of that behavior back in the perceptual model.
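
The take-off trigger and the force-to-command translation described above might be sketched as follows; all names and units here are hypothetical, and the point is only the principle that the attractive force issues a flight command rather than moving the token directly, with sensory reports closing the loop.

```python
# A toy sketch of the plotting-room mechanism: a squadron token with
# a circular "field of interest" that triggers take-off, and an
# attractive force translated into a heading command. The token only
# moves when sensory reports confirm the squadron's actual motion.
import math

class Token:
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y

class SquadronToken(Token):
    SCRAMBLE_RADIUS = 50.0  # hypothetical field-of-interest radius

    def __init__(self, x: float, y: float):
        super().__init__(x, y)
        self.airborne = False

    def update(self, bomber: Token) -> str | None:
        dx, dy = bomber.x - self.x, bomber.y - self.y
        dist = math.hypot(dx, dy)
        if not self.airborne:
            if dist < self.SCRAMBLE_RADIUS:
                self.airborne = True
                return "TAKE OFF"
            return None
        # The attractive force is issued as a flight command, not as a
        # direct movement of the token (compass bearing, y = north).
        heading = math.degrees(math.atan2(dx, dy)) % 360
        return f"FLY HEADING {heading:03.0f}"

    def report_position(self, x: float, y: float) -> None:
        # Radio/radar/Observer Corps confirmation closes the feedback
        # loop by moving the token on the map.
        self.x, self.y = x, y
```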

The spatial model of the battle on the plotting room map represents the best guess, based on sensory evidence, of the actual configuration of the forces in the real world outside. Therefore when a formation of aircraft is believed to be in motion, its token is advanced automatically based on its estimated speed and direction, even in the absence of direct reports, to produce a running estimate of its location at all times. To demonstrate the power of this kind of computational strategy, let us delve a little deeper into the plotting room analogy, and refine the mechanism to show how it can be designed to be somewhat more intelligent.

When intercepting a moving target such as a bomber formation in flight, it is best to approach it not directly, but with a certain amount of "lead", just as a marksman leads a moving target by aiming for a point slightly ahead of it. Therefore the bomber formation is best intercepted by approaching the point towards which it appears to be headed. This too can be calculated with a spatial algorithm, by using the recent history of the motion of the bomber formation to produce a "leading token" placed in front of the moving bomber token in the direction that it appears to be moving, advanced by a distance proportional to the estimated speed of the bomber formation. The leading token therefore represents the presumed future position of the moving formation a certain interval of time into the future. The fighter squadron token should therefore be designed to be attracted to this leading token, rather than to the token representing the present position of the bomber formation itself.

But in the real situation the invading bombers would often change course in order to throw off the defense. It was important therefore to try to anticipate likely target areas, and to position the defending fighters between the bombers and their likely objectives. This behavior could be achieved by marking likely target areas, such as industrial cities, airports, or factories, with a weaker attractive force to draw friendly fighter squadron tokens towards them. This force, in conjunction with the stronger attraction to the hostile bombers, will induce the fighters to position themselves between the approaching bombers and their possible targets, or to deviate their course towards those potential targets on their way to the attacking bombers, and then to approach the bombers from that direction.

Fighter squadrons could also be designed to exert an influence on one another. For example if it is desired for individual squadrons to accumulate into larger formations before engaging the bomber streams (the "big wing" strategy favored by Wing Commander Douglas Bader), the individual fighter squadron tokens could be equipped with a mutually attractive force, which will tend to pull different squadrons towards each other on their way to the bomber formations whenever convenient, tending to make them coalesce into larger clumps. If on the other hand it is desired to distribute the fighters more uniformly across the enemy formations, the fighter squadron tokens could be given a mutually repulsive force, which would tend to keep them spread out to cover more territory defensively. Additional forces or influences can be added to produce even more complex behavior. For example as a fighter squadron begins to exhaust its fuel and/or ammunition, its behavior pattern should be inverted, to produce a force of repulsion from enemy formations, and attraction back towards its home base, to induce it to refuel and re-arm at the nearest opportunity. With this kind of mechanism in place, fighter squadrons would be automatically commanded to take off, approach the enemy, attack, and return to base, all without human intervention.
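
A minimal sketch of this force summation, with hypothetical names and strengths: a leading token computed by dead reckoning, and a net force obtained by summing attractive and repulsive influences in parallel, so that adding a constraint is just appending another influence.

```python
# Sketch of the analogical force computation: a leading token from
# dead reckoning, plus a parallel sum of attractive and repulsive
# influences acting on a fighter squadron token.
import numpy as np

def leading_token(pos: np.ndarray, velocity: np.ndarray,
                  lead_time: float = 60.0) -> np.ndarray:
    # Presumed future position of a moving formation.
    return pos + velocity * lead_time

def net_force(fighter: np.ndarray,
              attractors: list[tuple[np.ndarray, float]],
              repellers: list[tuple[np.ndarray, float]]) -> np.ndarray:
    """Sum every field influence in parallel; adding a constraint is
    just appending another (position, strength) pair."""
    force = np.zeros(2)
    for pos, strength in attractors:
        d = pos - fighter
        force += strength * d / (np.linalg.norm(d) + 1e-9)   # constant pull
    for pos, strength in repellers:
        d = fighter - pos
        force += strength * d / (np.linalg.norm(d) ** 3 + 1e-9)  # 1/r^2 push
    return force

bombers = leading_token(np.array([100.0, 200.0]), np.array([-1.0, -2.0]))
target_city = np.array([50.0, 20.0])    # weaker attraction to likely target
flak_zone = np.array([80.0, 120.0])     # repulsion from friendly flak
heading_force = net_force(np.array([60.0, 40.0]),
                          attractors=[(bombers, 1.0), (target_city, 0.3)],
                          repellers=[(flak_zone, 500.0)])
```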

The mechanism described above is of course rather primitive, and would need a good deal of refinement to be at all practical, to say nothing of the difficulties involved in building and maintaining a dynamic analog model equipped with spatial field-like forces. But the computational principle demonstrated by this fanciful analogy is very powerful. For it represents a parallel analogical spatial computation that takes place in a spatial medium, a concept that is quite unlike the paradigm of digital computation, whose operational principles are discrete, symbolic, and sequential. There are several significant advantages to this style of computation. For unlike the digital decision sequence with its complex chains of Boolean logic, the analogical computation can be easily modified by inserting additional constraints into the model. For example if the fighters were required to avoid areas of intense friendly anti-aircraft activity, this additional constraint can be added to the system by simply marking those regions with a repulsive force that will tend to push the fighter squadron tokens away from those regions without interfering with their other spatial constraints. Since the proposed mechanism is parallel and analog in nature, any number of additional spatial constraints can be imposed on the system in similar manner, and each fighter squadron token automatically responds to the sum total of all of the analog forces acting on it in parallel. In an equivalent Boolean system, every additional constraint added after the fact would require re-examination of every Boolean decision in the system, each of which would have to be modified to accommodate every combination of possible contingencies. In other words adding or removing constraints after the fact in a Boolean logic system is an error-prone and time-consuming business requiring the attention of an intelligent programmer, whereas in the analogical representation spatial constraints are relatively easy to manipulate independently, while the final behavior automatically takes account of all of those spatial influences simultaneously. The analogical paradigm also permits behavior that is governed by extended field-like influences in a manner that is awkward to emulate in the Boolean paradigm. For example a pilot in combat is naturally reluctant to fly out over water, where rescue would be much more difficult should he be forced to bail out. The surface of the ocean could therefore be endowed with an extended continuous field of mild repulsive force, to induce the fighter squadrons to stay over land whenever that does not interfere with their primary mission.
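
Such an extended field-like influence might look like the following sketch, under the arbitrary assumption that the coastline is the line y = 0 with sea below it; the ocean's mild repulsion is added without touching any other constraint in the system.

```python
# Sketch of an extended continuous field constraint (assumption:
# coastline at y = 0, sea for y < 0): a mild repulsion that pushes
# tokens back over land, composable with all other field influences.
import numpy as np

def ocean_repulsion(pos: np.ndarray, strength: float = 0.2) -> np.ndarray:
    if pos[1] >= 0.0:                # over land: no influence at all
        return np.zeros(2)
    # Force grows smoothly the further the token strays out to sea.
    return np.array([0.0, strength * -pos[1]])
```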

Analogical vs. Sequential Logic

The analogical and discrete paradigms of computation have very different characters. The Boolean sequential logic system is characterized by a jerky robotic kind of behavior, due to the sharp decision thresholds and discrete on/off nature of the computation. The analogical system on the other hand exhibits a kind of smooth interpolated motion characteristic of biological behavior. Of course a digital system can be contrived to emulate an analogical one (as is true for the converse also), and indeed computer simulations of weather systems, aircraft in flight, and other analog physical systems offer examples of how this can be done. But perhaps the greatest advantage of the analogical paradigm is that it suffers no degradation in performance as the system is scaled up to include hundreds or thousands of spatial constraints simultaneously, whereas the digital simulation gets bogged down easily, because those constraints must be handled in sequence in long chains of Boolean logic. The analogical paradigm is therefore particularly advantageous for the most complex computational problems that require simultaneous consideration of innumerable factors, where the digital sequential algorithm becomes intractable. It is also advantageous in problems involving extended fields of influence, such as seeking an optimal path through irregular terrain.

There are however cases in which a Boolean or sequential component is required in a control system, for example if a squadron is required to proceed to a point B by way of an intermediate point A. This kind of sequential logic can be incorporated in the analogical representation by installing an attractive force to point A that remains active only until the squadron token arrives there, at which point that force is turned off, and an attractive force is applied to point B instead. Or perhaps the attractive force can fade out gradually at point A in analog fashion as the squadron token approaches, while a new force fades in at point B, allowing the squadron to cut the corner with a smooth curving trajectory instead of a sharp turn, or to adapt the curve of its turn to account for other spatial influences acting on it at that time. In other words the analogical control system can be designed to incorporate Boolean or sequential decision sequences within it, turning the analog forces on and off in logical sequence, although the primitives, or elements of that sequential logic, are built up out of analogical force-field elements. A similar logical decision process would be required for a squadron to select its target. For if a squadron token were to experience an equal attraction to two or more bomber formations simultaneously, that would cause it to intercept some point between them. Therefore the squadron token should be designed to select one bomber formation token from the rest, and then feel an attractive force to that one exclusively. The analogical paradigm therefore can be designed to subsume digital or sequential functions, while maintaining the basic analogical nature of the elements of that logic, thereby preserving the advantages of a parallel decision strategy within sequentially ordered stages of processing.
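
The fading hand-over between waypoints might be sketched as follows (hypothetical names and radii): the pull of point A weakens continuously as the token approaches, while the pull of point B grows, yielding a smooth cornering trajectory rather than a sharp Boolean switch.

```python
# Sketch of sequential logic built from analog primitives: the
# attraction to waypoint A fades out as the token closes on A, while
# the attraction to waypoint B fades in, so the token cuts the
# corner smoothly instead of switching abruptly.
import numpy as np

def waypoint_force(token: np.ndarray, a: np.ndarray, b: np.ndarray,
                   fade_radius: float = 20.0) -> np.ndarray:
    da, db = a - token, b - token
    # Weight of A's pull drops continuously inside the fade radius.
    w_a = min(1.0, float(np.linalg.norm(da)) / fade_radius)
    w_b = 1.0 - w_a
    unit = lambda v: v / (np.linalg.norm(v) + 1e-9)
    return w_a * unit(da) + w_b * unit(db)
```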

Internal vs. External Representation

The analogical spatial strategy presented above is reminiscent of the kind of computation suggested by Braitenberg (1984) in his book Vehicles. Braitenberg describes very simple vehicles that exhibit a kind of animal-like behavior by way of very simple analog control systems. For example Braitenberg describes a light-powered vehicle equipped with two photocells connected to two electric motors that power two driving wheels. In the presence of light, the current from the photocells drives the vehicle forward, but if the light distribution is non-uniform and one photocell receives more light than the other, the vehicle will turn either towards or away from the light, depending on how the photocells are wired to the wheels. One configuration produces a vehicle that exhibits light-seeking behavior, like a moth around a candle flame, whereas with the wires reversed the same vehicle will exhibit light-avoiding behavior, like a cockroach scurrying for cover when the lights come on. The behavior of these simple vehicles is governed by the spatial field defined by the intensity profile of the ambient light, and therefore, like the analogical paradigm, this type of vehicle also performs a spatial computation in a spatial medium. However in the case of Braitenberg's vehicles, the spatial medium is the external world itself, rather than an internal replica of it. Rodney Brooks (1991) elevates this concept to a general principle of robotics, whose objective is "intelligence without representation". Brooks argues that there is no need for a robotic vehicle to possess an internal replica of the external world, because the world can serve as a representation of itself. O'Regan (1992) extends this argument to human perception, and insists that the brain does not maintain an internal model of the external world, because the world itself can be accessed as if it were an internal memory, except that it happens to be external to the organism. On this view, information can be extracted directly from the world whenever needed, just like a data access of an internal memory store.
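
A minimal sketch of the vehicle Braitenberg describes, with the wiring polarity as the single parameter: uncrossed connections yield light-avoiding behavior, crossed connections yield light-seeking behavior.

```python
# A minimal sketch of a Braitenberg vehicle: two photocells drive
# two wheels, and the wiring polarity determines the behavior.
def motor_speeds(left_light: float, right_light: float,
                 crossed: bool) -> tuple[float, float]:
    if crossed:
        # Each photocell drives the opposite wheel: the vehicle turns
        # toward the brighter side (light-seeking, like the moth).
        return right_light, left_light
    # Each photocell drives the wheel on its own side: the vehicle
    # turns away from the light (light-avoiding, like the cockroach).
    return left_light, right_light

# With crossed wiring and more light on the right, the left wheel
# spins faster, steering the vehicle toward the light:
left, right = motor_speeds(left_light=0.2, right_light=0.9, crossed=True)
```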

However there is a fundamental flaw with this concept of perceptual processing, at least as a description of human perception. For unless we invoke mystical processes beyond the bounds of science, surely our conscious experience of the world must be limited to that which is explicitly represented in the physical brain. In the case of Braitenberg's vehicles, that consciousness would correspond to the experience of only two values, i.e. the brightness detected by the two photocells, and the conscious decision-making processes of the vehicle (if it can be called such) would be restricted to responding to those two values with two corresponding motor signals. These four values therefore represent the maximum possible content of the vehicle's "conscious experience". The vehicle has no idea of its location or orientation in space, and its complex spatial behavior is more a property of the world around it than of anything going on in its "brain". In the case of human perception, our consciousness would be restricted to a sequence of two-dimensional images, as recorded by the retina, or pairs of images in the binocular case. However our experience is very different from the retinal representation. For when we stand in a space, like a room, we experience the volume of the room around us as a simultaneously present whole, every volumetric point of which exists as a separate parallel entity in our conscious experience, even in monocular viewing. Braitenberg's vehicles can be programmed to go to the center of a room by placing a light at that location, but the vehicle cannot conceive of the void of the room around it or the concept of its center, for those are spatial concepts that require a spatial understanding. We on the other hand can see the walls, floor, and ceiling of a room around us simultaneously, embedded in a perceived space, and we can conceptualize any point in the space of that room in three dimensions without having to actually move there ourselves. We can program ourselves to follow a wall at a certain distance, or to walk along the center of a path or corridor, or to pick a path of least resistance through irregular terrain, taking account simultaneously of every region of rough and smooth ground. The world of visual experience therefore clearly demonstrates that we possess an internal map of external space like the Fighter Command plotting room, and the world we see around us is exactly that internal representation.

Symbol Grounding by Spatial Analogy

The analogical spatial paradigm offers a solution to some of the most enduring and troublesome problems of perception. For although the construction and maintenance of a spatial model of external reality is a formidable computational challenge, the rewards that it offers make the effort very much worth the trouble. The greatest difficulty with a more abstracted or symbolic approach to perception has always been the question of how to make use of that abstracted knowledge. This issue was known as the symbol grounding problem (Harnad 1990) in the propositional paradigm of representation promoted by the Artificial Intelligence (AI) movement. The problem of vision, as conceptualized in AI, involves a transformation of the two-dimensional visual input into a propositional or symbolic representation. For example an image of a street scene would be decomposed into a list of items recognized in that scene, such as "street", "car", "person", etc., as well as the relations between those items. Each of these symbolic tags or labels is linked to the region of the input image to which it pertains. The two-dimensional image is thereby carved up into a mosaic of distinct regions, by a process of segmentation (Ballard & Brown 1982, pp. 6-12), each region being linked to the symbolic label by which it is identified. Setting aside the practical issues of how such a system can be made to work as intended (which itself turns out to be a formidable problem), this manner of representing world information is difficult to translate into practical interaction with the world. For the algorithm does not "see" the street in the input image as we do, but rather it sees only a two-dimensional mosaic of irregular patches connected to symbolic labels. Consider the problem faced by a robotic vehicle designed to find a mail box on the street and post a letter in it. Even if an image region is identified as a mail box, it is hard to imagine how that information could be used by the robot to navigate down the street to the mail box avoiding obstacles along the way. What is prominently absent from this system is a three-dimensional consciousness of the street as a spatial structure, the very information that is so essential for practical navigation through the world. A similar problem is seen in the feature detection paradigm, which suggests a similar decomposition of the input image into an abstracted symbolic representation.

An analogical representation of the street on the other hand would involve a three-dimensional spatial model, like a painted cardboard replica of the street complete with a model of the robot's own body at the center of the scene. It is the presence of such a three-dimensional replica of the world in an internal model that, I propose, constitutes the act of "seeing" the street. Setting aside the issue of how such a model can be constructed from the two-dimensional image (which is also a formidable problem), making practical use of such a representation is much easier than for a symbolic or abstracted representation. For once the mailbox effigy in the model is recognized as such, it can be marked with an attractive force, and that force in turn draws the effigy of the robot's body towards the effigy of the mailbox in the spatial model. Obstacles along the way are marked with negative fields of influence, and the spatial algorithm to get to the mailbox is to follow the fields of force, like a charged particle responding to a pattern of electric fields.
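
A sketch of this field-following navigation, with hypothetical positions and field strengths: the mailbox effigy exerts an attraction, obstacles exert inverse-square repulsions, and the robot effigy simply follows the summed field like a charged particle.

```python
# Sketch of navigation by field-following in the internal model:
# the mailbox effigy carries an attractive charge, obstacles carry
# repulsive charges, and the robot effigy descends the summed field.
import numpy as np

def step(robot: np.ndarray, goal: np.ndarray,
         obstacles: list[np.ndarray], dt: float = 0.1) -> np.ndarray:
    force = goal - robot                      # attraction to the mailbox
    for obs in obstacles:
        d = robot - obs
        force += 5.0 * d / (np.linalg.norm(d) ** 3 + 1e-9)  # 1/r^2 repulsion
    return robot + dt * force / (np.linalg.norm(force) + 1e-9)

robot = np.array([0.0, 0.0])
mailbox = np.array([10.0, 0.0])
lamppost = [np.array([5.0, 0.3])]
for _ in range(200):                          # weaves past the obstacle
    robot = step(robot, mailbox, lamppost)
```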

The analogical paradigm can also be employed to compute the more detailed control signals to the robot's wheels. The forward force on the model of the robot's body applies a torque to the model wheels, but the model wheels cannot respond to that force directly. Instead, that torque in the model is interpreted as a motor command to the wheels of the larger robot to turn, and as the larger wheels begin to turn in response to that command, that turning is duplicated in the turning of the model wheels, producing behavior as if responding directly to the original force in the model world. Side forces to steer the robot around obstacles can also be computed in similar fashion. A side force on the model robot should be interpreted as a steering torque, like the torque on the pivot of a caster wheel. That pivoting torque in the model is interpreted as a steering command to pivot the larger wheels, and the steering of the larger wheels is then reflected in the steering of the model wheels also. The forces impelling the model robot through the model world are thereby transformed into motor commands to navigate the real robot through the real world. Obstacles in the real world that might block the larger wheels from turning or pivoting as commanded, would prevent their smaller replicas from turning also, thereby communicating the constraints of the external world back into the internal model.

Unlike Braitenberg's vehicles, this robot has a spatial "consciousness" or awareness of the structure of the world around it, for it can feel the simultaneous influence of every visible surface in the scene, which jointly influence its motor behavior. For example the robot navigating between obstacles in its path feels the repulsive influence of all of them simultaneously, and is thereby induced to take the path of least resistance weaving between them like a skier on a slalom course, on the way to the attractive target point. Our own conscious experience clearly has this spatial property, for we are constantly aware of the distance to every visible object or surface in our visual world simultaneously, and we can voluntarily control our position in relation to those surfaces, although what we are actually "seeing" is an internal replica of the world rather than the world itself. This is not the whole story of consciousness, for there remains a deeper philosophical issue with regard to the ultimate nature of conscious experience, or what Chalmers (1995) refers to as the "hard problem" of consciousness, which in this case relates to the question of how the presence of a spatial model of any sort in a human or robot's brain could lead to a subjective experience of that internal model. However the analogical paradigm addresses the functional aspect, or the "easy problem" of consciousness, by clarifying the functional role of conscious experience, and how it serves to influence behavior.
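
The translation of model forces into wheel commands might be sketched as follows, assuming for illustration a differential-drive robot: the forward component of the force becomes a drive command, and the side component becomes a steering command expressed as a speed difference across the wheels.

```python
# Sketch of translating a force on the robot's effigy into wheel
# commands (assumed differential-drive robot): the forward component
# becomes a drive command, the side component a steering command.
import numpy as np

def wheel_commands(force: np.ndarray, heading: float) -> tuple[float, float]:
    fwd = np.array([np.cos(heading), np.sin(heading)])
    side = np.array([-np.sin(heading), np.cos(heading)])   # leftward
    drive = float(force @ fwd)    # torque-like drive command
    steer = float(force @ side)   # caster-pivot-like steering command
    # A leftward force speeds the right wheel, turning the robot left;
    # confirmed wheel motion would then be copied back into the model.
    return drive - steer, drive + steer   # (left wheel, right wheel)
```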

The idea of motor planning as a spatial computation has been proposed in field theories of motor control (Gibson & Crooks 1938, Koffka 1935, Lewin 1969), in which the intention to walk towards a particular objective in space is expressed as a field-like force of attraction, or valence, between a model of the body, and a model of the target, expressed in a spatial model of the local environment. The target is marked with a positive valence, while obstacles along the way are marked with negative valence. When we see an attractive stimulus, for example a tempting delicacy in a shop window at a time when we happen to be hungry, our subjective impression of being physically drawn towards that stimulus is not only metaphorically true, but I propose that this subjective impression is a veridical manifestation of the mental mechanism that drives our motor response. For the complex combination of joint motions responsible for deviating our path towards the shop window are computed in spatial fashion in a spatial model of the world, exactly as we experience it to occur in subjective consciousness. Indeed the spatial configuration of the positive and negative valence fields evoked by a particular spatial environment can be inferred from observation of their effects on behavior, in the same way that the pattern of an electric field can be mapped out by its effects on moving charged particles. For example the negative valence field due to an obstacle such as a sawhorse placed on a busy sidewalk can be mapped by observing its effect on the paths of people walking by. The moving stream of humanity divides to pass around the obstacle like water flowing around a rock in a stream, in response to the negative valence field projected by that obstacle. Although the influence of this obstacle is observed in external space, the spatial field that produces that behavioral response actually occurs in the spatial models in the brains of each of the passers-by individually.

Another example of a spatial computational strategy can be formulated for the problem of targeting a multi-jointed limb, i.e. specifying the multiple angles required of the individual joints of the limb in order to direct its end-effector to a target point in three-dimensional space. This is a complex trigonometrical problem, and an underconstrained one, since in general many different joint configurations can reach the same target point. However a simple solution to this complex problem can be found by building a scale model of the multi-jointed limb in a scale model of the environment in which the limb is to operate. The joint angles required to direct the limb towards a target point can be computed by simply pulling the end-effector of the model arm in the direction of the target point in the modeled environment, and recording how the model arm reacts to that pull. Sensors installed at each joint of the model arm measure the individual joint angles, and those angles in turn serve as command signals to the corresponding joints of the actual arm to be moved. The complex trigonometrical problem of the multi-jointed limb is thereby solved by analogy, as a spatial computation in a spatial medium.
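One standard way to realize this pulling of the model arm in simulation is the Jacobian-transpose method of inverse kinematics, in which each joint feels, and yields slightly to, the torque that the pulling force exerts about it. The sketch below, a planar arm in Python with illustrative link lengths, gain, and iteration count, is offered as a minimal demonstration of the principle rather than as a proposal for the biological mechanism:

```python
import numpy as np

def joint_positions(angles, lengths):
    """Forward kinematics of a planar arm: the position of each joint,
    from the fixed base out to the end-effector."""
    pts, heading, p = [np.zeros(2)], 0.0, np.zeros(2)
    for a, l in zip(angles, lengths):
        heading += a
        p = p + l * np.array([np.cos(heading), np.sin(heading)])
        pts.append(p)
    return pts

def pull_to_target(angles, lengths, target, gain=0.02, iters=500):
    """Pull the end-effector of the model arm toward the target and let
    every joint yield to the pull: each joint feels a torque equal to the
    cross product of its lever arm with the pulling force (the Jacobian
    transpose) and rotates a little in response.
    The gain and iteration count are illustrative, not empirical."""
    angles = np.array(angles, dtype=float)
    for _ in range(iters):
        pts = joint_positions(angles, lengths)
        force = target - pts[-1]                 # the pull on the end-effector
        for j in range(len(angles)):
            lever = pts[-1] - pts[j]             # joint j to end-effector
            torque = lever[0] * force[1] - lever[1] * force[0]
            angles[j] += gain * torque           # the joint yields to the torque
    return angles                                # read off as motor commands

# Three-link model arm reaching for a point: the recorded angles could be
# sent as commands to the corresponding joints of the actual arm.
print(pull_to_target([0.1, 0.1, 0.1], [1.0, 1.0, 1.0], np.array([1.5, 1.5])))
```

Note that the underconstraint resolves itself automatically: of the many joint configurations that could reach the target, the model arm simply relaxes into the one that the pull selects, exactly as a physical scale model would.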

There is evidence to suggest that this kind of strategy is employed in biological motion. For when a person reaches for an object in space, their body tends to bend in a graceful arc whose total deflection is evenly distributed amongst the various joints to define a smooth curving posture, i.e. the motor strategy minimizes a configural cost expressed in three-dimensional space, thus implicating a spatial computational strategy. The dynamic properties of motor control are also most simply expressed in an external spatial context. For the motion of a person's hand towards a target describes a smooth arc in space and time, accelerating smoothly through the first half of the path and decelerating to a graceful stop through the second half. In other words the observed behavior is exactly as if the person's body were responding lawfully to a spatial force of attraction between the hand and the target object in three-dimensional space, which in turn suggests that a spatial computational strategy is used to achieve that result. Further evidence comes from the subjective experience of motor planning, for we are unaware of the individual joint motions when planning such a move; rather, our experience is of a force of attraction that seems to pull our hand towards the target object, while the joints of our arm simply follow the hand as it responds to that pull. This computational strategy generalizes to any configuration of limbs with any number of joints, as well as to continuous limbs like a snake's body or an elephant's trunk.
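The smooth, bell-shaped speed profile described here is conventionally captured in the motor-control literature by the minimum-jerk trajectory, and the following sketch is offered only as a worked illustration of that lawful profile, not as a computation proposed in this paper:

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=9):
    """Straight-line reach with a bell-shaped speed profile: position
    follows 10s^3 - 15s^4 + 6s^5 in normalized time s = t/T, so the
    speed rises smoothly to a single peak at the midpoint and falls
    symmetrically back to zero at the target."""
    t = np.linspace(0.0, T, n)
    s = t / T
    pos = x0 + (xf - x0) * (10*s**3 - 15*s**4 + 6*s**5)
    vel = (xf - x0) * (30*s**2 - 60*s**3 + 30*s**4) / T
    return t, pos, vel

# A one-second reach of 30 cm (illustrative values): the printed speeds
# accelerate through the first half of the movement and decelerate
# through the second, as described in the text.
for t, x, v in zip(*minimum_jerk(0.0, 0.3, 1.0)):
    print(f"t={t:.3f}s  x={x:.3f}m  v={v:.3f}m/s")
```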

Conclusion

The debate over the functional role of conscious experience has been hung up for decades on an epistemological confusion that seems to pervade much of psychology and consciousness studies, whereby the world of experience is mistaken for the real external world of which it is merely an internal replica. This naive concept of vision leads to endless confusion as to the true nature of visual processing, for it suggests a visual representation expressed in some kind of abstract symbolic code, as proposed in the propositional paradigm of computation advanced by the AI movement, or in the feature-detection paradigm inspired by single-cell recording data. This view of visual processing renders consciousness itself somehow invisible, or impossible to observe, for whenever we attempt to observe consciousness we see only that which we are conscious of. The naive realist view suggests that we can somehow experience the external world directly, beyond the sensory surface, in violation of everything we know about the causal chain of vision, and in a manner that appears impossible in principle to implement in any artificial vision system. In other words the naive realist view of perception presents consciousness as a magical, mystical entity forever beyond human comprehension.

The indirect realist view of perception is also incredible, for it suggests a three-dimensional volumetric imaging system in the brain that appears inconsistent with everything we think we know about neurophysiology. However the phenomenological evidence clearly implicates exactly such an imaging mechanism in the brain, and the evidence of phenomenology is primary, at least as reliable as any knowledge of the visual mechanism obtained from the "outside". If the evidence of phenomenology is inconsistent with contemporary concepts of neurocomputation, then it is our neurocomputational theories that are in urgent need of revision, to bring them in line with the evidence of phenomenology. The indirect realist view also reveals that consciousness is by no means an epiphenomenon lacking functional value. Instead, the function of conscious experience is to present us with an internal model of the external world, without which we would have no experience of the structure of that world, and no means of controlled interaction with it. As Bertrand Russell (1927) observed, all that we can ever see is the inside of our own head.

References

Baldwin T. (1992) The Projective Theory of Sensory Content. In: T. Crane (Ed.) The Contents of Experience: Essays on Perception. Cambridge UK: Cambridge University Press, 177-195.

Ballard D. H. & Brown C. M. (1982) Computer Vision. Englewood Cliffs NJ: Prentice-Hall.

Braitenberg V. (1984) Vehicles: Experiments in Synthetic Psychology. Cambridge MA: MIT Press.

Bridgman P. W. (1940) Science: Public or Private? Philosophy of Science 7, 36-48.

Brooks R. A. (1991) Intelligence Without Representation. Artificial Intelligence 47 (1-3), 139-159.

Chalmers D. J. (1995) Facing Up to the Problem of Consciousness. Journal of Consciousness Studies 2 (3), 200-219. Reprinted in: S. R. Hameroff, A. W. Kaszniak, & A. C. Scott (Eds.) (1996) Toward a Science of Consciousness II, The Second Tucson Discussions and Debates. Cambridge MA: MIT Press, 5-28.

Clark A. (1993) Sensory Qualities. Oxford: Clarendon Press.

Dennett D. (1988) Quining Qualia. In: A. J. Marcel & E. Bisiach (Eds.) Consciousness in Contemporary Science. Oxford: Clarendon Press, pp 42-87.

Dennett D. (1991) Consciousness Explained. Boston: Little, Brown & Co.

Dennett D. (1992) 'Filling In' Versus Finding Out: A Ubiquitous Confusion in Cognitive Science. In: H. L. Pick, Jr., P. van den Broek, & D. C. Knill (Eds.) Cognition: Conceptual and Methodological Issues. Washington DC: American Psychological Association.

Gibson J. J. & Crooks L. E. (1938) A Theoretical Field-Analysis of Automobile Driving. The American Journal of Psychology 51 (3), 453-471.

Gibson J. J. (1972) A Theory of Direct Visual Perception. In: J. R. Royce & W. W. Rozeboom (Eds.) The Psychology of Knowing. Gordon & Breach.

Harnad S. (1990) The Symbol Grounding Problem. Physica D 42, 335-346.

Harrison S. (1989) A New Visualization on the Mind-Brain Problem: Naive Realism Transcended. In: J. Smythies & J. Beloff (Eds.) The Case for Dualism. Charlottesville: University of Virginia Press.

Hoffman D. D. (1998) Visual Intelligence: How We Create What We See. New York: W. W. Norton.

Humphrey N. (1999) The Privatization of Sensation. In: S. R. Hameroff, A. W. Kaszniak, & A. C. Scott (Eds.) Toward a Science of Consciousness II, The Second Tucson Discussions and Debates. Cambridge MA: MIT Press, 247-257.

Hut P. & Shepard R. N. (1996) Turning the "Hard Problem" Upside Down and Sideways. Journal of Consciousness Studies 3, 313-329.

Kant I. (1781 / 1991) Critique of Pure Reason. Vasilis Politis (Ed.) London: Dent.

Koffka K. (1935) Principles of Gestalt Psychology. New York: Harcourt Brace & Co.

Köhler W. (1971) A Task For Philosophers. In: Mary Henle (Ed.) The Selected Papers of Wolfgang Köhler. New York: Liveright, pp 83-107.

Kuhn T. S. (1970) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

Lewin K. (1969) Principles of Topological Psychology. New York: McGraw-Hill.

O'Regan K. J. (1992) Solving the 'Real' Mysteries of Visual Perception: The World as an Outside Memory. Canadian Journal of Psychology 46, 461-488.

O'Shaughnessy B. (1980) The Will: A Dual Aspect Theory. (2 volumes) Cambridge UK: Cambridge University Press.

Pessoa L., Thompson E., & Noë A. (1998) Finding Out About Filling-In: A guide to perceptual completion for visual science and the philosophy of perception. Behavioral and Brain Sciences 21, 723-802.

Revonsuo A. (1995) Consciousness, Dreams, and Virtual Realities. Philosophical Psychology 8 (1), 35-58.

Rosenberg G. H. (1999) On the Intrinsic Nature of the Physical. In: S. R. Hameroff, A. W. Kaszniak, & A. C. Scott (Eds.) Toward a Science of Consciousness III, The Third Tucson Discussions and Debates. Cambridge MA: MIT Press, pp 33-47.

Ruch (1950) In J. F. Fulton (Ed.) Textbook of Physiology, 16th Ed. Philadelphia, p. 311. Pertinent passage quoted in Smythies (1954).

Russell B. (1927) Philosophy. New York: W. W. Norton.

Searle J. R. (1992) The Rediscovery of the Mind. Cambridge MA: MIT Press.

Shepard R. N. & Hut P. (1998) My Experience, Your Experience, and the World We Experience: Turning the Hard Problem Upside Down. In: S. R. Hameroff, A. W. Kaszniak, & A. C. Scott (Eds.) Toward a Science of Consciousness II, The Second Tucson Discussions and Debates. Cambridge MA: MIT Press, 143-148.

Smythies J. R. (1954) Analysis of Projection. British Journal for the Philosophy of Science 5, 120-133.

Smythies J. R. (1989) The Mind-Brain Problem. In: J. R. Smythies & J. Beloff (Eds) The Case For Dualism. Charlottesville: University of Virginia Press.

Smythies J. R. (1994) The Walls of Plato's Cave: the science and philosophy of brain, consciousness, and perception. Aldershot UK: Avebury.

Velmans M. (1990) Consciousness, Brain and the Physical World. Philosophical Psychology 3 (1), 77-99.