Plato's Cave: The Radar Analogy

Another Analogy: The Radar Controller

Consider an air traffic controller who communicates with aircraft while monitoring their progress on radar, as illustrated below.

[Figure: Radar Controller]

To the controller, the world consists of blips on a phosphor screen rather than of real aircraft, and he gives clearances and plans routing in the context of this miniature copy of the external world. The local coordinates of the radar screen need not correspond to the global coordinates of the outside world, and indeed by convention the top of the screen is usually mapped to north, regardless of the direction the controller faces. If the controller happens to be seated facing south, then a blip on the left side of his screen would correspond to an aircraft off to his right, as shown below.

[Figure: Aircraft being controlled]

The controller would ordinarily be oblivious to this discrepancy, and indeed it would be completely irrelevant to his normal function.
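To make this frame-of-reference point concrete, here is a minimal sketch in Python (the function name and conventions are my own illustration, not part of any real ATC system) converting a blip's true, north-up bearing into a bearing relative to the direction the controller faces:

```python
def egocentric_bearing(true_bearing_deg: float, facing_deg: float) -> float:
    """Bearing of an aircraft relative to the controller's body, given its
    true (north-up) bearing and the compass direction the controller faces.
    Result is in [-180, 180): 0 = dead ahead, +90 = to his right."""
    return (true_bearing_deg - facing_deg + 180.0) % 360.0 - 180.0

# A blip on the left edge of a north-up screen lies due west of the radar
# site (true bearing 270). For a controller seated facing south (180),
# that aircraft is 90 degrees to his right, exactly as described above.
print(egocentric_bearing(270.0, 180.0))  # 90.0
```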

The radar controller analogy reveals an important property of sensory processing. The raw radar data consists of a time series of range/azimuth measures, together with additional transponder information such as aircraft identification and altitude. This data need not be displayed in spatial form; it could just as well be presented to the controller as lists of aircraft, each with its altitude, heading, and other pertinent information. But a spatial display of the air traffic allows the controller to perform spatial calculations, such as estimating distances and angles between blips, and projecting flight paths into the future for collision avoidance. The world of the radar screen can therefore be considered a "virtual reality", an internal copy of the external world, in which objects are reproduced in correct geometric relation to one another, and the observer, the controller, is represented at the center of the space.

The raw data from the radar dish is not simply abstracted; it is first used to generate a complete spatial copy, or veridical facsimile, of the external world. By this I mean not that every object in the outside world is necessarily represented in the radar view, but that the radar model of the world is spatially complete, representing all of the space around the radar site without missing sectors. Even if there were a blind sector, shadowed for instance by nearby mountains, that region would still be mapped on the radar screen, so that blips that disappear as they enter the blind sector can be anticipated to reappear at a particular place and time on the other side. Indeed, it might even be useful to plot such invisible aircraft through the blind sector as if they were real blips, filling in the missing information from their speed and direction on entering the sector. The raw radar data contains information about the positions of individual aircraft, but gives no information about the empty spaces between them, whereas the radar screen explicitly maps that empty space as well as the objects within it. The model of the world on the radar screen is therefore an enriched representation, containing more explicit spatial information than is actually encoded in the raw sensory data. In other words, information has been added to the raw data simply by presenting it in spatial form and filling in missing information where possible, including an assumption of empty space wherever no signal is detected.

Furthermore, the spatial resolution of the radar screen may be higher than the resolution of the signal it displays, so that the smallest resolvable radar feature appears as a fuzzy blob on a large screen rather than as a fine point on a smaller one. This allows spatial calculations to be performed on the radar screen at a higher precision than the data on which they are based. This corresponds to the phenomenon of hyper-acuity [ ], whereby human perception, as measured in psychophysical tests, appears to have a higher resolution than that provided by the image on the retina.
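The plotting step described above can be sketched computationally. The following Python fragment (names, units, and data layout are assumptions for illustration, not drawn from any real radar system) maps raw range/azimuth returns into a shared north-up space, where a spatial calculation such as the distance between two blips becomes trivial:

```python
import math

def plot_return(range_nm: float, azimuth_deg: float) -> tuple[float, float]:
    """Place one raw range/azimuth return into north-up screen coordinates,
    with the radar site at the origin, +x east and +y north.
    Azimuth is measured clockwise from north, as on a compass."""
    theta = math.radians(azimuth_deg)
    return (range_nm * math.sin(theta), range_nm * math.cos(theta))

def separation_nm(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Distance between two blips -- immediate on the spatial display,
    but only implicit in the raw lists of range/azimuth measures."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

a = plot_return(30.0, 45.0)    # 30 nm out on bearing 045
b = plot_return(20.0, 120.0)   # 20 nm out on bearing 120
print(separation_nm(a, b))     # ~31.5 nm
```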
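The hyper-acuity point admits a similarly minimal sketch: the centroid of a fuzzy, sampled blob can be located to finer precision than the spacing of the samples themselves. Again, the code and values are purely illustrative:

```python
def blob_centroid(samples: dict[int, float]) -> float:
    """Centroid of a sampled one-dimensional blob, given as a mapping from
    sample position to intensity. The estimate is a weighted mean and can
    land between sample positions, i.e. at sub-sample resolution."""
    total = sum(samples.values())
    return sum(pos * val for pos, val in samples.items()) / total

# A blurred blob sampled at integer positions; its peak clearly lies
# between samples 5 and 6, and the centroid recovers that fact.
blob = {4: 0.2, 5: 0.9, 6: 1.0, 7: 0.3}
print(blob_centroid(blob))  # ~5.58 -- finer than the unit sample spacing
```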

While the radar controller analogy is not a complete model of perception, since it requires the "homunculus" of the controller himself to interpret the view on the screen, the utility of this stage of processing would be equally valid for any artificial algorithm that attempts to make sense of the radar data, since it allows spatial and temporal calculations of vectors and trajectories at a higher resolution or accuracy than that contained in the sensory input itself. This is not to say that a spatial display is the only way to manipulate the radar data computationally; indeed, a more abstracted, analytical treatment is more typical of the computer algorithms used for such purposes. I will present evidence, however, that the fully spatial approach is the one used in the brain.

The operation of constructing this internal spatial representation is not abstraction but its inverse, reification: the reconstruction of a more material or explicit representation from an abstracted one, by filling in missing information between measured data points so as to yield a single coherent spatial representation of the external world. I propose that this kind of representation is fundamental to human and animal perception, without which even the most basic interaction with the world would be impossible, or at least exceedingly clumsy. Furthermore, I propose that any higher-level abstraction would be meaningless without such a spatial facsimile, i.e. that higher-level abstractions derive their meaning from their relation to this low-level representation, without which they would be nothing more than ungrounded symbols, as meaningless as telephone numbers without a telephone network to make the connection.
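As a minimal computational reading of this filling-in, or reification, consider extrapolating an aircraft through the blind sector mentioned earlier, from its last observed position and velocity, so that the missing stretch of track is rendered as if it had been measured (an illustrative sketch, continuing the assumed units above):

```python
def dead_reckon(last_pos: tuple[float, float],
                velocity: tuple[float, float],
                dt_min: float) -> tuple[float, float]:
    """Project an aircraft's position dt_min minutes past its last observed
    fix, assuming it holds speed and heading -- the 'filling in' of the
    blind sector described in the text."""
    return (last_pos[0] + velocity[0] * dt_min,
            last_pos[1] + velocity[1] * dt_min)

# An aircraft enters the blind sector at (10, 5) doing 6 nm/min due north.
# Three minutes later the model places it at (10, 23), so the controller
# can anticipate where and when the blip will reappear.
print(dead_reckon((10.0, 5.0), (0.0, 6.0), 3.0))  # (10.0, 23.0)
```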
