The reviewer seems to suggest that general principles of neurocomputation, or paradigmatic hypotheses whose predictions are general rather than specific in nature, are not a valid topic for academic debate. On this point the reviewer is very much mistaken. Theories are validly proposed at different levels of generality, from very specific neurophysiological theories of the mechanisms of action potentials and neurotransmitter release, all the way to more general theories of computational or representational principles. In fact many published theories are at least as general and non-specific as the present model, including Selfridge's Pandemonium theory, Treisman's Feature Integration Theory, Neisser's Iconic Memory theory, Thorndyke's Schemas concept, Rosch's Prototype theory of memory, Kosslyn's theory of mental imagery, Atkinson & Shiffrin's model of memory, Gibson's theory of Direct Perception, Biederman's Geon theory, Crick's theory of quantum consciousness, Pribram's holographic theory, De Valois' Fourier theory, McClelland's Interactive Activation model, Kirkpatrick's Simulated Annealing concept, McClelland's PDP approach, and so on. While some of these models do make specific psychophysical predictions, their actual mechanisms of operation are often described in the most vague and general terms. Nevertheless, all of these theories were rightfully published even in the absence of direct neurophysiological or psychophysical evidence, or of complete mathematical specification (and sometimes both), because until the essential principles of operation of the brain have been established beyond a doubt, such theories enrich the discussion of possible principles and mechanisms of neurocomputation.

Even if such theories are ultimately rejected, they serve the invaluable purpose of supplying a reference point, or "handle", for all subsequent discussions of the issue, even if those references are cited only as examples of misguided approaches. Every unique and original concept of neurocomputation deserves to be exposed to the community, to be judged on its merits by the larger body of scientists, many of whom may have access to additional evidence unavailable to either the author or the reviewers, evidence that may help to either support or refute the proposed theory.

There is one other aspect of the present model that distinguishes it from Grossberg's model. Its computational algorithm is considerably simpler, being unrestricted by considerations of "neural plausibility"; in fact, mathematical simplicity was a prime objective in the specification of this model. This has two distinct advantages. In the first place, it provides a much clearer and simpler explanation of the essential computational principles behind Grossberg's model; given the extraordinarily dense presentation of that model, "demystifying" its essential principles would be a valuable service to the community.

Secondly, the very simplicity of the current model makes it considerably more robust than Grossberg's model, which is extremely parameter-sensitive due to the number of nonlinearities and sharp thresholds introduced in the interests of "neural plausibility". In fact, the parameters of Grossberg's model are often so sensitive that they must be tweaked and tuned differently to replicate different illusory phenomena, an issue not openly advertised in the papers presenting those models, and some of those results have proven very difficult for other researchers to replicate. By contrast, all of the simulations in the present paper were run with the same set of parameters throughout, a feat that was possible only because of the mathematical simplicity and transparency of the model.

Finally, this paper was originally submitted as Part I of a two-part paper, the second part of which delved deeper into the details of the computational properties of the model, addressing exactly the kinds of specific issues the reviewer would like to see addressed. In particular, that paper highlights various serious limitations of Grossberg's approach, demonstrating how the reciprocal feedback and emergence properties of the present approach, justified here on purely theoretical grounds, actually serve to resolve some of the problems in Grossberg's model, thereby demonstrating again the power of a Gestalt-inspired approach to perceptual modeling. It seems, however, that two-part papers are so unusual these days that they cause confusion among editorial staff. So I resubmitted Part I as a single paper, with the intention that, should it be accepted for publication, I would then submit Part II as a follow-on. I cannot submit Part II by itself, because it rests on a number of principles developed in the present paper. In case you are interested, the original two-part version of the paper is available at the following addresses:

Computational Implications of Gestalt Theory I: A Multi-Level Reciprocal Feedback (MLRF) to Model Emergence and Reification in Visual Processing
http://cns-alumni.bu.edu/~slehar/webstuff/orivar/orivar1.html

Computational Implications of Gestalt Theory II: A Directed Diffusion to Model Collinear Illusory Contour Formation
http://cns-alumni.bu.edu/~slehar/webstuff/orivar/orivar2.html

This highlights a problem I addressed in my response to the editor: the difficulty of presenting paradigmatic hypotheses. Sometimes a new idea is simply too large to fit into a single paper, and in such cases it is very difficult to get it accepted through the standard peer review process.