The experiments reported herein probe the visual cortical mechanisms that control near–far percepts in response to two-dimensional stimuli. Figural contrast is found to be a principal factor in the emergence of percepts of near versus far in pictorial stimuli, especially when stimulus duration is brief. Pictorial factors such as interposition (Experiment 1) and partial occlusion (Experiments 2 and 3) may cooperate, as generally predicted by cue combination models, or compete with contrast factors in the manner predicted by the FACADE model. In particular, if the geometrical configuration of an image favors activation of cortical bipole grouping cells, as at the top of a T-junction, then this advantage can cooperate with the contrast of the configuration to facilitate a near–far percept at a lower contrast than at an X-junction. Varying the exposure duration of the stimuli shows that the more balanced bipole competition in the X-junction case takes longer exposure times to resolve than the bipole competition in the T-junction case (Experiment 3).
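A minimal sketch of why a more balanced competition takes longer to resolve, assuming a generic pair of mutually inhibitory grouping units rather than the actual FACADE circuitry (all constants below are illustrative assumptions, not model parameters):

    # Sketch: two mutually inhibitory units race to a decision threshold.
    # The input imbalance stands in for the T-junction's bipole grouping
    # advantage; none of these constants come from FACADE.
    def time_to_resolve(input_a, input_b, inhibition=1.0, dt=0.01,
                        threshold=0.5, max_steps=100_000):
        """Steps until one unit's activity exceeds the other's by `threshold`."""
        a = b = 0.0
        for step in range(max_steps):
            da = -a + input_a - inhibition * b   # decay + excitation - inhibition
            db = -b + input_b - inhibition * a
            a = max(0.0, a + dt * da)
            b = max(0.0, b + dt * db)
            if abs(a - b) > threshold:
                return step
        return max_steps

    print("T-like (biased inputs):   ", time_to_resolve(1.0, 0.3))
    print("X-like (balanced inputs): ", time_to_resolve(1.0, 0.9))

The strongly biased ("T-like") pair crosses the decision threshold after far fewer steps than the nearly balanced ("X-like") pair, mirroring the exposure-duration effect reported in Experiment 3.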
The human urge to represent the three-dimensional world using two-dimensional pictorial representations dates back at least to Paleolithic times. Artists from ancient to modern times have struggled to understand how a few contours or color patches on a flat surface can induce mental representations of a three-dimensional scene. This article summarizes some of the recent breakthroughs in scientifically understanding how the brain sees that shed light on these struggles. These breakthroughs illustrate how various artists have intuitively understood paradoxical properties of how the brain sees, and have used that understanding to create great art. These paradoxical properties arise from how the brain forms the units of conscious visual perception; namely, representations of three-dimensional boundaries and surfaces. Boundaries and surfaces are computed in parallel cortical processing streams that obey computationally complementary properties. These streams interact at multiple levels to overcome their complementary weaknesses and to transform their complementary properties into consistent percepts. The article describes how properties of complementary consistency have guided the creation of many great works of art.
Recent neural models clarify many properties of mental imagery as part of the process whereby top-down expectations influence bottom-up visual information, and show how these expectations control visual attention. Volitional signals can transform modulatory top-down signals into suprathreshold imagery. Visual hallucinations can occur when the normal control of these volitional signals is lost.
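A toy illustration of this claim, assuming a simple additive gain model rather than any published circuit (the threshold and gain values are hypothetical):

    # Toy sketch: top-down expectations are modulatory on their own, but a
    # volitional gain signal can lift them above firing threshold, yielding
    # suprathreshold imagery even without bottom-up input.
    THRESHOLD = 1.0  # assumed firing threshold

    def cell_activity(bottom_up, top_down, volitional_gain):
        return bottom_up + volitional_gain * top_down

    priming = cell_activity(bottom_up=0.0, top_down=0.6, volitional_gain=1.0)
    imagery = cell_activity(bottom_up=0.0, top_down=0.6, volitional_gain=2.0)
    print(priming > THRESHOLD)  # False: expectation alone only primes
    print(imagery > THRESHOLD)  # True: volitional amplification yields imagery

In this sketch, losing normal control of the gain signal corresponds to the hallucination case described above.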
Neural models have proposed how short-term memory (STM) storage in working memory and long-term memory (LTM) storage and recall are linked and interact, but are realized by different mechanisms that obey different laws. The authors' data can be understood in the light of these models, which suggest that the authors may have gone too far in obscuring the differences between these processes.
Lewis proposes a “reconceptualization” of how to link the psychology and neurobiology of emotion and cognitive-emotional interactions. His main themes have, in fact, been actively and quantitatively developed in the neural modeling literature for more than 30 years. This commentary summarizes some of these themes and points to directions of particularly active current research.
Plamondon & Alimi (P&A) have unified much data on speed/accuracy trade-offs during reaching movements using a delta-lognormal form factor that describes notably neuromuscular systems. Their approach raises questions about whether a large number of systems is needed, whether they are linear, and whether the results disclose the neural design principles that control reaching behaviors. The authors admit that (sect. 6, para. 4).
Lehar's lively discussion builds on a critique of neural models of vision that is incorrect in its general and specific claims. He espouses a Gestalt perceptual approach rather than one consistent with the “objective neurophysiological state of the visual system” (target article, Abstract). Contemporary vision models realize his perceptual goals and also quantitatively explain neurophysiological and anatomical data.
Examples are given of how LTP and LTD can control adaptively timed learning that modulates attention and motor control. It is also suggested that LTP and LTD can play a role in storing memories. The distinction between match-based and mismatch-based learning may help to clarify the difference between these roles.
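A schematic contrast between the two kinds of learning, assuming generic update rules rather than the equations of any cited model (the learning rate and vigilance values are illustrative):

    import numpy as np

    def match_based_update(w, x, vigilance=0.8, lr=0.5):
        """ART-style match-based learning: adapt stored weights only when
        the input sufficiently matches the stored expectation."""
        match = np.minimum(w, x).sum() / x.sum()
        if match >= vigilance:
            w = w + lr * (np.minimum(w, x) - w)
        return w  # a poor match leaves the memory intact

    def mismatch_based_update(w, x, lr=0.5):
        """Error-driven (mismatch-based) learning: the update is proportional
        to the mismatch itself, so novel inputs erode old memories."""
        return w + lr * (x - w)

    w = np.array([1.0, 1.0, 0.0])   # stored memory
    x = np.array([0.0, 0.0, 1.0])   # novel, mismatching input
    print(match_based_update(w.copy(), x))     # [1.  1.  0. ]: memory protected
    print(mismatch_based_update(w.copy(), x))  # [0.5 0.5 0.5]: memory eroded

Match-based learning of this kind supports stable long-term storage, whereas mismatch-based learning is better suited to continuously recalibrating quantities such as movement gains.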
To understand schizophrenia, a linking hypothesis is needed that shows how brain mechanisms give rise to behavioral functions in normal individuals, and how breakdowns in these mechanisms lead to the behavioral symptoms of schizophrenia. Such a linking hypothesis is now available, and it complements the discussion offered by Phillips & Silverstein (P&S).
I agree with Quartz & Sejnowski's points, which are familiar to many scientists. A number of models with the sought-after properties, however, are overlooked, while models without them are highlighted. I will review nonstationary learning, links between development and learning, locality, stability, learning throughout life, hypothesis testing that models the learner's problem domain, and active dendritic processes.
A number of examples are given of how localist models may incorporate distributed representations, without the types of nonlocal interactions that often render distributed models implausible. The need to analyze the information that is encoded by these representations is also emphasized as a metatheoretical constraint on model plausibility.
Steels & Belpaeme (S&B) ask how autonomous agents can derive perceptually grounded categories for successful communication, using color categorization as an example. Their comparison of nativism, empiricism, and culturalism, although interesting, does not include key biological and technological constraints for seeing color or learning color categories in realistic environments. Other neural models have successfully included these constraints.
Because “people create features to subserve the representation and categorization of objects” (Abstract), Schyns et al. “provide an account of feature learning in which the components of a representation have close ties to the categorization history of the organism” (sect. 1.1). This commentary surveys self-organizing neural models that clarify this process. These models suggest how “top-down information should constrain the search for relevant dimensions/features of categorization” (sect. 3.4.2).
Boundary completion and surface filling-in are computationally complementary processes whose multiple processing stages form processing streams that realize a hierarchical resolution of uncertainty. Such complementarity and uncertainty principles provide a new foundation for philosophical discussions about visual perception, and lead to neural explanations of difficult perceptual data.
“Chorus embodies an attempt to find out how far a mostly bottom-up approach to representation can be taken.” Models that embody both bottom-up and top-down learning have stronger computational properties and explain more data about representation than feedforward models do.