Visual neuroscience

Visual neuroscience is a branch of neuroscience that studies the visual system, from the retina to the brain's visual cortex. Its main goal is to understand how neural activity gives rise to visual perception and to behaviors that depend on vision. Historically, visual neuroscience has focused primarily on how the brain (and in particular the visual cortex) responds to light projected from static images onto the retina.[1] While this provides a reasonable account of how a static image is perceived, it does not fully explain how we perceive the world as it really is: an ever-changing, ever-moving three-dimensional environment. The topics summarized below are representative of this area, but far from exhaustive. For a broader treatment of the computational link between neural activity and visual perception and behavior, see the textbook "Understanding Vision: Theory, Models, and Data" (Oxford University Press, 2014).[2]

Face processing

A recent study[3] using event-related potentials (ERPs) linked increased neural activity in the occipito-temporal region of the brain to the visual categorization of facial expressions.[3] The results focus on a negative peak in the ERP that occurs about 170 milliseconds after stimulus onset.[4][5] This component, called the N170, was measured with electrodes over the occipito-temporal region, an area already known to be modulated by face stimuli. EEG and ERP methods offer very high temporal resolution (here, 4 milliseconds), which makes these kinds of experiments well suited for accurately estimating and comparing how long the brain takes to perform a given function.

The researchers[3] used classification image techniques[6] to determine which parts of a complex visual stimulus (such as a face) participants rely on when asked to assign it to a category or emotion. They computed the diagnostic features when the stimulus face exhibited one of five different emotions: faces exhibiting fear were distinguished by widened eyes, while faces exhibiting happiness were distinguished by the change in the mouth that makes a smile. Regardless of the expression of the stimulus face, the region near the eyes affected the EEG before the regions near the mouth. This revealed a sequential, predetermined order to the perception and processing of faces, with the eyes processed first and the mouth and nose processed afterwards. This downward integration only occurred when the lower facial features were crucial to categorizing the stimulus. The contrast is clearest when comparing fearful and happy faces: the N170 peaked slightly earlier for fear stimuli, at about 175 milliseconds, meaning participants needed less time to recognize the expression, as expected because only the eyes need to be processed to recognize the emotion. When processing a happy expression, where the mouth is crucial to categorization, downward integration must take place, and the N170 peak accordingly occurred later, at around 185 milliseconds.

Eventually, visual neuroscience aims to explain fully how the visual system processes changes in faces as well as objects. This would give a complete picture of how the world is continuously perceived visually, and may provide insight into the link between perception and consciousness.
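The latency comparison described above can be made concrete with a small sketch. The following Python example is illustrative only, not the cited study's analysis code: it averages simulated single-trial epochs into an ERP and reads off the latency of the most negative deflection in an N170 search window. The sampling rate of 250 Hz (which gives the 4 ms resolution mentioned above), the peak amplitudes, and the noise level are all assumptions.

```python
# Minimal sketch (hypothetical data, not from the cited study): estimating an
# N170-like peak latency from epoched EEG.
import numpy as np

fs = 250                              # assumed sampling rate in Hz -> 4 ms per sample
t = np.arange(-0.1, 0.4, 1 / fs)      # epoch time axis: -100 ms to 400 ms around stimulus onset

def erp_peak_latency(epochs, t, window=(0.13, 0.22)):
    """Average single-trial epochs into an ERP and return the latency (ms) of the
    most negative deflection inside the search window (in seconds)."""
    erp = epochs.mean(axis=0)                      # average over trials
    mask = (t >= window[0]) & (t <= window[1])     # restrict to the N170 range
    peak_idx = np.argmin(erp[mask])                # most negative sample
    return 1000 * t[mask][peak_idx]

# Simulated single-trial data for two conditions (placeholder for real recordings).
rng = np.random.default_rng(0)
def simulate(peak_s, n_trials=100):
    signal = -5e-6 * np.exp(-((t - peak_s) ** 2) / (2 * 0.015 ** 2))   # negative "N170" bump
    return signal + 2e-6 * rng.standard_normal((n_trials, t.size))     # add trial-by-trial noise

lat_fear = erp_peak_latency(simulate(0.175), t)    # assumed ~175 ms peak for fear
lat_happy = erp_peak_latency(simulate(0.185), t)   # assumed ~185 ms peak for happiness
print(f"fear N170 latency = {lat_fear:.0f} ms, happy N170 latency = {lat_happy:.0f} ms")
```

Averaging over trials is what makes the small, stimulus-locked N170 deflection visible above the ongoing EEG noise; the same peak-picking step applied to the two conditions yields the latency difference discussed above.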

Perceptions of light and shadows

Recently, scientists have conducted experiments challenging the hierarchical account of lightness perception. These experiments suggest that the perception of lightness arises at a much higher level of cognition, involving the interpretation of illumination and shadows, rather than being computed at the level of single units early in the visual pathway.[7] This idea is best explained by examining two different versions of two common visual illustrations. The first set of illustrations produces a phenomenon known as the induction effect. The image consists of two identical gray squares, one surrounded by black and the other by white; the gray square on the white background appears darker than the identical gray square on the black background. The traditional explanation is lateral inhibition: a cell whose receptive field falls on the gray square surrounded by white receives more lateral inhibition from its bright surround, fires less often, and the square therefore appears darker.[7] The second set of illustrations shows the Craik-O'Brien-Cornsweet illusion: two fields of the same medium gray meet at a sharp dark-to-light edge, and the luminance gradually fades back to the same medium gray on either side of the edge, yet the field bordering the light side of the edge appears uniformly lighter. In the second versions of both figures, the same two effects appear with much greater intensity: because the shapes are drawn as three-dimensional objects, the visual system interprets the seemingly darker areas as shadows.[8] This kind of explanation was first introduced by Ernst Mach in 1866.
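The traditional lateral-inhibition account can be sketched computationally. The following Python example is an illustration only, not taken from the cited work: it approximates lateral inhibition with a difference-of-Gaussians center-surround filter and applies it to a simplified induction-effect display. The image size, square placement, and filter widths are arbitrary assumptions.

```python
# Minimal sketch: a difference-of-Gaussians "center-surround" filter as a
# stand-in for lateral inhibition, applied to the induction-effect display.
import numpy as np
from scipy.ndimage import gaussian_filter

# Build the display: a white half and a black half, each containing an identical gray square.
img = np.zeros((200, 400))
img[:, :200] = 1.0                 # white background (left half)
img[:, 200:] = 0.0                 # black background (right half)
img[75:125, 75:125] = 0.5          # gray square on white
img[75:125, 275:325] = 0.5         # identical gray square on black

# Center-surround response: narrow excitatory center minus broad inhibitory surround.
center = gaussian_filter(img, sigma=2)
surround = gaussian_filter(img, sigma=10)
response = center - surround

# The same physical gray yields a lower (more inhibited) response on the white
# background, consistent with that square appearing darker.
on_white = response[90:110, 90:110].mean()
on_black = response[90:110, 290:310].mean()
print(f"mean response on white: {on_white:.3f}, on black: {on_black:.3f}")
```

In this sketch the bright surround raises the inhibitory term for the square on white, so its net response is lower than for the identical square on black; the point of the experiments cited above is that such a single-unit mechanism cannot, on its own, account for the much stronger effects seen when the displays are interpreted as three-dimensional scenes with shadows.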

Visual neuroscience and clinical neuropsychology

Continued research in visual neuroscience has produced an ever-growing understanding of the human visual system, filling in many of the steps between the moment light strikes the retina and the visual perception of the world. Insight into this process allows clinical psychologists to better understand what may be causing visual disorders in their patients. While understanding the underlying mechanism of a visual disorder does not by itself provide a treatment, it gives both patient and clinician a scientific account of the condition, rooted in visual neuroscience research, rather than only a descriptive account of the patient's symptoms.[9]

References

  1. Rainer, G. (2008). Visual neuroscience: computational brain dynamics of face processing. Current Biology 17(21), R933–R934.
  2. Zhaoping, L. (2014). Understanding Vision: Theory, Models, and Data. Oxford University Press.
  3. Schyns, P. G., Petro, L. S., & Smith, M. L. (2007). Dynamics of visual information integration in the brain to categorize facial expressions. Current Biology 17, 1580–1585.
  4. Eimer, M., & Holmes, A. (2007). Event-related brain potential correlates of emotional face processing. Neuropsychologia 45, 15–31.
  5. Vuilleumier, P., & Pourtois, G. (2007). Distributed and interactive brain mechanisms during emotion face perception: evidence from functional neuroimaging. Neuropsychologia 45, 174–194.
  6. Gosselin, F., & Schyns, P. G. (2001). Bubbles: a technique to reveal the use of information in recognition tasks. Vision Research 41, 2261–2271.
  7. Paradiso, M. (2000). Visual neuroscience: illuminating the dark corners. Current Biology 10(1), R15–R18.
  8. Logvinenko, A. D. (1999). Lightness induction revisited. Perception 28, 803–816.
  9. Schwartz, S. H. (2010). Visual Perception: A Clinical Orientation (4th ed.). New York: McGraw-Hill.