Background: Visual cortex neurons often respond to stimuli very differently on repeated trials. This trial-by-trial variability is known to be correlated among nearby neurons. Our long-term goal is to quantitatively estimate neuronal response variability, using multi-channel local field potential (LFP) data from single trials.
Methods: Acute experiments were performed on anesthetized (remifentanil, propofol, nitrous oxide) and paralyzed (gallamine triethiodide) cats. Computer-controlled visual stimuli were displayed on a gamma-corrected CRT monitor. For the principal experiment, two kinds of visual stimuli were used: drifting sine-wave gratings and a uniform mean-luminance gray screen. Each stimulus was delivered monocularly for 100 s per trial, in random order, for 10 trials. Multi-unit activity (MUA) and LFP signals were extracted from broadband raw data acquired from Areas 17 and 18 using A1x32 linear arrays (NeuroNexus) and the Open Ephys recording system. LFP signal processing was performed with Chronux, an open-source MATLAB toolbox. Current source density (CSD) analysis was performed on responses to briefly flashed full-field stimuli using the MATLAB toolbox CSDplotter (illustrated below). The common response variability (global noise) of the MUA was estimated using the model proposed by Schölvinck et al. [2015].
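As an illustration of the CSD step: the standard estimate is the negative second spatial derivative of the LFP along the probe, scaled by tissue conductivity. The sketch below implements that formula in Python; it is not the CSDplotter code used in the study, and the conductivity and electrode spacing are hypothetical placeholder values.

```python
# Illustrative sketch only: CSD(z) = -sigma * d^2(phi)/dz^2, estimated with
# a second spatial difference along the laminar probe. Not the CSDplotter
# implementation; conductivity and spacing are placeholders.
import numpy as np

def csd_second_derivative(lfp, spacing_um=100.0, sigma_s_per_m=0.3):
    """Estimate CSD from laminar LFP.

    lfp           : array (n_channels, n_samples), volts, ordered by depth
    spacing_um    : inter-contact spacing of the linear array (placeholder)
    sigma_s_per_m : assumed tissue conductivity in S/m (placeholder)
    """
    h = spacing_um * 1e-6                               # spacing in metres
    d2 = lfp[:-2, :] - 2.0 * lfp[1:-1, :] + lfp[2:, :]  # second spatial difference
    return -sigma_s_per_m * d2 / h**2                   # A/m^3; edge channels dropped

# Toy usage: 32 channels, 1 s at 1 kHz.
lfp = np.random.randn(32, 1000) * 1e-4
csd = csd_second_derivative(lfp)                        # shape (30, 1000)
```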
Results: On different trials, a given neuron responded with different firing to the same visual stimulus. Within a single trial, a neuron's firing rate also fluctuated across successive cycles of a drifting grating. When the animal was given extra anesthesia, neurons fired in a desynchronized pattern; with lighter levels of anesthesia, neuronal firing became more synchronized. By examining the cross-correlations of LFP signals recorded from different cortical layers, we found that the LFP signals could be divided into two groups (see the sketch below): those recorded in layer IV and above, and those from layers V and VI. Within each group, LFP signals recorded on different channels were highly correlated. These two groups were observed under both lighter and deeper anesthesia, and in both the sine-wave grating and uniform gray stimulus conditions. We also investigated correlations between LFP signals and global noise. Power in the LFP beta band was highly correlated with global noise when animals were under deeper anesthesia.
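A minimal sketch of the laminar grouping analysis, assuming simulated data: the channel-by-channel correlation matrix should show two self-correlated blocks, one for layer IV and above and one for layers V and VI. The boundary channel chosen here is a hypothetical example.

```python
# Channel-pair correlation matrix and a within- vs. between-block summary
# for a candidate layer IV/V boundary. All data are placeholders.
import numpy as np

lfp = np.random.randn(32, 100_000)          # channels x samples (placeholder)
corr = np.corrcoef(lfp)                     # 32 x 32 correlation matrix

split = 20                                  # hypothetical layer IV/V boundary channel
upper = corr[:split, :split][np.triu_indices(split, k=1)].mean()
lower = corr[split:, split:][np.triu_indices(32 - split, k=1)].mean()
between = corr[:split, split:].mean()
# Two laminar groups show up as upper, lower >> between.
print(upper, lower, between)
```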
Conclusions: Brain states contribute to variations in neuronal responses. The raw LFP correlation results suggest that LFP data should be analyzed according to their laminar organization. The correlation of low-frequency LFP with global noise under deeper anesthesia gives us some insight into predicting noise from single-trial data, and we hope to extend this analysis to lighter anesthesia in the future.
Background: The expression, localization, and function of the endocannabinoid system have been well characterized in recent years in the monkey retina and in the primary thalamic relay, the dorsal lateral geniculate nucleus (dLGN). Few data are available on the cortical recipient structures of the dLGN, namely the primary visual cortex (V1). The goal of this study was to characterize the expression and localization of the metabotropic cannabinoid receptor type 1 (CB1R), the synthesizing enzyme N-acyl phosphatidyl-ethanolamine phospholipase D (NAPE-PLD), and the degradation enzyme fatty acid amide hydrolase (FAAH) in the vervet monkey area V1.
Methods: Using Western blots and immunohistochemistry, we investigated the expression patterns of CB1R, NAPE-PLD, and FAAH in the vervet monkey primary visual cortex.
Results: CB1R, NAPE-PLD, and FAAH were expressed in the primary visual cortex throughout the rostro-caudal axis. CB1R showed very low levels of staining in cortical layer 4, with higher expression in all other cortical layers, especially layer 1. NAPE-PLD and FAAH expression was highest in layers 1, 2, and 3, and lowest in layer 4.
Conclusions: Interestingly, CB1R expression was very low in layer 4 of V1 compared with the other cortical layers. The visual information coming from the dLGN and entering layers 4Cα (magnocellular input) and 4Cβ (parvocellular input) may therefore be modulated by the higher expression levels of CB1R in cortical layers 2 and 3 on the way to the dorsal and ventral visual streams. This is further supported by the higher expression of NAPE-PLD and FAAH in the outer cortical layers. These data indicate that the CB1R system can influence the network of activity patterns in the visual stream after the visual information has reached area V1. These novel results provide insights into the role of endocannabinoids in the modulation of cortical visual inputs and, hence, visual perception.
Background: Visual deficits caused by ocular disease or trauma to the visual system can produce lasting damage, and available treatment options are insufficient. However, recent research has focused on neural plasticity as a means to regain visual abilities. To better understand the involvement of neural plasticity and reorganization in partial vision restoration, we aimed to evaluate the partial recovery of a visual deficit over time using three behavioural tests. In our study, a partial optic nerve crush (ONC) serves as an induced visual deficit, allowing for residual vision from surviving cells.
Methods: Three behavioural tests (optokinetic reflex, object recognition, and visual cliff) were conducted in 9 mice prior to a bilateral, partial ONC, and then 1, 3, 7, 14, 21, and 28 days after the ONC. The optokinetic reflex test measured the tracking reflex in response to moving sinusoidal gratings; the gratings increase in spatial frequency until a reflex is no longer observed, i.e., until a visual acuity threshold is reached (sketched below). The object recognition test examines the animal's exploratory behaviour and its capacity to distinguish high- versus low-contrast objects. The visual cliff test also evaluates exploratory behaviour, simulating a cliff to probe the animal's depth perception. Together, the three tests estimate the rodent's visual abilities at different levels of the visual pathway.
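To make the acuity-threshold logic concrete, here is a minimal sketch assuming a geometric step rule; the actual starting frequency, step size, and ceiling of the protocol are not specified in the abstract and are placeholders here.

```python
# Raise the grating's spatial frequency until the tracking reflex disappears;
# the last tracked frequency is taken as the acuity threshold. Illustrative only.
def acuity_threshold(reflex_observed, start_cpd=0.05, step=1.25, max_cpd=1.0):
    """reflex_observed(freq_cpd) -> bool; returns the highest spatial
    frequency (cycles/degree) at which a tracking reflex was still observed."""
    freq = start_cpd
    while freq * step <= max_cpd and reflex_observed(freq * step):
        freq *= step
    return freq

# Toy usage: a simulated animal that stops tracking above 0.4 cpd.
print(acuity_threshold(lambda f: f < 0.4))
```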
Results: The partial optic nerve crush resulted in a total loss of visual acuity as measured by the optokinetic reflex, and the deficit showed no improvement over the following 4 weeks. The visual cliff test showed a non-significant decrease in deep-end preference 1 day post-ONC, but not on subsequent test occasions. The object recognition test showed no significant trends.
Conclusions: In conclusion, the optokinetic reflex test showed a significant loss of function following the visual deficit, but no recovery. However, a complementary pilot study shows visual recovery with lighter crush intensities. Spatial visual function did not seem to be affected by the ONC, suggesting that the object recognition and visual cliff tests, in their current design, may rely on somatosensory means of exploration.
Background: The perception of visual forms is crucial for effective interactions with our environment and for the recognition of visual objects. Determining the codes underlying this function is thus a fundamental theoretical objective in the study of visual form perception. The vast majority of research in the field is based on a hypothetico-deductive approach: a theory is first formulated, predictions are derived from it, and experimental tests are then conducted. After decades of application of this approach, the field remains far from a consensus on the features underlying the representation of visual form. Our goal is to determine, without theoretical a priori assumptions or any other bias, the information underlying the discrimination and recognition of 3D visual forms in normal human adults.
Methods: To this end, the adaptive bubbles technique developed by Wang et al. [2011] was applied to six 3D synthetic objects presented under views that varied from one trial to another. This technique is based on the presentation of stimuli that are partially revealed through Gaussian windows, whose locations are random and whose number is adjusted to maintain a set performance criterion (sketched below). Gradually, the experimental program uses participants' performance to determine the stimulus regions that participants use to recognize the objects. The synthetic objects used in this study are unfamiliar and were generated with a program developed in C. Edward Connor's lab at the Johns Hopkins University School of Medicine.
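A minimal sketch of the bubbles logic in Python: the stimulus is revealed only through randomly centred Gaussian apertures, and the number of apertures is nudged trial by trial toward a performance criterion. This is not Wang et al.'s implementation; all parameter values are illustrative.

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma=10.0, rng=np.random):
    """Opacity mask in [0, 1]: sum of randomly centred 2D Gaussians."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.randint(0, h), rng.randint(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def update_n_bubbles(n, correct):
    """Simple 1-up/1-down nudge toward a fixed performance criterion."""
    return max(1, n - 1) if correct else n + 1

stimulus = np.random.rand(256, 256)                  # placeholder image
revealed = stimulus * bubbles_mask(stimulus.shape, n_bubbles=20)
n_next = update_n_bubbles(20, correct=True)          # fewer bubbles next trial
```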
Results: The results were integrated across participants to establish the regions of the presented stimuli that determine observers' ability to recognize them, i.e., the diagnostic attributes. The results are reported graphically as Z-score maps superimposed on silhouettes of the objects presented during the experiment. This mapping quantifies the importance of the different regions on the visible surface of an object for its recognition by the participants.
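As a sketch of one classification-image style way such a Z-score map can be built, assuming simulated masks and responses: compare the average revelation mask on correct versus incorrect trials, pixel by pixel, and z-score the difference.

```python
import numpy as np

rng = np.random.default_rng(4)
masks = rng.random((500, 64, 64))        # per-trial bubble masks (placeholder)
correct = rng.random(500) < 0.75         # per-trial accuracy (placeholder)

diff = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
zmap = (diff - diff.mean()) / diff.std() # high |z| pixels = diagnostic regions
```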
Conclusions: The diagnostic attributes identified are best described in terms of surface fragments. Some of these fragments are located on or near the outer boundary of the stimulus, while others are relatively distant from it. The overlap between the effective attributes for different viewpoints of the same object is minimal. This suggests that the features underlying object recognition are viewpoint-specific; in other words, they do not generalize across viewpoints.
Background: The concept of stochastic facilitation suggests that the addition of precise amounts of white noise can improve the perceptibility of a weak-amplitude stimulus. We know from previous research that tactile and auditory noise can each facilitate visual perception. Here we wanted to see whether the effects of stochastic facilitation generalise to a reaction time paradigm, and whether reaction times are correlated with tactile thresholds. We know that when multiple sensory systems are stimulated simultaneously, reaction times are faster than to either stimulus alone, and can even be faster than predicted from the unisensory reaction time distributions (the race model).
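Concretely, the race-model bound (Miller's inequality) states that the multisensory reaction-time CDF should not exceed the sum of the two unisensory CDFs; where it does, integration beyond a mere race is implied. A minimal sketch with simulated data:

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of reaction times evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)                  # unisensory A (simulated, ms)
rt_b = rng.normal(340, 50, 200)                  # unisensory B (simulated, ms)
rt_ab = rng.normal(290, 35, 200)                 # multisensory (simulated, ms)

t = np.linspace(100, 600, 200)
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_b, t), 1.0)
violations = ecdf(rt_ab, t) > bound              # True where the race model fails
print(t[violations])
```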
Methods: Five participants were each tested in five blocks, each containing a different background noise level, randomly ordered across sessions. At each noise level, they performed a tactile threshold detection task and a tactile reaction time task.
Results: Both tactile thresholds and tactile reaction times were significantly affected by the background white noise. While the optimal white-noise amplitude differed across participants, the lowest average threshold was obtained with white noise presented binaurally at 70 dB. The reaction times were analysed by fitting an ex-Gaussian distribution, the convolution of a Gaussian and an exponential function (see the sketch below). The white noise significantly affected the exponential parameter (tau) in a way that is compatible with the facilitation of thresholds.
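For illustration, an ex-Gaussian can be fitted with SciPy's exponnorm distribution (the exponentially modified Gaussian, parameterised as K = tau/sigma). The reaction times below are simulated; the study's own fitting procedure may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rts = rng.normal(300, 30, 500) + rng.exponential(80, 500)  # simulated RTs (ms)

K, mu, sigma = stats.exponnorm.fit(rts)   # returns (K, loc, scale)
tau = K * sigma                           # exponential component of the RT tail
print(f"mu={mu:.1f} ms, sigma={sigma:.1f} ms, tau={tau:.1f} ms")
```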
Conclusions: We therefore conclude that multisensory reaction time facilitation can, at least in part, be explained by stochastic facilitation of the neural signals.
Background: Saccades are rapid and abrupt eye movements that allow us to change the point of fixation very quickly. Saccades are generally made to visual points of interest, but we can also saccade to non-visual objects that attract our attention. While there is a plethora of studies investigating saccadic eye movements to visual targets, there is very little evidence of how eye movement planning occurs when individuals are performing eye movements to non-visual targets across different sensory modalities.
Methods: Fifteen adults with normal or corrected-to-normal vision made saccades to visual, auditory, tactile, or proprioceptive targets. In the auditory condition, a speaker was positioned at one of eight locations along a circle surrounding a central fixation point. In the proprioceptive condition, the participant's finger was placed at one of the eight locations. In the tactile condition, participants were touched on the right forearm at one of four eccentric locations, to the left and right of a central point. Eye movements were made in complete darkness.
Results: We compared the precision and accuracy of the eye movements to tactile, proprioceptive, and auditory targets in the dark. Overall, both the precision and the accuracy of movements to non-visual targets were significantly lower than for visual targets.
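A sketch of the two endpoint measures, under plausible definitions the abstract does not spell out: accuracy as the distance of the mean landing position from the target, and precision as the RMS scatter of endpoints around their own mean.

```python
import numpy as np

def accuracy_and_precision(endpoints, target):
    """endpoints: (n, 2) saccade landing positions (deg); target: (2,)."""
    endpoints = np.asarray(endpoints, dtype=float)
    centroid = endpoints.mean(axis=0)
    accuracy = np.linalg.norm(centroid - np.asarray(target))    # systematic error
    precision = np.sqrt(((endpoints - centroid) ** 2).sum(axis=1).mean())
    return accuracy, precision

# Toy usage with simulated endpoints around a target at (10, 0) degrees.
rng = np.random.default_rng(2)
pts = rng.normal(loc=[10.0, 0.5], scale=1.5, size=(40, 2))
print(accuracy_and_precision(pts, (10.0, 0.0)))
```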
Conclusions: These differences emphasize the central role of the visual system in saccade planning.
Background: The ability to track objects as they move is critical for successful interaction with objects in the world. The multiple object tracking (MOT) paradigm has demonstrated that, within limits, our visual attention capacity allows us to track multiple moving objects among distractors. Very little is known about dynamic auditory attention and the role of multisensory binding in attentional tracking. Here, we examined whether dynamic sounds congruent with visual targets could facilitate tracking in a 3D-MOT task.
Methods: Participants tracked one or multiple target-spheres among identical distractor-spheres during 8 seconds of movement in a virtual cube. In the visual condition, targets were identified with a brief colour change, but were then indistinguishable from the distractors during the movement. In the audio-visual condition, the target-spheres were accompanied by a sound, which moved congruently with the change in the target’s position. Sound amplitude varied with distance from the observer and inter-aural amplitude difference varied with azimuth.
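One plausible rendering rule consistent with this description: overall gain falls with simulated distance (an inverse-distance law) and the left/right balance follows azimuth (linear panning). These specific gain laws are illustrative assumptions, not the experiment's actual audio engine.

```python
import numpy as np

def stereo_gains(azimuth_deg, distance, ref_distance=1.0, pan_strength=0.8):
    """(left, right) amplitude gains; azimuth < 0 means source to the left."""
    g = ref_distance / max(distance, 1e-6)                # distance attenuation
    pan = pan_strength * np.sin(np.radians(azimuth_deg))  # -1 (left) .. +1 (right)
    return g * (1.0 - pan) / 2.0, g * (1.0 + pan) / 2.0

print(stereo_gains(azimuth_deg=-30.0, distance=2.0))      # louder in the left ear
```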
Results: With one target, performance was better in the audio-visual condition, which suggests that congruent sounds can facilitate attentional visual tracking. However, with multiple targets, the sounds did not facilitate tracking.
Conclusions: This suggests that audiovisual binding may not be possible when attention is divided between multiple targets.
Background: Research suggests that the analysis of facial expressions by a healthy brain takes place approximately 170 ms after the presentation of a facial expression, in the superior temporal sulcus and the fusiform gyrus, mostly in the right hemisphere. Some researchers argue that a fast pathway through the amygdala allows automatic and early emotional processing around 90 ms after stimulation. This processing would occur subconsciously, even before the stimulus is consciously perceived, and can be approximated by presenting the stimuli rapidly at the periphery of the fovea. The present study aimed to identify the neural correlates of the peripheral and simultaneous presentation of emotional expressions using a frequency-tagging paradigm.
Methods: The presentation of emotional facial expressions at a specific frequency induces in the visual cortex a stable and precise response at the presentation frequency [i.e., a steady-state visual evoked potential (ssVEP)] that can be used as a frequency tag to follow the cortical processing of that stimulus. Here, the use of different stimulation frequencies allowed us to label the different facial expressions presented simultaneously and to obtain a reliable cortical response associated with (I) each of the emotions and (II) the different repeated presentation rates (1/0.170 s ≈ 5.8 Hz, 1/0.090 s ≈ 10.8 Hz). To identify the regions involved in emotional discrimination, we subtracted the brain activity induced by the rapid presentation of six different emotional expressions from the activity induced by the repeated presentation of the same emotion (reduced by neural adaptation). The results were compared according to the hemisphere in which attention was solicited, the emotion, and the stimulation frequency.
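For illustration, a tagged response of this kind is typically quantified as a signal-to-noise ratio: the spectral amplitude at the tag frequency divided by the mean amplitude of neighbouring frequency bins. A minimal sketch with simulated EEG; all parameters are placeholders.

```python
import numpy as np

def ssvep_snr(eeg, fs, f_tag, n_neighbours=10, skip=1):
    """eeg: 1-D signal; fs: sampling rate (Hz); f_tag: tag frequency (Hz)."""
    amp = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_tag)))            # bin closest to the tag
    noise = np.r_[amp[k - skip - n_neighbours:k - skip],
                  amp[k + skip + 1:k + skip + 1 + n_neighbours]]
    return amp[k] / noise.mean()

rng = np.random.default_rng(3)
fs, dur, f = 500.0, 20.0, 5.8                            # Hz, s, tag frequency
t = np.arange(int(fs * dur)) / fs
eeg = np.sin(2 * np.pi * f * t) + rng.normal(0, 2.0, t.size)
print(ssvep_snr(eeg, fs, f_tag=f))
```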
Results: The signal-to-noise ratio of the cerebral oscillations associated with the processing of fearful expressions was stronger in regions specific to emotional processing when the faces were presented in the subjects' peripheral vision, unbeknownst to them. In addition, peripheral processing of fear at 10.8 Hz was associated with greater activation in the gamma 1 and gamma 2 frequency bands in the expected regions (frontotemporal and T6), as well as desynchronization in the alpha frequency band over the temporal regions. This modulation of spectral power was independent of attentional demand.
Conclusions: These results suggest that fearful expressions presented in peripheral vision and outside the attentional focus elicit an increase in brain activity, especially in the temporal lobe. The localization of this activity, as well as the optimal stimulation frequency found for this facial expression, suggests that it is processed by the fast pathway originating in the magnocellular layers.