Background: The expression, localization, and function of the endocannabinoid system have been well characterized in recent years in the monkey retina and in the primary thalamic relay, the dorsal lateral geniculate nucleus (dLGN). Few data are available on the cortical recipient structures of the dLGN, namely the primary visual cortex (V1). The goal of this study is to characterize the expression and localization of the metabotropic cannabinoid receptor type 1 (CB1R), the synthesizing enzyme N-acyl phosphatidyl-ethanolamine phospholipase D (NAPE-PLD), and the degradation enzyme fatty acid amide hydrolase (FAAH) in area V1 of the vervet monkey.
Methods: Using Western blots and immunohistochemistry, we investigated the expression patterns of CB1R, NAPE-PLD, and FAAH in the vervet monkey primary visual cortex.
Results: CB1R, NAPE-PLD, and FAAH were expressed in the primary visual cortex throughout the rostro-caudal axis. CB1R showed very low levels of staining in cortical layer 4, with higher expression in all other cortical layers, especially layer 1. NAPE-PLD and FAAH expression was highest in layers 1, 2, and 3, and lowest in layer 4.
Conclusions: CB1R expression was very low in layer 4 of V1 compared with the other cortical layers. Visual information arriving from the dLGN into layers 4Calpha (magnocellular) and 4Cbeta (parvocellular) may therefore be modulated by the higher expression levels of CB1R in cortical layers 2 and 3 on its way to the dorsal and ventral visual streams. This is further supported by the higher expression of NAPE-PLD and FAAH in the outer cortical layers. These data indicate that the CB1R system can influence the network of activity patterns in the visual stream after visual information has reached area V1. These novel results provide insights into the role of endocannabinoids in the modulation of cortical visual inputs and, hence, visual perception.
Background: Visual deficits caused by ocular disease or trauma to the visual system can produce lasting damage, and treatment options remain insufficient. Recent research, however, has focused on neural plasticity as a means to regain visual abilities. To better understand the involvement of neural plasticity and reorganization in partial vision restoration, we aim to evaluate the partial recovery of a visual deficit over time using three behavioural tests. In our study, a partial optic nerve crush (ONC) serves as an induced visual deficit, leaving residual vision from surviving cells.
Methods: Three behavioural tests—optokinetic reflex, object recognition, and visual cliff—were conducted in 9 mice prior to a bilateral, partial ONC, then 1, 3, 7, 14, 21, and 28 days after the ONC. The optokinetic reflex test measured the tracking reflex in response to moving sinusoidal gratings. These gratings increase in spatial frequency until a reflex is no longer observed, i.e., a visual acuity threshold is reached. The object recognition test examines the animal’s exploratory behaviour to assess its capacity to distinguish high- from low-contrast objects. The visual cliff test also evaluates exploratory behaviour, by simulating a cliff to probe the animal’s depth perception. Together, the three tests estimate the rodent’s visual abilities at different levels of the visual pathway.
Results: The partial optic nerve crush resulted in a total loss of visual acuity as measured by the optokinetic reflex, and the deficit showed no improvement over the following 4 weeks. The visual cliff test showed a non-significant decrease in deep-end preference 1 day post ONC, but not on subsequent test occasions. The object recognition test showed no significant trends.
Conclusions: The optokinetic reflex test showed a significant loss of function following the induced deficit, but no recovery. However, a complementary pilot study shows visual recovery with lighter crush intensities. Spatial visual function did not seem to be affected by the ONC, suggesting that the object recognition and visual cliff tests, in their current design, may rely on somatosensory means of exploration.
Background: The perception of visual form is crucial for effective interaction with our environment and for the recognition of visual objects. Determining the codes underlying this function is thus a fundamental theoretical objective in the study of visual form perception. The vast majority of research in the field is based on a hypothetico-deductive approach: a theory is first formulated, predictions are derived from it, and experimental tests are then conducted. After decades of application of this approach, the field remains far from a consensus on the features underlying the representation of visual form. Our goal is to determine, without theoretical a priori assumptions or any other bias, the information underlying the discrimination and recognition of 3D visual forms in normal human adults.
Methods: To this end, the adaptive bubbles technique developed by Wang et al. [2011] was applied to six 3D synthetic objects presented under views that varied from one trial to another. This technique is based on the presentation of stimuli that are partially revealed through Gaussian windows, whose locations are random and whose number is adjusted to maintain a set performance criterion. Gradually, the experimental program uses participants’ performance to determine the stimulus regions they rely on to recognize the objects. The synthetic objects used in this study are unfamiliar and were generated with a program developed in C. Edward Connor’s lab, Johns Hopkins University School of Medicine.
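The core of the bubbles procedure—random Gaussian apertures plus an adaptive count—can be sketched as follows. This is a simplified illustration under our own assumptions (function names, window parameters, and a basic up/down rule), not the actual Wang et al. [2011] implementation:

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Revealing mask built as a sum of Gaussian windows ("bubbles")
    centred at random locations, clipped to [0, 1]."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.uniform(0, h), rng.uniform(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def update_n_bubbles(n_bubbles, correct):
    """Simplified adaptive rule: reveal less after a correct response,
    more after an error, to hold performance near a criterion."""
    return max(1, n_bubbles - 1) if correct else n_bubbles + 1

rng = np.random.default_rng(0)
stimulus = rng.uniform(size=(128, 128))        # stand-in for a rendered object view
mask = bubbles_mask((128, 128), n_bubbles=15, sigma=8.0, rng=rng)
revealed = stimulus * mask                     # element-wise windowing of the stimulus
```

Accumulating the masks of correct versus incorrect trials, per pixel, is what ultimately yields the diagnostic-region maps described in the Results.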
Results: The results were integrated across participants to establish the regions of the presented stimuli that determine the observers’ ability to recognize them—i.e., the diagnostic attributes. The results are reported graphically as Z-score maps superimposed on silhouettes of the objects presented during the experiment. This mapping quantifies the importance of the different regions of an object’s visible surface for its recognition by the participants.
Conclusions: The diagnostic attributes identified are best described as surface fragments. Some of these fragments are located on or near the outer edge of the stimulus, whereas others are relatively distant from it. The overlap between the effective attributes for different viewpoints of the same object is minimal. This suggests that the features underlying object recognition are viewpoint-specific; in other words, they do not generalize across viewpoints.
Background: The concept of stochastic facilitation suggests that the addition of precise amounts of white noise can improve the perceptibility of a weak-amplitude stimulus. Previous research has shown that tactile and auditory noise can each facilitate visual perception. Here we asked whether the effects of stochastic facilitation generalize to a reaction time paradigm, and whether reaction times are correlated with tactile thresholds. When multiple sensory systems are stimulated simultaneously, reaction times are faster than for either stimulus alone, and often faster than predicted from the unisensory reaction time distributions (the race model).
Methods: Five participants were each tested in five blocks, each containing a different background noise level, randomly ordered across sessions. At each noise level, they performed a tactile threshold detection task and a tactile reaction time task.
Results: Both tactile thresholds and tactile reaction times were significantly affected by the background white noise. While the optimal white noise amplitude differed across participants, the lowest average threshold was obtained with white noise presented binaurally at 70 dB. The reaction times were analysed by fitting an ex-Gaussian, the convolution of a Gaussian function with an exponential decay function. The white noise significantly affected the exponential parameter (tau) in a way that is compatible with the facilitation of thresholds.
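An ex-Gaussian fit of this kind can be reproduced with standard tools. A minimal sketch using SciPy's exponentially modified normal distribution; the simulated reaction times and parameter values (in ms) are illustrative, not the study's data:

```python
import numpy as np
from scipy import stats

def fit_exgaussian(rts):
    """Fit an ex-Gaussian (convolution of a Gaussian and an exponential)
    to reaction times; returns (mu, sigma, tau)."""
    # SciPy parameterizes the exponentially modified normal as
    # K = tau / sigma, loc = mu, scale = sigma
    k, loc, scale = stats.exponnorm.fit(rts)
    return loc, scale, k * scale

# simulated RTs: Gaussian stage (mu = 300 ms, sigma = 30 ms)
# plus an exponential tail (tau = 100 ms)
rng = np.random.default_rng(0)
rts = rng.normal(300.0, 30.0, 5000) + rng.exponential(100.0, 5000)
mu, sigma, tau = fit_exgaussian(rts)
```

The tau parameter recovered this way captures the slow tail of the RT distribution, which is the component reported as sensitive to the noise level.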
Conclusions: We therefore conclude that multisensory reaction time facilitation can, at least in part, be explained by stochastic facilitation of the neural signals.
Background: Saccades are rapid and abrupt eye movements that allow us to change the point of fixation very quickly. Saccades are generally made to visual points of interest, but we can also saccade to non-visual objects that attract our attention. While there is a plethora of studies investigating saccadic eye movements to visual targets, there is very little evidence of how eye movement planning occurs when individuals are performing eye movements to non-visual targets across different sensory modalities.
Methods: Fifteen adults with normal, or corrected-to-normal, vision made saccades to either visual, auditory, tactile, or proprioceptive targets. In the auditory condition a speaker was positioned at one of eight locations along a circle surrounding a central fixation point. In the proprioceptive condition the participant’s finger was placed at one of the eight locations. In the tactile condition participants were touched on their right forearm in one of four eccentric locations, to the left and right of a central point. Eye movements were made in complete darkness.
Results: We compared the precision and accuracy of the eye movements to tactile, proprioceptive, and auditory targets in the dark. Overall, both precision and accuracy of movements to non-visual targets were significantly lower compared to visual targets.
Conclusions: These differences emphasize the central role of the visual system in saccade planning.
Background: The ability to track objects as they move is critical for successful interaction with objects in the world. The multiple object tracking (MOT) paradigm has demonstrated that, within limits, our visual attention capacity allows us to track multiple moving objects among distracters. Very little is known about dynamic auditory attention and the role of multisensory binding in attentional tracking. Here, we examined whether dynamic sounds congruent with visual targets could facilitate tracking in a 3D-MOT task.
Methods: Participants tracked one or multiple target-spheres among identical distractor-spheres during 8 seconds of movement in a virtual cube. In the visual condition, targets were identified with a brief colour change, but were then indistinguishable from the distractors during the movement. In the audio-visual condition, the target-spheres were accompanied by a sound, which moved congruently with the change in the target’s position. Sound amplitude varied with distance from the observer and inter-aural amplitude difference varied with azimuth.
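The abstract does not give the exact audio mapping; the two cues it describes (distance-dependent amplitude and azimuth-dependent inter-aural amplitude difference) could be sketched as follows, with made-up constants and a hypothetical function name:

```python
import math

def stereo_gains(distance, azimuth_deg, ref_distance=1.0, pan_depth=0.5):
    """Hypothetical spatialization sketch: overall gain falls off with
    distance (inverse law, capped at 1), and an inter-aural level
    difference is applied as a sine-law pan with azimuth
    (negative azimuth = left of centre)."""
    gain = min(1.0, ref_distance / distance)
    pan = pan_depth * math.sin(math.radians(azimuth_deg))
    return gain * (1.0 - pan), gain * (1.0 + pan)  # (left, right)

left_c, right_c = stereo_gains(1.0, 0.0)    # centred target: equal gains
left_r, right_r = stereo_gains(1.0, 90.0)   # rightward target: right ear louder
left_f, right_f = stereo_gains(2.0, 0.0)    # farther target: both gains reduced
```

Updating these two gains on every frame of the sphere's trajectory yields a sound that moves congruently with the tracked target, as described above.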
Results: With one target, performance was better in the audio-visual condition, suggesting that congruent sounds can facilitate attentional visual tracking. However, with multiple targets, the sounds did not facilitate tracking.
Conclusions: This suggests that audiovisual binding may not be possible when attention is divided between multiple targets.
Background: Research suggests that the analysis of facial expressions by a healthy brain takes place approximately 170 ms after the presentation of a facial expression, in the superior temporal sulcus and the fusiform gyrus, mostly in the right hemisphere. Some researchers argue that a fast pathway through the amygdala allows automatic and early emotional processing around 90 ms after stimulation. This processing would occur subconsciously, even before the stimulus is consciously perceived, and can be approximated by presenting stimuli briefly in the periphery of the fovea. The present study aimed to identify the neural correlates of a peripheral and simultaneous presentation of emotional expressions using a frequency-tagging paradigm.
Methods: The presentation of emotional facial expressions at a specific frequency induces in the visual cortex a stable and precise response at the presentation frequency [i.e., a steady-state visual evoked potential (ssVEP)] that can be used as a frequency tag to follow the cortical processing of that stimulus. Here, the use of different stimulation frequencies allowed us to label the different facial expressions presented simultaneously and to obtain a reliable cortical response associated with (I) each of the emotions and (II) the different repeated presentation times (1/0.170 s ≈ 5.8 Hz, 1/0.090 s ≈ 10.8 Hz). To identify the regions involved in emotional discrimination, we subtracted the brain activity induced by the rapid presentation of six emotional expressions from the activity induced by the presentation of the same emotion (reduced by neural adaptation). The results were compared according to the hemifield in which attention was solicited, the emotion, and the stimulation frequency.
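The strength of a frequency-tagged response is typically quantified as a signal-to-noise ratio in the amplitude spectrum. A minimal, self-contained sketch; the neighbourhood size and the synthetic "EEG" signal are our own assumptions:

```python
import numpy as np

def tag_snr(eeg, fs, tag_hz, n_neighbours=10):
    """SNR at a tagged frequency: spectral amplitude at the tag divided
    by the mean amplitude of the surrounding frequency bins."""
    amp = np.abs(np.fft.rfft(eeg)) / eeg.size
    freqs = np.fft.rfftfreq(eeg.size, 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - tag_hz)))
    lo, hi = max(0, k - n_neighbours), k + n_neighbours + 1
    noise = np.r_[amp[lo:k], amp[k + 1:hi]].mean()
    return amp[k] / noise

# synthetic signal: a 5.8 Hz tag response buried in noise, 10 s at 250 Hz
fs = 250.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2.0 * np.pi * 5.8 * t) + rng.normal(0.0, 2.0, t.size)
snr = tag_snr(eeg, fs, 5.8)
```

With two tags running simultaneously, calling `tag_snr` once per tag frequency separates the cortical responses to the two expressions from the same recording.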
Results: The signal-to-noise ratio of the cerebral oscillations reflecting the processing of fearful expressions was stronger in regions specific to emotional processing when the expressions were presented in the subjects’ peripheral vision, unbeknownst to them. In addition, the peripheral processing of fear at 10.8 Hz was associated with greater activation within the gamma 1 and 2 frequency bands in the expected regions (frontotemporal and T6), as well as desynchronization in the alpha frequency bands over the temporal regions. This modulation of spectral power was independent of attentional demand.
Conclusions: These results suggest that fearful expressions presented in peripheral vision and outside the attentional focus elicit an increase in brain activity, especially in the temporal lobe. The localization of this activity, as well as the optimal stimulation frequency found for this facial expression, suggests that it is processed by the fast pathway of the magnocellular layers.
Background: All neurons of the visual system respond to differences in luminance. This neural response to visual contrast, also known as the contrast response function (CRF), follows a characteristic sigmoid shape that can be fitted with the Naka-Rushton equation. Four parameters define the CRF, and they are often used across visual research disciplines, since they describe selective variations of neural responses. As novel recording technologies have emerged, the capacity to record thousands of neurons simultaneously brings new challenges: processing and robustly analyzing ever larger amounts of data to maximize the outcome of our experimental measurements. Nevertheless, current guidelines for fitting neural activity with the Naka-Rushton equation have rarely been discussed in depth. In this study, we explore several methods of boundary setting and least-squares curve fitting for the CRF in order to avoid the pitfalls of blind curve fitting. Furthermore, we intend to provide recommendations for experimenters to better prepare a solid quantification of CRF parameters while also minimizing data acquisition time. For this purpose, we created a simplified theoretical model of spike-response dynamics, in which the firing rate of neurons is generated by a Poisson process. The spike trains generated by the theoretical model as a function of visual contrast intensity were then fitted with the Naka-Rushton equation. This allowed us to identify the combinations of parameters that are most important to adjust before performing experiments, to optimize the precision and efficiency of curve fitting (e.g., boundaries of the CRF parameters, number of trials, number of contrasts tested, contrast metric used, and the effect of including multi-unit spikes in a single CRF, among others). Several goodness-of-fit methods were also examined in order to achieve ideal fits.
With this approach, it is possible to anticipate the minimal requirements for gathering and analyzing data more efficiently in order to build stronger functional models.
Methods: Spike trains were randomly generated following a Poisson distribution in order to draw both an underlying theoretical curve and an empirical one. Random noise was added to the data to simulate empirical conditions. The contrast response function was then reconstructed from the simulated data and re-fitted with the Naka-Rushton equation. The two curves were compared, the idea being to determine the boundaries and conditions under which the curve fit is optimal. Statistical analysis was performed on the data to determine those conditions for experiments. Experiments were then conducted on mice and cats to verify the model.
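The simulate-and-refit loop can be sketched with standard tools. The parameter values, boundaries, and trial counts below are illustrative assumptions, not those of the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def naka_rushton(c, r_max, c50, n, r0):
    """Naka-Rushton CRF: R(c) = r0 + r_max * c^n / (c^n + c50^n)."""
    return r0 + r_max * c ** n / (c ** n + c50 ** n)

rng = np.random.default_rng(1)
contrasts = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8])

# "true" underlying curve (spikes/s): r_max=30, c50=0.15, n=2, baseline=2
rates = naka_rushton(contrasts, 30.0, 0.15, 2.0, 2.0)

# Poisson spike counts over a 1 s window, averaged across trials
n_trials = 50
counts = rng.poisson(rates, size=(n_trials, contrasts.size))
mean_rates = counts.mean(axis=0)

# bounded least-squares refit; boundaries keep parameters physiological
p0 = [mean_rates.max(), 0.2, 2.0, mean_rates.min()]
bounds = ([0.0, 0.01, 0.5, 0.0], [200.0, 1.0, 6.0, 50.0])
(r_max_hat, c50_hat, n_hat, r0_hat), _ = curve_fit(
    naka_rushton, contrasts, mean_rates, p0=p0, bounds=bounds)
```

Comparing the fitted parameters (`c50_hat`, `r_max_hat`, ...) to the generating values, across many such simulations, is what allows the boundary and trial-count recommendations described above.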
Results: A model was proposed to assess the goodness of fit of the contrast response function. Various parameters and their influence on the model were tested. Other similar models were proposed, and their performance was assessed and compared with the previous ones. The fit was optimized to provide semi-strict guidelines for scientists to follow in order to maximize their efficiency when measuring the contrast tuning of a neuron.
Conclusions: The aim of this study was to determine the optimal testing parameters for the neuronal response to visual gratings of varying luminance contrast, i.e., the CRF. As recording technologies become more powerful, experimenters must make choices. With a strong model, robust boundaries, and well-chosen experimental conditions, the best fit to the function can lead to more efficient analyses and stronger functional models.
Background: It is well known that the pulvinar establishes reciprocal connections with areas of the visual cortex, allowing the transfer of cortico-cortical signals through transthalamic pathways. However, the exact function of these signals in coordinating activity across the visual cortical hierarchy remains largely unknown. In anesthetized cats, we explored whether pulvinar inactivation affects the dynamics of interactions between the primary visual cortex (a17) and area 21a, a higher-order visual cortical area, as well as between layers within each cortical area. We found that pulvinar inactivation modifies the local field potential (LFP) coherence between a17 and 21a during visual stimulation. In addition, Granger causality analysis showed that functional connectivity changed across visual areas and between cortical layers during pulvinar inactivation, the effects being stronger between layers of the same area. We observed that the effects of pulvinar inactivation arise at two different epochs of the visual response, i.e., at the early and late components. The proportion of feedback and feedforward functional events was higher during the early and the late phases of the response, respectively. We also found that pulvinar inactivation facilitates the feedback propagation of gamma oscillations from 21a to a17, a transmission that was predominant during the late response. At the temporal level, pulvinar inactivation also delayed the signals between a17 and 21a, depending on the source and target cortical layer. Thus, the pulvinar can not only modify the functional connectivity within and between cortical areas at the laminar level but may also control the temporal dynamics of neuronal activity across the visual cortical hierarchy.
Methods: In vivo electrophysiological recordings of visual cortical areas 17 and 21a were performed in anesthetized cats, and the local field potentials were analysed with time-series methods (i.e., Fourier analysis, coherence, cross-correlation, and Granger causality).
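As an illustration of one of these measures, inter-areal LFP coherence can be computed with standard tools. A self-contained sketch on synthetic signals; the shared 40 Hz drive stands in for a gamma-band interaction and is our own assumption:

```python
import numpy as np
from scipy import signal

fs = 1000.0                        # sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(2)

# two synthetic LFPs sharing a gamma-band (40 Hz) drive plus private noise
drive = np.sin(2.0 * np.pi * 40.0 * t)
lfp_a17 = drive + rng.normal(0.0, 1.0, t.size)
lfp_21a = drive + rng.normal(0.0, 1.0, t.size)

# magnitude-squared coherence via Welch's method
f, cxy = signal.coherence(lfp_a17, lfp_21a, fs=fs, nperseg=1024)
gamma_coh = cxy[np.argmin(np.abs(f - 40.0))]   # high: shared drive
other_coh = cxy[np.argmin(np.abs(f - 200.0))]  # low: independent noise
```

Running the same computation on pre- versus during-inactivation epochs is the kind of comparison that reveals the coherence changes reported in the Results.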
Results: Inactivation of the thalamic nucleus modified the dynamics of areas 17 and 21a. The changes observed depended on the source and target cortical layer. The effects of pulvinar inactivation arose at two different epochs of the visual response.
Conclusions: The pulvinar modifies the functional connectivity within and between cortical areas at the laminar level and may also control the temporal dynamics of neuronal activity across the visual cortical hierarchy.
Background: For years, studies using several animal models have highlighted the predominant role of the primary visual area in visual information processing. Its six cortical layers show morphological, hodological, and physiological differences, yet their roles in the integration of visual contrast, and the messages they send to other brain regions, have been poorly explored. Given that cortical layers have distinct properties, this study aims to characterize these differences and how they are affected by changing visual contrast.
Methods: A linear multi-channel electrode was placed in the primary visual cortex (V1) of the anesthetized mouse to record neuronal activity across the different cortical layers. The laminar position of the electrode was verified in real time by measuring the current source density (CSD) and the multi-unit activity (MUA), and confirmed post-mortem by histological analysis. Drifting gratings varying in contrast enabled the measurement of the firing rate of neurons throughout the layers. We fitted these data with the Naka-Rushton equation, which yielded the contrast response function (CRF) of the neurons.
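The laminar CSD check rests on the second spatial derivative of the LFP along the probe. A minimal sketch of the standard second-difference estimate; the 100 µm contact spacing is an assumption for illustration, and the tissue conductivity factor is omitted:

```python
import numpy as np

def csd_second_difference(lfp, spacing_m=100e-6):
    """One-dimensional CSD estimate, proportional to the negative second
    spatial difference of the LFP across equally spaced laminar contacts
    (conductivity factor omitted).
    lfp: (n_channels, n_samples) -> (n_channels - 2, n_samples)."""
    return -(lfp[2:] - 2.0 * lfp[1:-1] + lfp[:-2]) / spacing_m ** 2

# toy check: a quadratic depth profile yields a constant CSD
depth = np.arange(5, dtype=float)            # channel index as "depth"
lfp = (depth ** 2)[:, None] * np.ones((1, 10))
csd = csd_second_difference(lfp)
```

In practice the sink/source pattern of the stimulus-evoked CSD (e.g., the earliest sink marking layer IV input) is what identifies the contacts' laminar positions in real time.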
Results: The analysis revealed that the baseline activity as well as the rate of change of neural discharges (the slope of the CRF) had a positive correlation across the cortical layers. In addition, we found a trend between the cortical position and the contrast evoking the semi-saturation of the activity. A significant difference in the maximum discharge rate was also found between layers II/III and IV, as well as between layers II/III and V.
Conclusions: Since layers II/III and V process visual contrast differently, our results suggest that higher cortical visual areas, as well as subcortical regions, receive different information regarding a change in visual contrast. Thus, a given contrast may be processed differently across the different areas of the visual cortex.