Background: Visual salience computed using algorithmic procedures has been shown to predict eye movements in a number of contexts. However, despite calls to incorporate computationally defined visual salience metrics as a means of assessing the effectiveness of advertisements, few studies have applied these techniques in a marketing context. The present study sought to determine the impact of visual salience and brand knowledge on eye-movement patterns and buying preferences.
Methods: Participants (N=38) viewed 54 pairs of products presented on the left and right sides of a blank white screen. For each pair, one product was a known North American product, such as Fresca®, and the other was an unknown British product of the same category, such as Irn-Bru®. Participants were asked to select which product they would prefer to buy while their eye movements were recorded. Salience was computed using Itti & Koch's [2001] computational model of bottom-up salience. A product was defined as highly salient if the majority of the first five predicted fixations fell within its region.
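The classification rule can be made concrete in code. Below is a minimal Python sketch of the majority-of-first-five-fixations criterion described above; the fixation coordinates and product bounding box are hypothetical placeholders, and in the study the fixation sequence would come from the Itti & Koch saliency model rather than being hand-specified.

```python
# Sketch: classify a product as "highly salient" when the majority of the
# first five model-predicted fixations land inside its screen region.
# Coordinates and the bounding box are hypothetical illustrations.

def is_highly_salient(predicted_fixations, product_box, n_fixations=5):
    """predicted_fixations: list of (x, y) points in predicted order.
    product_box: (x_min, y_min, x_max, y_max) region of the product."""
    x_min, y_min, x_max, y_max = product_box
    first = predicted_fixations[:n_fixations]
    hits = sum(1 for (x, y) in first
               if x_min <= x <= x_max and y_min <= y <= y_max)
    return hits > len(first) / 2  # strict majority

# Example: three of the first five fixations fall on the left-hand product.
fixations = [(120, 340), (150, 360), (700, 300), (130, 355), (680, 310)]
left_product_box = (80, 300, 200, 420)
print(is_highly_salient(fixations, left_product_box))  # True
```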
Results: Results showed that participants were much more likely to prefer to buy known products, and tentative evidence suggested that participants had longer total dwell times when looking at unknown products. Salience appeared to have little or no effect on preference for a product, nor did it predict total dwell time or time to first fixation. There also appeared to be no interaction between knowledge of a product and visual salience on any of the measures analyzed.
Conclusions: The results indicate that product salience may not be a useful predictor of attention under the constraints of the present experiment. Future studies could use a different operational definition of visual salience that might be more predictive of visual attention. Furthermore, a more fine-grained analysis of product familiarity based on survey data may reveal patterns obscured by the definitional constraints of the present study.
Background: The ability to track objects as they move is critical for successful interaction with objects in the world. The multiple object tracking (MOT) paradigm has demonstrated that, within limits, our visual attention capacity allows us to track multiple moving objects among distractors. Very little is known about dynamic auditory attention and the role of multisensory binding in attentional tracking. Here, we examined whether dynamic sounds congruent with visual targets could facilitate tracking in a 3D-MOT task.
Methods: Participants tracked one or multiple target-spheres among identical distractor-spheres during 8 seconds of movement in a virtual cube. In the visual condition, targets were identified with a brief colour change, but were then indistinguishable from the distractors during the movement. In the audiovisual condition, the target-spheres were accompanied by a sound that moved congruently with the change in the target's position. Sound amplitude varied with distance from the observer, and inter-aural amplitude difference varied with azimuth.
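The sound rendering is described only qualitatively. One plausible realization, shown below as a Python sketch, combines inverse-distance amplitude attenuation with constant-power stereo panning so that the inter-aural amplitude difference grows with azimuth; the specific gain laws are assumptions, not the study's implementation.

```python
import math

# Sketch of distance- and azimuth-based amplitude cues. The exact gain
# laws used in the study are not reported; inverse-distance attenuation
# and constant-power panning are common choices and are assumed here.

def stereo_gains(distance, azimuth_deg, ref_distance=1.0):
    """distance: metres from the observer.
    azimuth_deg: -90 (far left) .. +90 (far right)."""
    overall = ref_distance / max(distance, ref_distance)  # amplitude ~ 1/d
    # Map azimuth onto a pan angle; constant-power panning makes the
    # inter-aural amplitude difference grow with |azimuth|.
    pan = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    left = overall * math.cos(pan)
    right = overall * math.sin(pan)
    return left, right

# A target drifting from centre-left to the right and away from the observer.
for d, az in [(1.0, -45), (1.5, 0), (2.0, 45)]:
    l, r = stereo_gains(d, az)
    print(f"d={d:.1f} m, az={az:+d} deg -> L={l:.2f}, R={r:.2f}")
```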
Results: Results with one target showed that performance was better in the audiovisual condition, which suggests that congruent sounds can facilitate attentional visual tracking. However, with multiple targets, the sounds did not facilitate tracking.
Conclusions: This suggests that audiovisual binding may not be possible when attention is divided between multiple targets.
Background: Research suggests that the analysis of facial expressions in the healthy brain takes place in the superior temporal sulcus and the fusiform gyrus, mostly in the right hemisphere, approximately 170 ms after the presentation of a facial expression. Some researchers argue that a fast pathway through the amygdala allows automatic and early emotional processing around 90 ms after stimulation. This processing would occur subconsciously, even before the stimulus is consciously perceived, and can be probed by presenting stimuli briefly outside the fovea. The present study aimed to identify the neural correlates of a peripheral and simultaneous presentation of emotional expressions through a frequency-tagging paradigm.
Methods: Presenting emotional facial expressions at a specific frequency induces in the visual cortex a stable, precise response at the presentation frequency [i.e., a steady-state visual evoked potential (ssVEP)] that can be used as a frequency tag to follow the cortical processing of that stimulus. Here, the use of distinct stimulation frequencies allowed us to tag the different facial expressions presented simultaneously and to obtain a reliable cortical response associated with (I) each of the emotions and (II) the two repeated presentation rates (1/0.170 s ≈ 5.8 Hz; 1/0.090 s ≈ 10.8 Hz). To identify the regions involved in emotional discrimination, we subtracted the brain activity induced by the rapid presentation of six emotional expressions from the activity induced by the presentation of the same emotion (reduced by neural adaptation). The results were compared according to the hemisphere in which attention was solicited, the emotion, and the stimulation frequency.
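Frequency tagging rests on reading out the spectral amplitude at each stimulation frequency. As a point of reference for how such a tag is commonly quantified, here is a minimal Python sketch that estimates ssVEP signal-to-noise ratio as the amplitude in the tag's frequency bin divided by the mean amplitude of neighbouring bins; the sampling rate, epoch length, and synthetic signal are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

# Sketch: estimate ssVEP signal-to-noise ratio at a tagging frequency as
# the spectral amplitude in the tag bin divided by the mean amplitude of
# surrounding bins. Sampling rate, epoch length, and the synthetic signal
# are illustrative assumptions only.

fs = 512                      # sampling rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)  # 20 s epoch
tag = 5.8                     # tagging frequency (Hz)
eeg = 0.5 * np.sin(2 * np.pi * tag * t) + np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr_at(f0, n_neighbours=10, skip=1):
    """Amplitude at f0 relative to the mean of n_neighbours bins on each
    side, skipping the bins immediately adjacent to the tag."""
    idx = np.argmin(np.abs(freqs - f0))
    lo = spectrum[idx - skip - n_neighbours: idx - skip]
    hi = spectrum[idx + skip + 1: idx + skip + 1 + n_neighbours]
    return spectrum[idx] / np.concatenate([lo, hi]).mean()

print(f"SNR at {tag} Hz: {snr_at(tag):.1f}")
```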
Results: The signal-to-noise ratio of the cerebral oscillations associated with the processing of fearful expressions was stronger in regions specific to emotional processing when the faces were presented in the participants' peripheral vision, without their awareness. In addition, peripheral processing of fear at 10.8 Hz was associated with greater activation within the gamma 1 and gamma 2 frequency bands in the expected regions (frontotemporal sites and T6), as well as desynchronization in the alpha band over temporal regions. This modulation of spectral power was independent of attentional demand.
Conclusions: These results suggest that fearful expressions presented in peripheral vision and outside the focus of attention elicit an increase in brain activity, especially in the temporal lobe. The localization of this activity, as well as the optimal stimulation frequency found for this facial expression, suggests that it is processed by the fast magnocellular pathway.