Summary
Visual spatial attention is a form of visual attention that involves directing attention to a location in space. Like its temporal counterpart, visual temporal attention, such attention modules have been widely implemented in video analytics in computer vision to improve performance and to provide human-interpretable explanations of deep learning models.

Spatial attention allows humans to selectively process visual information by prioritizing an area within the visual field: a region of space is selected for attention, and the information within that region then receives further processing. Research shows that when spatial attention is evoked, an observer is typically faster and more accurate at detecting a target that appears in an expected location than in an unexpected one. Attention is guided even more quickly to unexpected locations when these locations are made salient by external visual inputs (such as a sudden flash). According to the V1 Saliency Hypothesis, the human primary visual cortex plays a critical role in such exogenous attentional guidance.

Spatial attention is distinct from other forms of visual attention, such as object-based attention and feature-based attention. Those forms select an entire object or a specific feature of an object regardless of its location, whereas spatial attention selects a specific region of space, and the objects and features within that region are then processed.

A key property of visual attention is that it can be selected on the basis of spatial location, and spatial cueing experiments have been used to assess this type of selection. In Posner's cueing paradigm, the task is to detect a target that can appear in one of two locations and to respond as quickly as possible.
At the start of each trial, a cue is presented that either indicates the location of the target (valid cue) or indicates the incorrect location, thus misdirecting the observer (invalid cue).
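The validity effect described above can be sketched as a small simulation. This is an illustrative model, not empirical data: the baseline reaction time, the attentional benefit and cost, and the noise level are all assumed values chosen only to show how the valid/invalid comparison is computed.

```python
import random

random.seed(0)

def simulate_trial(cue_valid: bool) -> float:
    """Hypothetical reaction time (ms) for one trial of a Posner cueing task."""
    base_rt = 300.0                        # assumed baseline RT in ms
    shift = -30.0 if cue_valid else 40.0   # assumed attentional benefit / cost
    noise = random.gauss(0, 20)            # trial-to-trial variability
    return base_rt + shift + noise

valid_rts = [simulate_trial(True) for _ in range(1000)]
invalid_rts = [simulate_trial(False) for _ in range(1000)]

mean_valid = sum(valid_rts) / len(valid_rts)
mean_invalid = sum(invalid_rts) / len(invalid_rts)

# The validity effect is the RT difference between invalid and valid trials.
print(f"valid: {mean_valid:.0f} ms, invalid: {mean_invalid:.0f} ms")
```

Under these assumptions, targets at validly cued locations are detected roughly 70 ms faster on average, mirroring the faster responses at expected locations reported in the cueing literature.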
Related courses (4)
ENG-644: Teaching STEM: a problem solving approach
Problem solving is a core engineering skill. This course explores relevant heuristics, epistemologies, metacognitive skills and evidence-informed teaching strategies for developing problem solving skills.
ENG-644(a): Teaching STEM: a problem solving approach - FALL
Problem solving is a core engineering skill. This course explores relevant heuristics, epistemologies, metacognitive skills and evidence-informed teaching strategies for developing problem solving skills.
ENG-644(b): Teaching STEM: a problem solving approach - SPRING
Problem solving is a core engineering skill. This course explores relevant heuristics, epistemologies, metacognitive skills and evidence-informed teaching strategies for developing problem solving skills.
Related publications (87)
Related concepts (4)
Visual temporal attention
Visual temporal attention is a special case of visual attention that involves directing attention to a specific instant in time. Like its spatial counterpart, visual spatial attention, these attention modules have been widely implemented in video analytics in computer vision to improve performance and to provide human-interpretable explanations of deep learning models.
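The idea of a temporal attention module can be sketched as a softmax weighting over per-frame features. This is a minimal pure-Python illustration under stated assumptions: real modules learn the scoring function from data, whereas here the relevance scores and toy feature vectors are supplied by hand.

```python
import math

def softmax(scores):
    """Normalize raw scores into attention weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def temporal_attention(frame_features, scores):
    """Pool per-frame feature vectors, weighted by attention over time."""
    weights = softmax(scores)
    dim = len(frame_features[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, frame_features))
              for d in range(dim)]
    return pooled, weights

# Toy 2-D features for three video frames; the third frame is assumed
# most relevant, so it receives the highest raw score.
frames = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
scores = [0.1, 0.1, 2.0]
pooled, weights = temporal_attention(frames, scores)
```

The pooled vector is dominated by the highly scored frame, which is how an attention module lets downstream layers focus on the most informative instants of a video.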
Activity recognition
Activity recognition aims to recognize the actions and goals of one or more agents from a series of observations on the agents' actions and the environmental conditions. Since the 1980s, this research field has captured the attention of several computer science communities due to its strength in providing personalized support for many different applications and its connection to many different fields of study such as medicine, human-computer interaction, or sociology.
Convolutional neural network
A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns feature engineering by itself via filter (or kernel) optimization. Vanishing and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, a fully connected layer processing an image of 100 × 100 pixels would require 10,000 weights for each neuron.
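The parameter-count contrast above can be checked with a short calculation. The 5 × 5 kernel size is an illustrative assumption (common in practice but not stated in the text); the fully connected figure follows directly from the 100 × 100 image.

```python
# One fully connected neuron needs a weight per input pixel.
image_h, image_w = 100, 100
fc_weights_per_neuron = image_h * image_w        # 100 * 100 = 10,000 weights

# One convolutional filter shares a small kernel across all positions.
kernel_h, kernel_w = 5, 5                        # assumed example kernel size
conv_weights_per_filter = kernel_h * kernel_w    # 25 shared weights

print(fc_weights_per_neuron, conv_weights_per_filter)  # 10000 25
```

Weight sharing is what gives the convolutional layer its regularizing effect: far fewer connections, reused at every spatial location.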