SVGC-AVA: 360-Degree Video Saliency Prediction With Spherical Vector-Based Graph Convolution and Audio-Visual Attention
Visual search is a good illustration of how the brain coordinates a variety of functions, such as the extraction of visual cues from the scene, coordination of eye movements, accumulation of visual information, and visual recognition. It has an important ...
Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede consciou ...
The obvious symptoms of schizophrenia are of a cognitive and psychopathological nature. However, schizophrenia also affects visual processing, which becomes particularly evident when stimuli are presented for short durations and are followed by a masking stim ...
Various computer vision tasks require a summary of visually important regions in an image. Thus, developing simple yet accurate salient region detection algorithms has become an important research topic. The currently best performing state-of-the-art sali ...
We propose a robust and accurate method to extract the centerlines and scale of tubular structures in 2D images and 3D volumes. Existing techniques rely either on filters designed to respond to ideal cylindrical structures, which lose accuracy when the lin ...
Agency plays an important role in self-recognition from motion. Here, we investigated whether our own movements benefit from preferential processing even when the task is unrelated to self-recognition, and does not involve agency judgments. Participants se ...
In this paper we present a multimodal analysis of emergent leadership in small groups using audio-visual features and discuss our experience in designing and collecting a data corpus for this purpose. The ELEA Audio-Visual Synchronized corpus (ELEA AVS) wa ...
To cope with the continuous stream of incoming input, the visual system has to group information across space and time. Usually, spatial and temporal grouping are investigated separately. However, recent findings revealed that these two grouping mechanis ...
Association for Research in Vision and Ophthalmology, 2010
Integration of multimodal information is essential for the integrative response of the brain and is thought to be accomplished mostly in sensory association areas. However, available evidence in humans and monkeys indicates that this process begins already ...
The human brain first analyzes a visual object with basic feature detectors. These features are integrated in subsequent stages of the visual hierarchy. Generally, it is assumed that the information about these basic features is lost once the information is s ...