Ask any question about EPFL courses, lectures, exercises, research, news, etc., or try the example questions below.
DISCLAIMER: The Graph Chatbot is not programmed to provide explicit or categorical answers to your questions. Rather, it transforms your questions into API requests that are distributed across the various IT services officially administered by EPFL. Its sole purpose is to collect and recommend relevant references to content that you can explore to help answer your questions.
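As a rough, purely illustrative sketch of the fan-out pattern described above, the snippet below sends a question to a handful of search APIs and merges the hits into a single reference list; the endpoints, query parameters, and response shape are hypothetical placeholders, not the chatbot's actual interfaces.

```python
# Illustrative only: fan a question out to several service search APIs and
# collect the returned references. All URLs and parameters are made up.

import requests

SERVICE_ENDPOINTS = [
    "https://example.epfl.ch/courses/search",
    "https://example.epfl.ch/publications/search",
    "https://example.epfl.ch/news/search",
]

def collect_references(question: str, limit: int = 5):
    references = []
    for url in SERVICE_ENDPOINTS:
        resp = requests.get(url, params={"q": question, "limit": limit}, timeout=10)
        resp.raise_for_status()
        # Assumed response shape: {"results": [...]}
        references.extend(resp.json().get("results", []))
    return references
```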
Based on our recent work on the development of a trust model for recommender agents and a qualitative survey, we explore the potential of building users' trust with explanation interfaces. We present the major results from the survey, which provided a road ...
This document describes our work to provide a video database of people in real situations, with their head pose continuously annotated through time. The head poses were annotated using a magnetic 3D location and orientation tracker, the Flock of Birds. The ...
This paper presents an obstacle detection system for visually impaired people. Users can be alerted to nearby obstacles in range while traveling through their environment. The system we propose detects the nearest obstacle via a stereoscopic sonar system and sen ...
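As a rough illustration of how such a proximity alert might work in code, here is a minimal sketch; the simulated sensor readings, distance threshold, polling rate, and print-based feedback are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of a nearest-obstacle alert loop for a two-receiver
# (stereoscopic) sonar setup. Simulated readings stand in for real sensor I/O.

import random
import time

ALERT_DISTANCE_M = 1.5  # warn when the nearest obstacle is closer than this

def read_sonar_pair():
    """Stand-in for reading the left and right sonar ranges (in metres)."""
    return random.uniform(0.3, 4.0), random.uniform(0.3, 4.0)

def run(cycles: int = 20) -> None:
    for _ in range(cycles):
        left_m, right_m = read_sonar_pair()
        nearest = min(left_m, right_m)  # closest echo approximates the nearest obstacle
        if nearest < ALERT_DISTANCE_M:
            print(f"Obstacle at {nearest:.2f} m")  # a real system would use audio or vibro-tactile cues
        time.sleep(0.1)  # ~10 Hz polling

if __name__ == "__main__":
    run()
```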
This paper provides an overview of a multi-modal wearable computer system, SNAP&TELL. The system performs real-time gesture tracking, combined with audio-based control commands, in order to recognize objects in an environment, including outdoor landmarks. ...
We propose a spectral model for predicting the reflectance and transmittance of recto-verso halftone prints. A recto-verso halftone print is modeled as a diffusing substrate surrounded by two inked interfaces in contact with air (or with another ...
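To make the layered picture concrete, here is a deliberately simplified illustration: reflected light crosses the front ink layer, bounces off the diffusing substrate, and crosses an ink layer again, while transmitted light crosses both inked sides. The spectra below are made-up placeholders and the formulas are a textbook-style simplification, not the prediction model proposed in the paper.

```python
# Simplified layered-print illustration (not the paper's model): per-wavelength
# reflectance and transmittance from ink and substrate factors.

import numpy as np

wavelengths_nm = np.arange(400, 701, 10)           # sample grid over the visible range
t_front = np.full(wavelengths_nm.shape, 0.80)      # front ink transmittance (made up)
t_back = np.full(wavelengths_nm.shape, 0.90)       # back ink transmittance (made up)
r_substrate = np.full(wavelengths_nm.shape, 0.85)  # substrate body reflectance (made up)
t_substrate = np.full(wavelengths_nm.shape, 0.10)  # substrate transmittance (made up)

# Front-side reflectance: through the front ink, off the substrate, back out.
reflectance = t_front ** 2 * r_substrate

# Overall transmittance: through the front ink, the substrate, and the back ink.
transmittance = t_front * t_substrate * t_back
```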
Nowadays, computer interaction is mostly done using dedicated devices. Yet gestures are an easy means of expression between humans that could be used to communicate with computers in a more natural manner. Most of the current research on hand gesture recogn ...
A natural audio-visual interface between a human user and a machine requires understanding of the user's audio-visual commands. This does not necessarily require full speech and image recognition. It does require, just as the interaction with any working animal does ...
In this paper, we extend the Hopfield Associative Memory to store multiple sequences of varying duration. We apply the model to learning, recognizing, and encoding a set of human gestures. We systematically measure the performance of the model against n ...
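For intuition, the sketch below shows a generic Hopfield-style memory with an asymmetric Hebbian term that links each bipolar pattern to its successor, so a stored sequence can be replayed from its first element. This is a textbook-style illustration of sequence storage, not the extension for variable-duration sequences described in the paper.

```python
import numpy as np

def store_sequence(patterns):
    """Asymmetric Hebbian weights linking each +/-1 pattern to its successor."""
    n = patterns[0].size
    w = np.zeros((n, n))
    for current, nxt in zip(patterns[:-1], patterns[1:]):
        w += np.outer(nxt, current) / n
    return w

def replay(w, start, steps):
    """Iterate the network from a cue pattern, yielding the recalled sequence."""
    state = start.copy()
    recalled = [state.copy()]
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1  # break ties deterministically
        recalled.append(state.copy())
    return recalled

# Toy usage: three random bipolar patterns form one short "gesture" sequence.
rng = np.random.default_rng(0)
seq = [rng.choice([-1, 1], size=64) for _ in range(3)]
weights = store_sequence(seq)
recall = replay(weights, seq[0], steps=2)
```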
Estimating the wandering visual focus of attention (WVFOA) for multiple people is an important problem with many applications in human behavior understanding. One such application, addressed in this paper, monitors the attention of passers-by to outd ...
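One simple geometric building block for this kind of attention monitoring is checking whether an estimated head direction points toward a known target location. The sketch below is a generic illustration under that assumption, not the method of the paper.

```python
import numpy as np

def is_attending(head_pos, head_dir, target_pos, max_angle_deg=30.0):
    """True if the head direction points within max_angle_deg of the target."""
    to_target = np.asarray(target_pos, float) - np.asarray(head_pos, float)
    to_target /= np.linalg.norm(to_target)
    head_dir = np.asarray(head_dir, float)
    head_dir /= np.linalg.norm(head_dir)
    cos_angle = float(np.clip(np.dot(head_dir, to_target), -1.0, 1.0))
    return np.degrees(np.arccos(cos_angle)) <= max_angle_deg

# Toy usage: a person two metres from a target, head turned roughly toward it.
print(is_attending(head_pos=(0, 0), head_dir=(1, 0), target_pos=(2, 0.3)))
```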
In recent years, the advent of robust tracking systems has enabled behavioral analysis of individuals based on their trajectories. An analysis method based on a Point Distribution Model (PDM) is presented here. It is an unsupervised modeling of the traject ...
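A Point Distribution Model is essentially principal component analysis over vectors of corresponding points; the sketch below applies that generic idea to trajectories resampled to a fixed length. The resampling length and the toy data are placeholders, not details from the paper.

```python
import numpy as np

def resample(trajectory, n_points=20):
    """Resample a variable-length (x, y) trajectory to n_points and flatten it."""
    traj = np.asarray(trajectory, float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n_points)
    x = np.interp(t_new, t_old, traj[:, 0])
    y = np.interp(t_new, t_old, traj[:, 1])
    return np.concatenate([x, y])

def fit_pdm(trajectories, n_modes=3, n_points=20):
    """Mean shape plus the principal modes of variation (a basic PDM)."""
    data = np.stack([resample(t, n_points) for t in trajectories])
    mean = data.mean(axis=0)
    centered = data - mean
    # Singular vectors of the centered data give the modes of variation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

# Toy usage: describe a new trajectory by a few mode coefficients.
rng = np.random.default_rng(0)
tracks = [np.cumsum(rng.normal(size=(rng.integers(30, 60), 2)), axis=0) for _ in range(25)]
mean, modes = fit_pdm(tracks)
coeffs = modes @ (resample(tracks[0]) - mean)
```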