Kinect is a line of motion sensing input devices produced by Microsoft and first released in 2010. The devices generally contain RGB cameras, and infrared projectors and detectors that map depth through either structured light or time of flight calculations, which can in turn be used to perform real-time gesture recognition and body skeletal detection, among other capabilities. They also contain microphones that can be used for speech recognition and voice control.
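The time-of-flight principle mentioned above can be sketched in a few lines: depth is recovered from how long an emitted infrared pulse takes to bounce back, using depth = (c × t) / 2. This is a minimal illustration of the formula, not Kinect's actual pipeline.

```python
# Sketch of time-of-flight depth calculation: an infrared pulse travels to
# the object and back, so depth is half the round-trip distance.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(round_trip_seconds: float) -> float:
    """Depth in metres from a measured round-trip time of light."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of about 13.34 nanoseconds corresponds to roughly 2 m.
print(round(tof_depth(13.34e-9), 2))  # 2.0
```

Real sensors measure phase shifts of modulated light rather than timing individual pulses, but the underlying geometry is the same.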
Kinect was originally developed as a motion controller peripheral for Xbox video game consoles, distinguished from competitors (such as Nintendo's Wii Remote and Sony's PlayStation Move) by not requiring physical controllers. The first-generation Kinect was based on technology from the Israeli company PrimeSense and was unveiled at E3 2009 as a peripheral for Xbox 360, codenamed "Project Natal". It was first released on November 4, 2010, and went on to sell eight million units in its first 60 days of availability. The majority of the games developed for Kinect were casual, family-oriented titles, which helped to attract new audiences to Xbox 360 but did not lead to wide adoption among the console's existing userbase.
As part of the 2013 unveiling of Xbox 360's successor, Xbox One, Microsoft unveiled a second-generation version of Kinect with improved tracking capabilities. Microsoft also announced that Kinect would be a required component of the console, and that it would not function unless the peripheral is connected. The requirement proved controversial among users and critics due to privacy concerns, prompting Microsoft to backtrack on the decision. However, Microsoft still bundled the new Kinect with Xbox One consoles upon their launch in November 2013. A market for Kinect-based games still did not emerge after the Xbox One's launch; Microsoft would later offer Xbox One hardware bundles without Kinect included, and later revisions of the console removed the dedicated ports used to connect it (requiring a powered USB adapter instead).
Machine learning is a sub-field of Artificial Intelligence that allows computers to learn from data, identify patterns and make predictions. As a fundamental building block of the Computational Thinking ...
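The idea of "learning from data to make predictions" can be illustrated with one of the simplest possible learners, a 1-nearest-neighbour classifier: it predicts the label of whichever training point is closest. The data here is invented purely for illustration.

```python
# Minimal sketch of learning from data: a 1-nearest-neighbour classifier.
# It memorises labelled training points and predicts the label of the
# closest one for a new query point.
def predict(train, point):
    # train: list of ((x, y), label) pairs; point: an (x, y) query
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(train, key=lambda example: dist2(example[0], point))[1]

train = [((0.0, 0.0), "A"), ((1.0, 1.0), "A"), ((5.0, 5.0), "B")]
print(predict(train, (0.5, 0.2)))  # nearest training point is labelled "A"
```

Practical systems use far richer models, but the pattern — fit to labelled examples, then generalise to unseen inputs — is the same.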
Motion detection is the process of detecting a change in the position of an object relative to its surroundings or a change in the surroundings relative to an object. It can be achieved by either mechanical or electronic methods. When it is done by natural organisms, it is called motion perception.
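A common electronic approach to the change detection described above is frame differencing: a pixel is flagged as "motion" when its brightness changes by more than a threshold between two frames. The tiny grayscale frames below are invented for illustration.

```python
# Sketch of motion detection by frame differencing over two grayscale
# frames (lists of pixel rows): flag pixels whose brightness changed by
# more than a threshold.
def motion_mask(prev, curr, threshold=20):
    return [
        [abs(c - p) > threshold for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev, curr)
    ]

prev = [[10, 10], [10, 10]]
curr = [[10, 90], [10, 10]]  # one pixel brightened: something moved there
print(motion_mask(prev, curr))  # [[False, True], [False, False]]
```

Real detectors add noise filtering and background modelling, but differencing against a reference frame is the core of the electronic method.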
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. It is a subdiscipline of computer vision. Gestures can originate from any bodily motion or state, but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition, since both are forms of expression. Users can make simple gestures to control or interact with devices without physically touching them.
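One classical algorithmic approach to interpreting gestures is template matching with dynamic time warping (DTW), which compares an observed trajectory against stored templates while tolerating differences in speed. The 1-D trajectories and gesture names below are invented; real systems use richer features and learned models.

```python
# Hedged sketch: classify a 1-D gesture trajectory by finding the stored
# template with the smallest dynamic-time-warping distance to it.
def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed alignments.
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

templates = {"swipe_up": [0, 1, 2, 3], "swipe_down": [3, 2, 1, 0]}
observed = [0, 0, 1, 2, 2, 3]  # a slower swipe-up, sampled unevenly
print(min(templates, key=lambda name: dtw(templates[name], observed)))  # swipe_up
```

DTW's tolerance to timing variation is what makes it suitable here: the same gesture performed faster or slower still aligns cheaply with its template.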
In video games and entertainment systems, a motion controller is a type of game controller that uses accelerometers or other sensors to track motion and provide input. Accelerometer-based motion controllers were popularized from 2006 onward by the Wii Remote for Nintendo's Wii console, which uses accelerometers to detect its approximate orientation and acceleration, and also contains an image sensor, so it can be used as a pointing device.
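When a controller is held still, its accelerometer reads the gravity vector, from which an approximate orientation (pitch and roll) can be derived. The sketch below uses the standard tilt-from-gravity formulas; the axis conventions are an assumption, not those of any specific controller.

```python
# Sketch: approximate pitch and roll of a stationary controller from a
# 3-axis accelerometer reading of gravity (in units of g).
import math

def tilt_degrees(ax, ay, az):
    # Pitch: rotation about the lateral axis; roll: rotation about the
    # longitudinal axis. Conventions here are an illustrative assumption.
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Controller lying flat: gravity entirely on the z axis.
pitch, roll = tilt_degrees(0.0, 0.0, 1.0)
print(round(pitch, 1), round(roll, 1))  # 0.0 0.0
```

Note that gravity alone cannot distinguish rotation about the vertical axis (yaw), which is why the Wii Remote supplements the accelerometer with an image sensor for pointing.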
Explores predictive models and trackers for autonomous vehicles, covering object detection, tracking challenges, neural network-based tracking, and 3D pedestrian localization.
Explores deep learning for autonomous vehicles, covering perception, action, and social forecasting in the context of sensor technologies and ethical considerations.
We propose a test-time adaptation for 6D object pose tracking that learns to adapt a pre-trained model to track the 6D pose of novel objects. We consider the problem of 6D object pose tracking as a 3D keypoint detection and matching task and present a mo ...
This bachelor project was conducted at the Experimental Museology Laboratory (eM+) at EPFL, which focuses on immersive technologies and visualization systems. The project aimed to enhance the Panorama+, a 360-degree stereoscopic interactive visualization system, b ...
2024
Current transformer-based skeletal action recognition models tend to focus on a limited set of joints and low-level motion patterns to predict action classes. This results in significant performance degradation under small skeleton perturbations or changin ...