Motion capture: Motion capture (sometimes referred to as mo-cap or mocap, for short) is the process of recording the movement of objects or people. It is used in military, entertainment, sports, and medical applications, and for the validation of computer vision and robotics. In filmmaking and video game development, it refers to recording the actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes the face and fingers or captures subtle expressions, it is often referred to as performance capture.
Finger tracking: In the fields of gesture recognition and image processing, finger tracking is a high-resolution technique, developed in 1969, used to determine the consecutive positions of a user's fingers and hence represent objects in 3D. The technique is also used as a computer input tool, acting as an external device much like a keyboard or a mouse. Finger tracking systems focus on user-data interaction: the user interacts with virtual data by handling, with the fingers, the volume of the 3D object to be represented.
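As a rough illustration of finger-driven interaction, the sketch below detects a pinch between thumb and index fingertips from tracked 3D positions; the function name, coordinate convention, and 2 cm threshold are illustrative assumptions, not part of any particular tracking system.

```python
# Minimal sketch (not any particular tracking SDK): detecting a "pinch"
# gesture from tracked fingertip positions, a common way finger tracking
# drives interaction with a 3D object. Coordinates are assumed to be in
# metres; the 0.02 m threshold is an illustrative choice.
from math import dist

def is_pinching(thumb_tip, index_tip, threshold=0.02):
    """Return True when thumb and index fingertips are close enough to count as a pinch."""
    return dist(thumb_tip, index_tip) < threshold

# Fingertip positions as (x, y, z) in metres, reported by a hypothetical tracker.
thumb = (0.10, 0.05, 0.30)
index = (0.11, 0.05, 0.30)
if is_pinching(thumb, index):
    print("Pinch detected: grab the 3D object under the fingertips")
```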
Motion blur: Motion blur is the apparent streaking of moving objects in a photograph or a sequence of frames, such as a film or animation. It results when the image being recorded changes during the recording of a single exposure, due to rapid movement or long exposure. When a camera creates an image, that image does not represent a single instant of time. Because of technological constraints or artistic requirements, the image may represent the scene over a period of time.
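Since the recorded image integrates the scene over the whole exposure, motion blur can be illustrated by averaging the instants an object occupies while the shutter is open. The sketch below does this for a bright point moving across a one-dimensional "sensor"; the sensor width, sample count, and velocity are illustrative assumptions, not a camera model.

```python
# Minimal sketch: motion blur as the average of many instants within one
# exposure. A bright point moves across a 1-D "sensor" while the shutter is
# open; averaging the sub-exposure samples smears it into a streak.
WIDTH = 12
SUBSAMPLES = 6          # instants sampled during the single exposure

def render_instant(position):
    """One instant: a single bright pixel at the object's current position."""
    frame = [0.0] * WIDTH
    frame[position] = 1.0
    return frame

def render_exposure(start, velocity):
    """Average the instants the object occupies while the shutter is open."""
    accum = [0.0] * WIDTH
    for s in range(SUBSAMPLES):
        pos = (start + velocity * s) % WIDTH
        for i, v in enumerate(render_instant(int(pos))):
            accum[i] += v / SUBSAMPLES
    return accum

print(render_exposure(start=2, velocity=1))   # energy spread over pixels 2..7: a streak
```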
Facial motion capture: Facial motion capture is the process of electronically converting the movements of a person's face into a digital database using cameras or laser scanners. This database may then be used to produce computer graphics (CG) and computer animation for movies, games, or real-time avatars. Because the motion of CG characters is derived from the movements of real people, the result is more realistic and nuanced computer character animation than if the animation were created manually.
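As a loose illustration of how such a database might drive animation, the sketch below maps a tracked chin marker's displacement from its neutral position to a "jaw open" control value; the marker names, coordinates, and scaling are hypothetical, not taken from any real capture pipeline.

```python
# Minimal sketch (hypothetical marker names and weights, no specific capture
# system): facial capture data stored as per-frame marker positions, then
# converted into an animation control by measuring how far the chin marker
# has dropped from its neutral position.
neutral = {"chin": (0.0, -0.05, 0.10), "brow_l": (-0.03, 0.06, 0.09)}

def jaw_open_weight(frame, max_drop=0.03):
    """Map the chin marker's vertical drop from neutral to a 0..1 control value."""
    drop = neutral["chin"][1] - frame["chin"][1]
    return max(0.0, min(1.0, drop / max_drop))

captured_frame = {"chin": (0.0, -0.065, 0.10), "brow_l": (-0.03, 0.06, 0.09)}
print(jaw_open_weight(captured_frame))   # -> roughly 0.5: jaw about half open on this frame
```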
Gesture recognition: Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. It is a subdiscipline of computer vision. Gestures can originate from any bodily motion or state, but commonly originate from the face or hands. Current focuses in the field include emotion recognition from the face and hand gesture recognition, since both are forms of expression. Users can make simple gestures to control or interact with devices without physically touching them.
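As a toy example of mapping motion to a symbolic gesture, the sketch below classifies a tracked hand trajectory as a swipe by comparing its overall displacement along each axis; the threshold and coordinate units are illustrative assumptions, and real recognizers use far richer models.

```python
# Minimal sketch (hypothetical thresholds, no real recognition library):
# interpreting a hand trajectory as a simple symbolic gesture.
def classify_swipe(points, min_distance=0.15):
    """points: sequence of (x, y) hand positions in metres, oldest first."""
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if abs(dx) < min_distance and abs(dy) < min_distance:
        return "no gesture"
    if abs(dx) >= abs(dy):
        return "swipe right" if dx > 0 else "swipe left"
    return "swipe up" if dy > 0 else "swipe down"

track = [(0.00, 0.0), (0.08, 0.01), (0.18, 0.02), (0.25, 0.02)]
print(classify_swipe(track))   # -> "swipe right"
```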
Motion: In physics, motion is the phenomenon by which an object changes its position with respect to time. Motion is mathematically described in terms of displacement, distance, velocity, acceleration, speed, and frame of reference to an observer, measuring the change in position of the body relative to that frame with a change in time. The branch of physics describing the motion of objects without reference to its cause is called kinematics, while the branch studying forces and their effect on motion is called dynamics.
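The kinematic quantities named above can be made concrete with a small numerical example: velocity as the rate of change of position and acceleration as the rate of change of velocity, estimated here by finite differences over sampled positions (the sample values and time step are illustrative).

```python
# Minimal sketch: estimating velocity and acceleration from sampled positions
# by finite differences, i.e. change in position (or velocity) per change in time.
def finite_difference(values, dt):
    """Approximate the time derivative of equally spaced samples."""
    return [(b - a) / dt for a, b in zip(values, values[1:])]

dt = 0.1                                   # seconds between samples
positions = [0.0, 0.05, 0.20, 0.45, 0.80]  # metres, a uniformly accelerating body
velocities = finite_difference(positions, dt)
accelerations = finite_difference(velocities, dt)
print(velocities)      # increasing speeds (m/s)
print(accelerations)   # roughly constant (m/s^2), as expected for uniform acceleration
```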
Computer animation: Computer animation is the process used for digitally generating animations. The more general term computer-generated imagery (CGI) encompasses both static scenes and dynamic images, while computer animation refers only to moving images. Modern computer animation usually uses 3D computer graphics to generate a three-dimensional picture. The target of the animation is sometimes the computer itself, while other times it is another medium, such as film. Computer animation is essentially a digital successor to stop-motion techniques using 3D models, and to traditional animation techniques using frame-by-frame animation of 2D illustrations.
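A minimal way to see digitally generated moving images is keyframe interpolation: a value is specified at a few times and the computer fills in the frames between them, the in-betweening that frame-by-frame 2D animation does by hand. The sketch below uses linear interpolation; the keyframe values and 24 fps frame rate are illustrative assumptions.

```python
# Minimal sketch: generating animation frames by linearly interpolating
# between keyframed values (here, an object's x position over time).
def interpolate(keyframes, t):
    """keyframes: sorted list of (time, value); returns the value at time t."""
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return v0 + alpha * (v1 - v0)
    return keyframes[-1][1]

keys = [(0.0, 0.0), (1.0, 10.0), (2.0, 4.0)]             # (seconds, x position)
frames = [interpolate(keys, f / 24) for f in range(49)]  # 24 fps, 2 seconds of animation
print(frames[:5])
```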
Pose tracking: In virtual reality (VR) and augmented reality (AR), a pose tracking system detects the precise pose of head-mounted displays, controllers, other objects, or body parts within Euclidean space. Pose tracking is often called 6DOF tracking, after the six degrees of freedom in which the pose is tracked. Pose tracking is sometimes referred to as positional tracking, but the two differ: pose tracking includes orientation, whereas positional tracking does not.
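A small sketch makes the distinction concrete: a 6DOF pose couples a 3-DOF position with a 3-DOF orientation (stored here as a unit quaternion), whereas a purely positional tracker would report only the position. The class and field names below are illustrative, not drawn from any VR SDK.

```python
# Minimal sketch: a 6DOF pose as position plus orientation. A positional
# tracker could only fill in x, y, z; a pose tracker reports both parts.
from dataclasses import dataclass

@dataclass
class Pose:
    # Position: 3 degrees of freedom, metres.
    x: float
    y: float
    z: float
    # Orientation: 3 degrees of freedom, stored as a unit quaternion (w, x, y, z).
    qw: float
    qx: float
    qy: float
    qz: float

headset = Pose(x=0.0, y=1.6, z=0.2, qw=0.924, qx=0.0, qy=0.383, qz=0.0)
print(f"position: ({headset.x}, {headset.y}, {headset.z})")
print(f"orientation (quaternion): ({headset.qw}, {headset.qx}, {headset.qy}, {headset.qz})")
```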
Non-linear editing: Non-linear editing is a form of offline editing for audio, video, and image editing. In offline editing, the original content is not modified in the course of editing. In non-linear editing, edits are specified and modified by specialized software. A pointer-based playlist, effectively an edit decision list (EDL) for video and audio, or a directed acyclic graph for still images, is used to keep track of edits. Each time the edited audio, video, or image is rendered, played back, or accessed, it is reconstructed from the original source and the specified editing steps.
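The pointer-based playlist idea can be sketched in a few lines: the source is never modified, the edit list only records which spans of the source to use, and the output is reconstructed on demand. The data below stand in for real media and EDL events; they are illustrative only.

```python
# Minimal sketch of the pointer-based playlist idea: the source media is never
# modified; an edit decision list just records which spans of the source to
# play, and the output is rebuilt from source + EDL each time it is accessed.
source = list("THE ORIGINAL SOURCE MATERIAL")   # stands in for untouched media

# Each entry points into the source: (start_index, end_index), like an EDL event.
edl = [(4, 12), (20, 28)]                       # "ORIGINAL" then "MATERIAL"

def render(source, edl):
    """Rebuild the edited programme from the unmodified source and the edit list."""
    return "".join("".join(source[start:end]) for start, end in edl)

print(render(source, edl))    # -> "ORIGINALMATERIAL"; `source` itself is unchanged
```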
Offline editing: Offline editing is the creative storytelling stage of filmmaking and television production where the structure, mood, pacing, and story of the final show are defined. Many versions and revisions are presented and considered at this stage until the edit reaches a stage known as picture lock. The process then moves on to the next stages of post-production: online editing, colour grading, and audio mixing.