A cutscene or event scene (sometimes in-game cinematic or in-game movie) is a sequence in a video game that is not interactive, interrupting the gameplay. Such scenes are used to show conversations between characters, set the mood, reward the player, introduce new models and gameplay elements, show the effects of a player's actions, create emotional connections, improve pacing, or foreshadow future events.
Cutscenes often feature "on the fly" rendering, using the gameplay graphics to create scripted events. Cutscenes can also be pre-rendered computer graphics streamed from a video file. Pre-made videos used in video games (either during cutscenes or during the gameplay itself) are referred to as "full motion videos" or "FMVs". Cutscenes can also appear in other forms, such as a series of images or as plain text and audio.
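Rendered-in-engine cutscenes of the first kind are typically driven by a scripted timeline of events played back through the game's own renderer. A toy sketch of that idea, with entirely hypothetical names and no real engine API, might look like:

```python
# Minimal sketch of a scripted, real-time cutscene timeline (illustrative
# only; real engines use their own event/animation systems).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Cutscene:
    # (time in seconds, action description) pairs, played in time order
    steps: List[Tuple[float, str]] = field(default_factory=list)
    skippable: bool = True

    def play(self, skip_requested: bool = False) -> List[str]:
        """Return the actions actually shown; skipping ends playback early."""
        shown = []
        for t, action in sorted(self.steps):
            if skip_requested and self.skippable:
                break
            shown.append(f"{t:.1f}s: {action}")
        return shown

intro = Cutscene(steps=[(0.0, "fade in"), (1.5, "hero speaks"), (4.0, "fade out")])
print(intro.play())                      # full scripted playback
print(intro.play(skip_requested=True))   # player skips the scene
```

A pre-rendered FMV cutscene, by contrast, would replace the scripted steps with streamed playback of a video file, trading interactivity and disc space for visual complexity beyond what the hardware can render in real time.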
The Sumerian Game (1966), an early mainframe game designed by Mabel Addis, introduced its Sumerian setting with a slideshow synchronized to an audio recording; it was essentially an unskippable introductory cutscene, but not an in-game cutscene. Taito's arcade video game Space Invaders Part II (1979) introduced the use of brief comical intermission scenes between levels, where the last invader who gets shot limps off screen. Namco's Pac-Man (1980) similarly featured cutscenes in the form of brief comical interludes, about Pac-Man and Blinky chasing each other.
Shigeru Miyamoto's Donkey Kong (1981) took the cutscene concept a step further by using cutscenes to visually advance a complete story. Data East's laserdisc video game Bega's Battle (1983) introduced animated full-motion video (FMV) cutscenes with voice acting to develop a story between the game's shooting stages, which became the standard approach to game storytelling years later. The games Bugaboo (The Flea) in 1983 and Karateka (1984) helped introduce the cutscene concept to home computers.
In the point-and-click adventure genre, Ron Gilbert introduced the cutscene concept with non-interactive plot sequences in Maniac Mansion (1987).
An interactive film is a video game or other interactive media that has characteristics of a cinematic film. In the video game industry, the term refers to a movie game, a video game that presents its gameplay in a cinematic, scripted manner, often through the use of full-motion video of either animated or live-action footage. In the film industry, the term "interactive film" refers to interactive cinema, a film where one or more viewers can interact with the film and influence the events that unfold in the film.
A non-player character (NPC), or non-playable character, is any character in a game that is not controlled by a player. The term originated in traditional tabletop role-playing games where it applies to characters controlled by the gamemaster or referee rather than by another player. In video games, this usually means a character controlled by the computer (instead of a player) that has a predetermined set of behaviors that potentially will impact gameplay, but will not necessarily be the product of true artificial intelligence.
Pre-rendering is the process in which video footage is not rendered in real-time by the hardware that is outputting or playing back the video. Instead, the video is a recording of footage that was previously rendered on different equipment (typically one that is more powerful than the hardware used for playback). Pre-rendered assets (typically movies) may also be outsourced by the developer to an outside production company. Such assets usually have a level of complexity that is too great for the target platform to render in real-time.