In the context of human–computer interaction, a modality is the classification of a single independent channel of input/output between a computer and a human. Such channels may differ based on sensory nature (e.g., visual vs. auditory), or other significant differences in processing (e.g., text vs. image).
A system is designated unimodal if it implements only one modality, and multimodal if it implements more than one. When multiple modalities are available for some tasks or aspects of a task, the system is said to have overlapping modalities; when multiple modalities are available for the same task, it is said to have redundant modalities. Modalities can also be used in combination so that complementary channels, even if redundant, convey information more effectively. Modalities are generally divided into two forms: human-computer (input) and computer-human (output) modalities.
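As an illustrative sketch only (not tied to any real toolkit; all class and channel names are hypothetical), redundant output modalities can be modeled as a system that delivers one message over several channels at once:

```python
# Hypothetical sketch of redundant computer-human modalities:
# the same notification goes out over visual, auditory, and tactile channels.

class VisualChannel:
    def deliver(self, message):
        return f"[screen] {message}"

class AudioChannel:
    def deliver(self, message):
        return f"[speaker] {message}"

class HapticChannel:
    def deliver(self, message):
        # A low-bandwidth channel may carry only an alert, not the full text.
        return "[vibration] short pulse"

class MultimodalNotifier:
    """Unimodal with one channel; multimodal (and here redundant) with several."""
    def __init__(self, channels):
        self.channels = channels

    def notify(self, message):
        return [channel.deliver(message) for channel in self.channels]

notifier = MultimodalNotifier([VisualChannel(), AudioChannel(), HapticChannel()])
outputs = notifier.notify("New mail")
```

The design point is that the channels are interchangeable behind one interface, so a system can degrade gracefully (for example, to vibration only) when a modality is unavailable.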
Computers use a wide range of technologies to communicate information to humans:
Common modalities
Vision – computer graphics typically through a screen
Audition – various audio outputs
Tactition – vibrations or other movement
Uncommon modalities
Gustation (taste)
Olfaction (smell)
Thermoception (heat)
Nociception (pain)
Equilibrioception (balance)
Any human sense can be used as a computer-human modality. However, sight and hearing are the most commonly employed because they transmit information faster than the other senses: roughly 250 to 300 words per minute for reading and 150 to 160 words per minute for listening. Though not commonly implemented as a computer-human modality, tactition can achieve an average of 125 wpm through the use of a refreshable Braille display. Other more common forms of tactition are smartphone and game controller vibrations.
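The throughput figures above can be turned into rough arithmetic. The sketch below uses the midpoints of the quoted ranges (an assumption for illustration; the message length of 500 words is likewise arbitrary) to compare how long the same message takes over each modality:

```python
# Rough comparison using the rates quoted above (midpoints of the ranges).
rates_wpm = {
    "vision (reading)": 275,      # midpoint of 250-300 wpm
    "audition (listening)": 155,  # midpoint of 150-160 wpm
    "tactition (Braille)": 125,   # refreshable Braille display
}

words = 500  # illustrative message length

# Minutes needed to convey the message over each channel.
minutes = {name: words / wpm for name, wpm in rates_wpm.items()}
# Vision is fastest (~1.8 min), tactition slowest (4.0 min) for this message.
```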
Computers can be equipped with various types of input devices and sensors to allow them to receive information from humans. Common input devices are often interchangeable if they have a standardized method of communication with the computer and afford practical adjustments to the user.
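The interchangeability described above follows from devices sharing a standardized event interface. As a minimal sketch (class and event names are hypothetical, not a real driver API), any device exposing the same `read()` contract can be swapped in without changing the handling code:

```python
# Hypothetical sketch: input devices are interchangeable when they emit
# events through a shared, standardized interface.

class InputEvent:
    def __init__(self, kind, value):
        self.kind = kind    # e.g. "pointer_move", "key_press"
        self.value = value

class Mouse:
    def read(self):
        return InputEvent("pointer_move", (10, 5))

class Trackpad:
    def read(self):
        return InputEvent("pointer_move", (3, -2))

def handle(device):
    """Works with any device exposing read() -> InputEvent."""
    event = device.read()
    return event.kind

# A trackpad can replace a mouse without changing the handler.
```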
Human–computer interaction (HCI) is research in the design and use of computer technology, focused on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. A device that allows interaction between a human being and a computer is known as a "human-computer interface".
In the industrial design field of human–computer interaction, a user interface (UI) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls and process controls.