Two-streams hypothesis
The two-streams hypothesis is a model of the neural processing of vision as well as hearing. The hypothesis, given its initial characterisation in a paper by David Milner and Melvyn A. Goodale in 1992, argues that humans possess two distinct visual systems. Recently there seems to be evidence of two distinct auditory systems as well. As visual information exits the occipital lobe, and as sound leaves the phonological network, it follows two main pathways, or "streams".
Speech processing
Speech processing is the study of speech signals and the methods used to process them. The signals are usually handled in a digital representation, so speech processing can be regarded as a special case of digital signal processing applied to speech signals. Aspects of speech processing include the acquisition, manipulation, storage, transfer and output of speech signals. Speech processing tasks include speech recognition, speech synthesis, speaker diarization, speech enhancement, and speaker recognition, among others.
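To make the "special case of digital signal processing" concrete, here is a minimal Python sketch (NumPy only) of the framing and short-time spectral analysis step that underlies most of the tasks listed above. The signal is a synthetic stand-in for recorded speech, and the 25 ms frame / 10 ms hop values are conventional choices, not prescribed by the text:

    import numpy as np

    fs = 16000                          # sample rate in Hz, typical for speech
    t = np.arange(0, 1.0, 1 / fs)       # one second of samples
    # Synthetic stand-in for a speech signal: a 150 Hz fundamental plus a harmonic.
    x = 0.6 * np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)

    frame_len, hop = 400, 160           # 25 ms frames, 10 ms hop (common in speech work)
    window = np.hanning(frame_len)

    # Slice the signal into overlapping windowed frames ...
    frames = np.array([x[i:i + frame_len] * window
                       for i in range(0, len(x) - frame_len + 1, hop)])
    # ... and take each frame's short-time magnitude spectrum.
    spectra = np.abs(np.fft.rfft(frames, axis=1))

    print(spectra.shape)                # (number of frames, frame_len // 2 + 1 bins)

Short overlapping windows are used because speech is only approximately stationary over spans of tens of milliseconds, so each frame can be treated as a snapshot of the signal's spectral content.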
Americanist phonetic notation
Americanist phonetic notation, also known as the North American Phonetic Alphabet (NAPA), the Americanist Phonetic Alphabet or the American Phonetic Alphabet (APA), is a system of phonetic notation originally developed by European and American anthropologists and language scientists (many of whom were students of Neogrammarians) for the phonetic and phonemic transcription of indigenous languages of the Americas and for languages of Europe.
Speech synthesis
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech. The reverse process is speech recognition. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database.
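As a toy illustration of the concatenative approach described above, the Python sketch below joins "recorded" units from a small database with a short cross-fade. The unit names and the sine-burst placeholders are hypothetical stand-ins for a real inventory of stored speech segments:

    import numpy as np

    fs = 16000  # sample rate in Hz

    def placeholder_unit(freq_hz, dur_s):
        """Stand-in for a recorded speech unit: a short sine burst."""
        t = np.arange(0, dur_s, 1 / fs)
        return 0.5 * np.sin(2 * np.pi * freq_hz * t)

    unit_db = {                      # the "database" of recorded pieces
        "HH": placeholder_unit(200, 0.08),
        "AH": placeholder_unit(150, 0.12),
        "L":  placeholder_unit(180, 0.10),
    }

    def synthesize(units, xfade=0.005):
        """Concatenate stored units, blending each seam with a linear cross-fade."""
        n = int(xfade * fs)
        ramp = np.linspace(0.0, 1.0, n)
        out = unit_db[units[0]].copy()
        for name in units[1:]:
            nxt = unit_db[name]
            out[-n:] = out[-n:] * (1 - ramp) + nxt[:n] * ramp  # blend the seam
            out = np.concatenate([out, nxt[n:]])
        return out

    audio = synthesize(["HH", "AH", "L"])   # a hypothetical unit sequence
    print(len(audio) / fs, "seconds of synthesized audio")

Real concatenative systems select diphone-sized or longer units from large recorded databases and smooth the joins far more carefully, but the store-select-concatenate structure is the same.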
Speech repetition
Speech repetition occurs when individuals speak the sounds that they have heard another person pronounce or say. In other words, it is one individual's reproduction of the spoken vocalizations made by another. Speech repetition requires the person repeating the utterance to be able to map the sounds they hear in the other person's oral pronunciation onto similar places and manners of articulation in their own vocal tract.
Phonetic transcription
Phonetic transcription (also known as phonetic script or phonetic notation) is the visual representation of speech sounds (or phones) by means of symbols. The most common type of phonetic transcription uses a phonetic alphabet, such as the International Phonetic Alphabet. The pronunciation of words in all languages changes over time. However, their written forms (orthography) are often not modified to take account of such changes, and do not accurately represent the pronunciation.
Speech sound disorder
A speech sound disorder (SSD) is a speech disorder in which some sounds (phonemes) are not produced or used correctly. The term "protracted phonological development" is sometimes preferred when describing children's speech, to emphasize the continuing development while acknowledging the delay. Speech sound disorders may be subdivided into two primary types: articulation disorders (also called phonetic disorders) and phonemic disorders (also called phonological disorders).
Communication disorder
A communication disorder is any disorder that affects an individual's ability to comprehend, detect, or apply language and speech to engage in dialogue effectively with others. The delays and disorders can range from simple sound substitution to the inability to understand or use one's native language. Disorders and tendencies included and excluded under the category of communication disorders may vary by source. For example, the definitions offered by the American Speech–Language–Hearing Association differ from those of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV).
Manner of articulation
In articulatory phonetics, the manner of articulation is the configuration and interaction of the articulators (speech organs such as the tongue, lips, and palate) when making a speech sound. One parameter of manner is stricture, that is, how closely the speech organs approach one another. Other parameters include those involved in r-like sounds (taps and trills) and the sibilancy of fricatives.
Paraphasia
Paraphasia is a type of language output error commonly associated with aphasia and characterized by the production of unintended syllables, words, or phrases during the effort to speak. Paraphasic errors are most common in patients with fluent forms of aphasia, and come in three forms: phonemic (also called literal), neologistic, and verbal. Paraphasias can affect metrical information, segmental information, the number of syllables, or some combination of these. Some paraphasias preserve the meter without segmentation, and some do the opposite.