Auditory and Visual Perception Differences

What Do You Hear at a Concert?

Music perception describes the subjective responses elicited by auditory stimuli, in this case, concert music. Every music signal I perceive will convey information about its pitch, tonality, loudness, timbre, and combination tones. A fundamental attribute of simple or complex tones is pitch, the perception of “a high or low sound” (Chaudhuri, 2012, p. 41). This perceptual quality comprises two dimensions, tone chroma and tone height, and can be organized in a musical scale running from low-frequency tones (low pitch) to high-frequency tones (high-pitched sounds) (Foley & Matlin, 2010). Audible pitch spans roughly 20-5000 Hz, which corresponds to the range of piano sounds (Chaudhuri, 2012). Therefore, I will perceive tones with both low (<50 Hz) and high (>800 Hz) pitches at the concert.
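
To make the frequency-to-pitch mapping concrete, the minimal sketch below converts frequencies to note numbers using the equal-tempered MIDI convention (A4 = 440 Hz maps to note 69). This convention is a standard assumption of my own and is not specified in the cited texts.

```python
import math

def frequency_to_midi(freq_hz):
    """Map a frequency to an equal-tempered MIDI note number (A4 = 440 Hz = note 69)."""
    return 69 + 12 * math.log2(freq_hz / 440.0)

# Frequencies spanning the low and high pitches mentioned above
for f in (50, 440, 800, 4186):
    print(f"{f} Hz -> MIDI note {frequency_to_midi(f):.1f}")
```

Each octave doubles the frequency, which is why the logarithm (base 2) appears in the conversion.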

Pitches can be ordered hierarchically based on predefined relations or harmonic idioms. Tonality describes the systematic organization of the tones in a piece of music into major or minor diatonic scales (Chaudhuri, 2012). In a perceptual sense, I will perceive the tonality of the musical composition as either consonant (harmonic) or dissonant (inharmonic) based on major or minor note intervals. Loudness refers to the perceived “intensity of sound pressure level,” which ranges from the softest (ppp) to the loudest (fff) and depends chiefly on the amplitude of the sound signals (Foley & Matlin, 2010, p. 303). The musicians can amplify the different instruments to varying levels or reduce the tone intensity to signal transitions, which the listener subjectively perceives as mixed loudness.
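
Sound pressure level is conventionally expressed in decibels relative to a 20 µPa reference, the approximate threshold of hearing. The following sketch computes dB SPL from a pressure amplitude; the example pressures are illustrative values I chose, not figures from the cited sources.

```python
import math

P_REF = 20e-6  # reference sound pressure in pascals (approximate hearing threshold)

def spl_db(pressure_pa):
    """Sound pressure level in decibels relative to 20 micropascals."""
    return 20 * math.log10(pressure_pa / P_REF)

# A quiet passage (~0.002 Pa) versus a loud one (~2 Pa)
print(f"soft: {spl_db(0.002):.0f} dB SPL")  # ~40 dB
print(f"loud: {spl_db(2.0):.0f} dB SPL")    # ~100 dB
```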

Another quality of subjective perception of music is timbre, the “distinctive sound signature of a source” (Foley & Matlin, 2010, p. 303). Based on this perceptual characteristic, I will discriminate between two simultaneous tones with comparable loudness and pitch or identify the timbre of different sound sources. Two simple tones of different frequencies played at high amplitudes produce the sensation of combination tones (Foley & Matlin, 2010). These are the outcome of nonlinear transmission in the ear, which cannot discriminate between the real components and the false ones it generates. Therefore, my perceptual experience will include the sensation of non-acoustic combination tones arising from the simple tones played at the concert. Question: How would you cancel the combination tones perceived by a listener?
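
As an illustration of the combination tones discussed above, the sketch below lists the classic first-order and cubic products generated by nonlinear transmission for two loud primary tones. The 1000/1200 Hz pair is an arbitrary example of mine, not taken from the cited texts.

```python
def combination_tones(f1, f2):
    """Combination-tone frequencies produced by nonlinear transmission
    in the ear when two loud tones f1 < f2 sound together."""
    return {
        "difference tone (f2 - f1)": f2 - f1,
        "summation tone (f1 + f2)": f1 + f2,
        "cubic difference tone (2*f1 - f2)": 2 * f1 - f2,
    }

# Two loud simple tones, e.g., 1000 Hz and 1200 Hz
for name, f in combination_tones(1000, 1200).items():
    print(f"{name}: {f} Hz")
```

None of these frequencies is physically present in the stimulus, which is why the listener experiences them as non-acoustic additions.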

Vision and Audition

An auditory percept is considered a distinct entity from a visual percept. However, the two channels appear to support each other in tasks such as following directions, audiovisual speech perception, and writing. The loci of auditory and visual transduction are the basilar membrane and the retina, respectively (Moulin-Frier & Arbib, 2013). Although the stimulus in each case differs, the two structures are similar in that they convert stimuli (sound or light signals) into neural impulses that are passed to the brain. Another similarity relates to their spatial structure: both sample a broad range of input, light from various directions (vision) and sounds of multiple frequencies (audition) (Moulin-Frier & Arbib, 2013). Visual and auditory discrimination are also similar; both cognitive processes help us pick out acoustic cues or visible objects and separate them from their backgrounds. A further commonality between audition and vision is perceptual closure, i.e., the capacity to discern auditory stimuli (sounds) or visual signals (figures) delivered in an incomplete form (Chaudhuri, 2012).

The differences between vision and audition are equally significant. First, the role of visual sensation is to perceive visible cues that allow us to visualize objects within the visual field, whereas the function of audition is to discern sounds and related attributes such as loudness and pitch (Foley & Matlin, 2010). Second, research shows that visual memory centers on the recovery of the predominant features of an object, including shape and size (Chaudhuri, 2012), while auditory memory entails the recall of the sensory attributes of sound (rote memory). Moreover, spatial relations, i.e., the positioning of entities in space, are primarily a property of the visual percept; consider speech versus the written word. While written words are spatially perceived, spoken language is not (Chaudhuri, 2012). The auditory system registers sounds of different frequencies but localizes their sources far less precisely than vision does. This distinction helps explain why we can hear many concurrent sounds without paying auditory attention to each of them. Question: What is the significance of the non-spatial aspects of audition in audio-visual perception?

Quiz

Auditory perception is a complex process. Acoustic signals reaching the basilar membrane are converted into neural pulses that are transmitted to the brain for interpretation and recognition (Chaudhuri, 2012). Central auditory processes play a significant role in the detection and processing of sound signals. They support various behavioral phenomena that are critical for ‘normal’ hearing. First, central processes are involved in sound localization, i.e., the capacity to discern the direction of the sound source. According to Chaudhuri (2012), subcortical structures are implicated in the development of acoustic maps that encode the “time, intensity, and spectra” of binaural sound signals (p. 49). Cortical sound localization involves the corpus callosum fibers, which connect the cerebral hemispheres (Chaudhuri, 2012). The central processes are also critical for auditory discrimination: people with central auditory processing disorder (CAPD), which has a neurological basis, lack phonological awareness, and their recognition of differences between sounds is impaired (Foley & Matlin, 2010). Mechanisms involving the cerebellum and basal ganglia play a role in the perception of the temporal dimensions of audition, including temporal resolution, structure, and ordering (Chaudhuri, 2012). Central processes also help us assess auditory performance and identify incomplete auditory forms.
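
To illustrate the binaural “time” cue Chaudhuri mentions, the sketch below uses Woodworth’s spherical-head approximation of the interaural time difference (ITD), a standard psychoacoustics formula that is not drawn from the cited texts. The head radius and speed of sound are conventional assumed values.

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference for a distant source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:>2} deg -> ITD {itd_woodworth(az) * 1e6:.0f} us")
```

The maximum ITD of roughly 650 µs at 90 degrees matches the order of magnitude the auditory system must resolve to localize a source.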

People hear musical tones differently. Individuals with relative pitch (RP) can identify high-pitched or low-pitched notes by relating them to notes they heard earlier (Moulin-Frier & Arbib, 2013). For instance, one may identify the pitches in a tune from a familiar tonic note of a musical composition. Such a person can also use interval recognition skills to determine a pitch from the distance between musical notes (Moulin-Frier & Arbib, 2013). In comparison, absolute pitch (AP) refers to the capacity to identify notes independent of the influence of unrelated notes (Moulin-Frier & Arbib, 2013). This level of pitch perception is considered an innate ability. People with AP can intuitively state that an automobile horn sounds a ‘B♭’ upon hearing it, without relating it to a reference note. Such individuals cannot necessarily tell whether a note is somewhat off-key. A few people are tone-deaf; the visual equivalent of tone-deafness is color blindness (Foley & Matlin, 2010). Although persons with this condition may register that two notes differ, they lack the ability to perceive pitch relations accurately. For example, they cannot differentiate C major from C minor, a distinction that is clear to most people.
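
The interval judgments that support relative pitch can be made concrete with a small calculation: in equal temperament, the interval between two frequencies, measured in semitones, is 12·log2(f2/f1). The sketch below applies this formula; the example frequencies are my own illustrations.

```python
import math

def interval_semitones(f1, f2):
    """Size of the interval between two pitches in equal-tempered semitones."""
    return 12 * math.log2(f2 / f1)

# A listener with relative pitch judges the second note against the first:
print(interval_semitones(440.0, 660.0))   # ~7.02 -> a perfect fifth
print(interval_semitones(440.0, 466.16))  # ~1.00 -> a semitone step
```

A listener with RP evaluates such ratios between notes, whereas a listener with AP names each note outright.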

Music perception involves high-level cognitive processes. In particular, the brain analyzes the acoustic frequencies of simultaneous tones to localize their sources, resulting in pitch perception (Foley & Matlin, 2010). The brain must parse the simultaneous sinusoidal frequencies (within the harmonic spectrum) produced by musical instruments for the music to be perceived. The interpretation of pitch height, relative pitch, absolute pitch, and consonance involves mental mechanisms and memory of familiar tones. Neuropsychological research has implicated the primary auditory cortex in pitch cognition and the frontal cortex in short-term pitch recall (Moulin-Frier & Arbib, 2013).
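
As a toy illustration of parsing a harmonic spectrum, the sketch below recovers a fundamental frequency from a set of partials, even when the fundamental itself is absent (the “missing fundamental” effect). This is a deliberately naive estimator of my own devising, not a model of how the auditory cortex actually performs the computation.

```python
def estimate_fundamental(partials, tolerance_hz=1.0):
    """Naive f0 estimate: test candidate fundamentals and keep the highest
    one for which every partial sits near an integer multiple of it."""
    candidates = (p / n for p in partials for n in range(1, 11))
    best = 0.0
    for f0 in candidates:
        if f0 > best and all(
            abs(p - round(p / f0) * f0) <= tolerance_hz for p in partials
        ):
            best = f0
    return best

# Harmonics of A4 (440 Hz) with the fundamental itself missing:
print(estimate_fundamental([880.0, 1320.0, 1760.0]))  # -> 440.0
```

The perceived pitch corresponds to the common fundamental of the partials, which is why listeners still hear A4 even though no 440 Hz component is present.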

Melodic perception is also associated with the right hemisphere, whereas the left hemisphere processes the verbal components of music, e.g., lyrics (Moulin-Frier & Arbib, 2013). Different parts of the brain also subserve AP and RP. According to Moulin-Frier and Arbib (2013), musicians with AP have been shown to recruit the left posterior dorsolateral region of the brain, while those with RP do not. Thus, different brain areas specialize in the cognition of different aspects of musical tones, e.g., pitch, and together give rise to music perception.

Theories of speech perception explain specific aspects of spoken language recognition. Several models have been proposed to explain how speech signals and phonemes are identified, decoded, and interpreted. Two dominant accounts address different perceptual aspects of speech: the motor and auditory models. The motor theory is premised on the notion that speech perception entails reference to phoneme articulation as understood by the listener (Moulin-Frier & Arbib, 2013). In contrast, the auditory model holds that spoken language is characterized by acoustic invariance and that experienced listeners possess speech feature detectors (Moulin-Frier & Arbib, 2013). The two theories share two assumptions. First, both hold that we perceive speech in terms of abstract phonemic categories (Chaudhuri, 2012). Second, both assume that our perceptual system ignores or rectifies distorted speech signals. The two models differ in the content and mechanism of perception. In the auditory model, listeners extract speech features directly from the acoustic signal, whereas in the motor model they discern them by referencing the articulatory gestures that would produce the sounds (Chaudhuri, 2012). Further, the auditory theory casts speech perception as a bottom-up process, while the motor theory casts it as a top-down process with the listener’s production knowledge at the top.

References

Chaudhuri, A. (2012). Fundamentals of sensory perception. Oxford University Press.

Foley, H. J., & Matlin, M. W. (2010). Sensation and perception (5th ed.). Psychology Press.

Moulin-Frier, C., & Arbib, M. A. (2013). Recognizing speech in a novel accent: The motor theory of speech perception reframed. Biological Cybernetics, 107(4), 421-447.
