Introduction
Sound is central to communication, entertainment, and the sending of emergency signals. The phenomenon is nonetheless complex: sound has varying properties and travels through gases, liquids, and solids at different speeds. A working knowledge of sound theory is essential to anyone involved in audio engineering, acoustics, or music production.
Typical properties of sound include frequency, amplitude, pitch, harmonics, and perceived loudness. With today's digital technologies, converting sound between analog and digital signals is equally crucial. This technical report therefore explores the properties of sound, its conversion, and its applications.
Building Blocks of Sound and Transmission
The building blocks of sound can be identified from its definition in physics: a wave that travels through a medium, caused by the vibration of particles. In air, the building blocks of sound are oscillating molecules whose motion creates alternating compressions and rarefactions, forming a wave. The transmission of sound through air depends on the spacing between the air molecules. As the particles oscillate, they produce a pressure variation that travels as a longitudinal wave. A microphone's diaphragm, which is sensitive to changes in air pressure, captures this sound: it vibrates when the wave reaches it and converts the wave into an electrical signal.
Properties of Sound
Frequency, Pitch, Harmonic Series
Sound is a wave that exhibits several properties as it moves through solids, liquids, and gases, the main ones being frequency, pitch, and the harmonic series. These features are essential in audio engineering and other work involving sound energy; audio engineers rely on all three when designing audio systems. Frequency is the number of complete oscillations that occur within a given period, usually one second, and is expressed in hertz (Hz). Low-frequency sounds therefore exhibit fewer oscillations per second than high-frequency ones. For example, a drumbeat has a lower frequency than a whistle because more oscillations occur in the whistle's tone.
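To make the idea of frequency concrete, the short Python sketch below generates samples of a pure tone at a chosen frequency; the function name `sine_wave` and the parameter values are illustrative choices, not part of any particular audio toolchain.

```python
import math

def sine_wave(freq_hz, sample_rate_hz, duration_s):
    """Generate samples of a pure sine tone at the given frequency."""
    n_samples = int(sample_rate_hz * duration_s)
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)
            for n in range(n_samples)]

# A 100 Hz tone completes 100 full oscillations each second; a 1000 Hz
# tone completes ten times as many in the same interval.
low = sine_wave(100, 8000, 1.0)
high = sine_wave(1000, 8000, 1.0)
```

Played back at the same sample rate, `low` would sound like a deep hum and `high` like a whistle, even though both are pure tones of the same amplitude.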
Pitch, on the other hand, is the attribute that human beings interpret as high or low when they hear a sound. As an auditory attribute of a wave, pitch is often discussed alongside loudness and timbre. Another property of sound, one that helps distinguish between different instruments and singers, is the harmonic series. It arises from the natural frequencies at which objects vibrate in their characteristic modes. In acoustic physics, the harmonic series refers to the set of tones whose frequencies are integer multiples of the fundamental, the lowest frequency of a periodic waveform.
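Because the harmonic series is simply the set of integer multiples of a fundamental, it can be sketched in a few lines of Python; the helper name and the 110 Hz example are illustrative assumptions.

```python
def harmonic_series(fundamental_hz, n_harmonics):
    """First n frequencies of the harmonic series: integer multiples
    of the fundamental frequency."""
    return [fundamental_hz * k for k in range(1, n_harmonics + 1)]

# Harmonics of the note A2 (fundamental 110 Hz):
harmonic_series(110, 4)  # → [110, 220, 330, 440]
```

Two instruments playing the same 110 Hz fundamental sound different largely because they emphasize these harmonics in different proportions.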
Amplitude and Perceived Loudness
Apart from pitch, the harmonic series, and frequency, amplitude and perceived loudness are essential elements of sound. Although the two terms are often used interchangeably, they differ in definition and in the aspect of sound they measure. Amplitude is the physical measure of a sound wave's strength and is central to determining the level, or volume, of a particular sound. Perceived loudness, on the other hand, is the subjective experience of how loud a sound seems to a listener. Unlike sound pressure level, which is derived from amplitude and expressed in decibels (dB), perceived loudness is described qualitatively as quiet, soft, or loud. For example, a vacuum cleaner produces sound at a level of about 70 dB; most people perceive this as loud, although the judgment varies from listener to listener. Together, these properties help determine a sound's strength and how the ear perceives it.
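The dB figure is a logarithmic ratio against a reference pressure rather than a direct amplitude. A minimal Python sketch, assuming the standard 20 micropascal threshold-of-hearing reference, shows how a pressure amplitude maps to a sound pressure level:

```python
import math

def spl_db(pressure_pa, reference_pa=20e-6):
    """Sound pressure level in dB relative to the 20 micropascal
    threshold of hearing (the standard SPL reference)."""
    return 20 * math.log10(pressure_pa / reference_pa)

# A pressure amplitude of roughly 0.063 Pa corresponds to about 70 dB SPL,
# the vacuum-cleaner figure mentioned above.
round(spl_db(0.063))  # → 70
```

The logarithmic scale is what lets a single number span everything from a whisper (around 30 dB) to a jet engine (well above 120 dB).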
Conversion of Sound
Technological advancements have driven digital transformation, making it necessary to convert analog audio to digital form. While analog sound is continuous and recorded mechanically, digital sound is transmitted as binary code. Modern computers operate on digital signals, so continuous audio waves must be converted into binary form. The primary rationale for converting sound waves to digital form is greater accuracy and fidelity: the non-continuous audio is represented by numbers that can be manipulated to produce a more precise reproduction of the sound. Analog-to-digital and digital-to-analog (AD/DA) conversion is therefore crucial for compatibility with modern computing devices.
Although the AD/DA conversion process seems complex, it is handled by devices called converters. An analog-to-digital converter (ADC) changes continuous sound waves into a binary representation. The ADC operates by measuring the amplitude of the sound wave at regular intervals. Each measurement is assigned a numerical value, with the amplitude range divided into discrete levels along the vertical axis; the ADC's resolution determines the bit depth of the resulting digital signal. Quantization occurs when a measured amplitude falls between two of these levels and is rounded to the nearest one. ADCs are found in digital microphones, mobile phones, and other digital recording systems.
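The rounding step can be sketched in Python as follows; the scaling convention (samples in [-1.0, 1.0] mapped to signed-integer levels) is one common choice, assumed here for illustration.

```python
def quantize(sample, bit_depth):
    """Round a sample in [-1.0, 1.0] to the nearest level available at
    the given bit depth (signed-integer convention, an assumption)."""
    levels = 2 ** (bit_depth - 1) - 1   # e.g. 32767 levels for 16-bit
    return round(sample * levels) / levels

# A 3-bit converter has only a handful of levels, so the rounding
# error is large; at 16 bits the same sample is nearly exact.
coarse = quantize(0.4, 3)    # large quantization error
fine = quantize(0.4, 16)     # error below one part in ten thousand
```

The difference between the original sample and its quantized value is the quantization error, which listeners hear as a faint noise that shrinks as bit depth grows.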
Digital-to-analog converters (DACs), on the other hand, are used to recreate the analog wave. DACs operate on a decoding principle that reverses the process performed by the ADC. Where a value was rounded off through quantization, the DAC smooths the signal through interpolation: two adjacent points are examined, and the values between them are approximated to fill the gap. DA conversion happens at the sound output, since the human ear cannot perceive numbers; the binary codes are converted into a series of voltage levels that allow the speakers to produce audible sound.
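Real DACs use analog reconstruction filters rather than straight-line segments, but the gap-filling idea can be sketched with simple linear interpolation between successive samples:

```python
def lerp_reconstruct(samples, factor):
    """Sketch of smoothing between sample points: insert factor - 1
    linearly interpolated values between each pair of samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for i in range(factor):
            out.append(a + (b - a) * i / factor)
    out.append(samples[-1])
    return out

lerp_reconstruct([0.0, 1.0, 0.0], 2)  # → [0.0, 0.5, 1.0, 0.5, 0.0]
```

The interpolated points approximate the continuous waveform that existed between the discrete measurements, which is the essence of what a DAC recovers.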
Bit Depth and Sample Rates
Bit depth and sample rate are fundamental to understanding the AD/DA conversion process. Sample rate is the number of samples of the analog waveform that the ADC takes per second to create a discrete digital signal. It is measured in kilohertz and determines the highest frequency the discrete audio can represent. Bit depth, meanwhile, is the number of bits used to record the amplitude of each sample; higher bit depths capture amplitude in finer detail for audio recreation. Together, sample rate and bit depth determine the resolution and frequency range of digital audio.
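These two parameters together fix the raw data rate of uncompressed PCM audio. As a quick illustration, the widely cited CD-audio figures of 44.1 kHz and 16 bits per sample in stereo:

```python
def pcm_bitrate(sample_rate_hz, bit_depth, channels):
    """Raw data rate of uncompressed PCM audio in bits per second."""
    return sample_rate_hz * bit_depth * channels

# CD audio: 44,100 samples per second, 16 bits per sample, two channels.
pcm_bitrate(44_100, 16, 2)  # → 1_411_200 bits per second (about 1.4 Mbit/s)
```

Doubling either the sample rate or the bit depth doubles the storage and bandwidth required, which is why both are chosen to balance fidelity against cost.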
Noise Floor and Signal-to-Noise Ratio
The process of converting audio from analog to digital and vice versa can alter the quality of the sound. Noise floor and signal-to-noise ratio (SNR) are crucial in determining the quality of audio signals during transmission. The noise floor is the minimum level of unwanted signal within an electronic system, and thermal, shot, and flicker noise all contribute to it. Thermal noise is produced by the random thermal motion of electrons within conductors; shot noise arises from the discrete nature of electric charge as carriers cross a junction; and flicker noise is associated with impurities and defects in semiconductor materials. The higher the noise floor, the more likely the audio is to be of low quality.
The presence of background noise can also degrade sound quality. SNR is the measure of a desired signal's strength relative to that of the background noise; it is expressed in dB and calculated as a ratio of signal power to noise power. A high SNR indicates that the sound can be heard and distinguished from the background noise, while a low SNR means the transmitted audio is difficult to separate from it. Audio with a low SNR is therefore perceived as poor quality, since the ear struggles to interpret it.
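The power-ratio calculation can be sketched directly; the function name is an illustrative choice, and the inputs are assumed to be powers rather than amplitudes (amplitudes would use a factor of 20 instead of 10).

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in dB from signal and noise power."""
    return 10 * math.log10(signal_power / noise_power)

# A signal a thousand times more powerful than the noise floor
# has an SNR of 30 dB and is easy to distinguish from the noise.
snr_db(1000.0, 1.0)
```

An SNR near 0 dB means the signal and noise powers are comparable, which matches the description above of audio that is hard to separate from the background.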
Recording Studio Set-up and Signal Flow
The set-up of a recording studio is crucial to smooth signal flow. Microphones, preamplifiers, studio monitors, and audio interfaces are the essential components of a recording studio. Although these instruments accurately capture sound in the live room, it is also essential to reduce background noise that may interfere with the signal. Consequently, soundproofing materials are placed on the studio walls to minimize reflections and unwanted noise. Using low-noise recording equipment together with soundproofing allows the signal to flow cleanly.
Level Metering, dB, and Headroom
Level metering, dB, and headroom are essential concepts in audio engineering and sound transmission. Sound pressure levels and signal strength are expressed in dB, a logarithmic scale, while level metering is the measurement of signal level using specialized meters. Peak meters are crucial for preventing clipping, the distortion that occurs when a signal exceeds the maximum level a system can handle. Headroom is the difference between an audio signal's nominal level and the system's clipping point: the nominal level is the level at which a sound system is designed to operate, and the clipping point is the maximum level it can handle without distortion. dB, headroom, and level metering are therefore essential considerations when recording audio.
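In digital systems these ideas are often expressed in dBFS (dB relative to full scale), where the clipping point sits at 0 dBFS. A minimal sketch, assuming samples normalized so that 1.0 is full scale:

```python
import math

def peak_dbfs(samples):
    """Peak level of a block of samples in dB relative to full scale,
    where 1.0 is the clipping point (0 dBFS)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

def headroom_db(samples):
    """Headroom: the distance from the measured peak up to 0 dBFS."""
    return -peak_dbfs(samples)

headroom_db([0.1, -0.5, 0.25])  # peak of 0.5 → about 6 dB of headroom
```

Keeping several dB of headroom during recording leaves room for unexpected transients, such as a sudden loud drum hit, without pushing the signal past the clipping point.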
Conclusion
Sound is a wave propagated through the vibration of particles and exhibits various properties. Solids transmit the wave fastest because their particles are closely packed. The harmonic series, pitch, frequency, amplitude, and perceived loudness are the significant properties of a sound wave. Recent technological advancements have made it necessary to convert analog audio signals to digital and vice versa. When recording audio, it is important to consider the noise floor, dB levels, headroom, level meters, and SNR.