Day three was all about audio, and I have to admit, this was my favorite subject of the week. I had worked with audio in the past, but I never really understood the processes taking place. The numbers and waveforms in programs like Audacity and Adobe Audition can be a bit daunting. Thankfully, Robin presented the material in a clear and compelling way, taking us step by step through the process. This class tied in with the first day, as we worked more directly with the numbers and mathematical concepts of bits and bytes. Because waveforms are visual representations of sound, it was easier to see how they related directly to mathematical data points.
To help us work through the process of editing audio, Robin played a single Radiohead track in a variety of ways. Reducing the sampling rate had a direct impact on the sound quality of the track, moving it from clean and crisp to dull and distant in tone. Since both are measured in Hertz (Hz), it was difficult at first to see how audio frequency and sampling rate differed from one another.
Audio frequency is the vibration, the wave itself that is the actual sound. Sampling rate is the number of times per second the wave is measured. Humans can hear a range of audio frequencies between 20 and 20,000 Hz. To turn these waves into digital data that represents the sound, the waves must be measured at at least twice the highest frequency, around 40,000 Hz. This high sampling rate is necessary to capture the waves at their crests and their troughs, giving a full picture of the sound. Capturing the wave at the same rate as its audio frequency would produce only one data point per cycle, always at the same spot on the wave, and the sound would not be dynamic or representative of the original.
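The difference is easy to see in a few lines of code. Here is a minimal Python sketch of the idea (the 1 kHz tone and the sampling rates are illustrative numbers I chose, not values from the class): sampling a sine wave well above twice its frequency captures its crests and troughs, while sampling it at its own frequency lands on the same spot of every cycle and flattens the sound away.

```python
import math

def sample_sine(freq_hz, rate_hz, n_samples):
    """Measure a sine wave of freq_hz, rate_hz times per second."""
    return [math.sin(2 * math.pi * freq_hz * n / rate_hz) for n in range(n_samples)]

# A 1 kHz tone sampled at 48 kHz (well above twice its frequency):
# the data points reach the crests (+1.0) and troughs (-1.0),
# so the digital data describes the full wave.
well_sampled = sample_sine(1000, 48000, 48)
print(max(well_sampled), min(well_sampled))

# The same tone sampled at its own frequency, 1 kHz: one measurement
# per cycle, always at the same phase, so the data is a flat line.
undersampled = sample_sine(1000, 1000, 48)
print(max(abs(s) for s in undersampled))
```

Running this, the well-sampled list swings between roughly +1 and -1, while every undersampled value is essentially zero, which is exactly why the sampling rate has to outrun the audio frequency.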