Electronics seems to be a study in tradeoffs. Take digital recording: it’s never been easier to edit and fix in the mix, but the recording process itself presents new problems. The most significant is latency. To create a dynamic performance, not to mention play in time, a musician depends on the immediate feedback produced by their instrument. With digital recording, CPU processing time delays that essential musical feedback.
1. Acceptable latency values for different recording purposes
Vocals: These are the most difficult to deal with, since anyone listening to vocals in real time will be wearing headphones and therefore hear the sound inside their head. A latency of even 3ms can be disconcerting.
Drums and Percussion: Most drummers will prefer to work with a latency of 6ms or under, which should provide an ‘immediate’ response.
Guitars: Electric guitarists generally play a few feet from their stacks, and since sound travels at a little over one thousand feet per second, each millisecond of delay is roughly equivalent to listening from a point one foot further away. So, if you can play an electric guitar twelve feet from your amp, you can easily cope with 12ms of latency.
Keyboards: Even on acoustic pianos there’s a delay between hitting a key and the hammer mechanism striking the string, so a smallish latency of around 6ms should be acceptable. While some keyboardists claim to hear a 5ms discrepancy in their performances, the vast majority of musicians are unlikely to worry about 10ms, and many should find a latency of 23ms or more perfectly acceptable with most sounds, especially pads with longer attacks. Keep in mind that MIDI keyboards and interfaces introduce latency as well, so it’s wise to keep latency as low as possible.
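The guitarist’s rule of thumb above (one foot of listening distance per millisecond of delay) is easy to sketch in a few lines of Python. The constant and function names here are illustrative, not from any particular audio library:

```python
# Rule of thumb: sound travels roughly 1 foot per millisecond.
# (The true speed in air at room temperature is closer to 1.125 ft/ms.)
FEET_PER_MS = 1.0

def equivalent_distance_ft(latency_ms: float) -> float:
    """Listening distance that 'feels' the same as a given monitoring latency."""
    return latency_ms * FEET_PER_MS

# A 12ms latency feels like standing about 12 feet from your amp.
print(equivalent_distance_ft(12.0))  # 12.0
```

This is why electric guitarists, who are used to standing some distance from their speakers, tend to tolerate higher latencies than vocalists on headphones.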
2. Calculating latency
Some audio interface manufacturers make life easy by providing playback latency values in milliseconds at the current sample rate in their Control Panel utilities. However, some of these readouts can present incorrect values. If your audio application or sound card provides a readout of buffer size in samples, it’s easy to calculate latency yourself: divide the buffer size by the sample rate. For example, in the case of a typical ASIO buffer size of 256 samples in a 44.1kHz project, latency is 256/44100, or 5.8ms, normally rounded to 6ms. Similarly, a 256-sample buffer in a 96kHz project would give 256/96000, or roughly 2.7ms of latency.
3. Practical applications and latency
If you’re streaming samples from a hard drive for an application such as Gigastudio, HALion or Kontakt, using a drive with a low access time will help you achieve the minimum latency.
During playback and mixdown, latency largely determines the time it takes to begin hearing your song after you press the play button. Few people will notice a gap even as large as 100ms in this situation.
If you are running a pre-mastering application such as Wavelab or Sound Forge, you don’t often need to work with low latency. Few people notice a slight lag time between altering a plug-in parameter and hearing the result when mastering, even when the lag is 100ms or more. The only time this can become bothersome is when you are bypassing a plug-in for an A/B comparison of the audio effect. Ideally, the change should occur as soon as you click the bypass button, but most people won’t notice a delay of 23ms.
When using a hardware MIDI controller for automation you may not need low latency, but it’s generally preferable when inputting real-time synth changes such as fast filter sweeps, to ensure the most responsive interface between real and virtual knobs.
When monitoring vocals, you may be able to use zero-latency monitoring for the main signal and still add a reverb plug-in with a latency of 23ms or more without causing delay problems, as long as you use the effect totally wet.