"PC" indicates a Windows or Mac system
"DAW" (Digital Audio Workstation") indicates a PC configured as an audio recording system
"CPU" is the central processing unit of a computer
"DSP" are Digital Signal Processors, extra processing power usually found on PCI cards (such as Universal Audios UAD patform) and FireWire hardware plug-in DSP farms (such as TC Electronics PowerCore FireWire device).
In the good old days of analogue linear (tape based) recording we put up with a lot ....
- Wow and flutter
- Cross talk
- Limited editing
- Lining up
- Noise reduction systems
- Cost of tape
- Tape machine maintenance
- Head cleaning
... need I go on? It's a wonder anyone thinks of the analogue era with any kind of affection. But I'll tell you one problem we didn't have to put up with. A problem so fundamental and intrusive to the creative process that it often seems incredible that we ever embraced non-linear (hard disk) digital recording at all ...
Latency, what is it good for?
... absolutely nothing, say it again! You see, it turns out that the analogue recording chain ...
Soundwave > Microphone transducer > Mic cable > Mic pre-amp > Mixer bus > Tape monitor circuit > Mixer tape monitor returns > Master module > Amp > Speaker > Soundwave > ear (phew!) ...
... imposes no audible delay at all.
Amazing really! But as soon as our precious audio signals start passing through the software and hardware components of a digital recording system, time delays start to build up.
What is latency?
Simply put, latency is the time delay between a digital signal (in our case MIDI or audio) entering a system, passing through it and emerging on the other side. If the delay is noticeable, it may make recording a musical performance very difficult, or at least uncomfortable. Audible latency is usually not the product of a delay in one system component, but the cumulative delay of many components.
Delays of more than 5-6ms (milliseconds, thousandths of a second) can be audible and present problems. I personally find delays of 9-12ms intolerable when I am recording guitar, but tolerable for an organ or string performance.
Latency is a problem in real-time recording situations. Almost uniquely, audio recording on a PC is just such a situation. Video isn't, because sound and pictures are recorded in-camera in real time and then transferred to a PC for editing and processing later.
There are 2 primary scenarios in which latency is a problem ...
1. Audio recording and software monitoring
Latency is a problem most often associated with monitoring during the PC / DAW recording process, because this is when it is most evident. The task of recording an audio performance into a DAW, where it may be processed in some way (compression, reverb etc) and recorded to the hard drive, requires that the resulting audio is sent back out of the DAW to a monitoring system, so that it can be heard during the performance.
2. Playing software instruments from a controller (keyboard)
Latency is also a problem when playing software generated instruments (plug-ins) from an external (MIDI) controller keyboard. This affects keyboards transmitting MIDI performance messages via a MIDI or USB connection to the DAW. Because the software instruments are being created using CPU (or DSP) processing power, they will be subject to CPU processing delays, and OS output buffering delays too.
What causes latency?
Latency is caused by ...
- a software processing task which takes a finite time to complete
- a delay imposed when a task has to wait for system/processing resources (CPU/DSP) to complete a prior task
- the time taken for a signal to travel between 2 components of a system
In a DAW, latency is the result of ...
- The time taken to convert a signal from analogue to digital
- The time taken for a signal to enter the OS / application environment (buffer)
- The time taken for a MIDI signal to reach a software instrument
- The time taken for a signal to be processed (eq, compression etc)
- The time taken for a software instrument to be realised/processed
- The time taken for a signal to be sent to and received back from a DSP device
- The time taken for the DSP device to process a signal or realise/process a software instrument
- The time taken for a signal to exit the OS / application environment (buffer)
- The time taken to convert a signal from digital to analogue
- Poorly written audio drivers, OS elements and applications which take more time than they should to execute
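Because these delays are cumulative, even individually small contributions add up to a clearly audible round trip. A rough budget sketch (all of the figures below are assumed, illustrative values, not measurements from any particular system):

```python
# Illustrative round-trip latency budget for a software-monitored DAW.
# Every figure is an assumed ballpark value in milliseconds.
components_ms = {
    "A to D conversion": 1.5,
    "OS / application input buffer (256 samples @ 44.1kHz)": 5.8,
    "Plug-in processing (EQ, compression)": 1.0,
    "OS / application output buffer (256 samples @ 44.1kHz)": 5.8,
    "D to A conversion": 1.5,
}

total = sum(components_ms.values())
for name, ms in components_ms.items():
    print(f"{name}: {ms:.1f} ms")
print(f"Round-trip total: {total:.1f} ms")  # 15.6 ms, well past the 5-6ms threshold
```

Note that no single component here is dramatic on its own; it is the sum that crosses into audible territory.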
A DAW consists of a number of hardware and software components whose job it is to facilitate the creation, recording, processing and replay of audio signals. Almost all the components of a DAW create some latency, some insignificant, some highly audible.
| Insignificant latency | Potential audible delay | Audible delay |
| --- | --- | --- |
| A to D converter (less than 4ms) | FireWire connection | A software instrument (plug-in) running on a PC CPU, PCI card or FireWire DSP outboard device does not respond immediately to data (such as note-on) from a controller (such as a music keyboard). |
| D to A converter (less than 4ms) | USB connection | An effect or dynamic process running on a PC CPU, PCI card or FireWire DSP outboard device takes time to be created/processed. |
| MIDI cables (less than 15m) | USB microphone | A real-time modelling process, such as a guitar amp emulation, running on the CPU, PCI card or FireWire DSP outboard device delays the signal it is processing. |
| Digital hardware mixer | PC / DAW operating system input buffer | |
| MIDI interface | PC / DAW operating system output buffer | |
NOTE: Dedicated hardware devices
Devices such as hardware samplers, digital mixers and keyboard workstations don't suffer from noticeable latency because they run optimised operating systems dedicated only to the task at hand.
When is latency audible?
Here are a few recording workflow scenarios, contrasting those where latency is audible with those where it is not ...

| Latency audible | Latency not audible |
| --- | --- |
| When a performance is being recorded with a microphone and a DAW and the signal is passing through the system to be monitored. | When recording sound into a hardware sampler. |
| When a guitarist is being recorded with a DI box and a DAW and the signal is passing through the system to be monitored. | When routing sound through a digital mixer. |
| When an electronic instrument is being recorded with a DAW and the signal is passing through the system to be monitored. | When monitoring sound through a DAT or CD-R digital 2-track. |
| When a signal passing through a DAW is being processed by the CPU in some way, such as EQ or compression (especially if the processor employs "look-ahead"). | When playing a guitar through a digital effects hardware device. |
| When a signal passing through a DAW is being sent to a DSP device (such as a PCI card or FireWire outboard device) to be processed in some way, such as EQ or compression. | When using a tape or disk based hardware multitrack recorder such as an ADAT or Otari RADAR system. |
| When MIDI performance data generated by an external hardware controller is being sent to a software instrument (plug-in) in a DAW. | When recording with a keyboard workstation such as a Korg OASYS. |
PCs and DAWs
CPU / OS delays
It may be clear to you by now that latency is a problem most often associated with PCs that are running DAW software in complex operating systems such as Windows and Mac OS. This is because these systems were not primarily designed for real-time audio processing.
There are 2 fundamental problems WITHIN PCs / DAWs ...
- At any one time, there are many processes occurring in a PC, some are background processes and some are tasks the user is requesting
- All processing within the system will be handled by the CPU and the CPU can't do everything at once, so CPU time is allocated (there's a queue!)
Problems without (buffers)
Also, external signals (MIDI and audio) seeking to enter or leave the system can only do so under the supervision of the CPU, and these tasks must be added to the queue too. Whilst the signals are waiting to be dealt with, they reside in a so-called "buffer". Think of a buffer as a waiting room: the signals wait there until the CPU is free, at which point it processes all the data in the waiting room.
If, for the moment, we ignore latency caused by any software processes within the DAW (eq, compression, software instruments etc) and consider just the tasks of getting signals through (in AND out) of the system ...
| Buffer size | Samples (44.1kHz) | Time delay | Pros | Cons |
| --- | --- | --- | --- | --- |
| Big | 1024 (x 2) | 46ms | Smooth recording and playback of (probably) all signals. | Time delay too big for real-time monitoring. |
| Medium | 256 (x 2) | 11ms | Smooth recording and playback of most signals. | Moderate time delay may be disconcerting for some performers. |
| Small | 128 (x 2) | 6ms | Time delay should be acceptable for most performers. | Smooth recording and playback of only some signals. |
| Very small | 64 (x 2) | 3ms | Small time delay, which will not be noticeable to most performers. | Very few systems will be able to record and play back multiple signals without audio glitching; requires lots of CPU processing time; some systems will glitch with just a single signal being processed. |
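The time-delay column above follows directly from the buffer size and the sample rate: one buffer's worth of samples divided by the sample rate, doubled to cover both the input and output buffers. A minimal sketch of the arithmetic:

```python
def round_trip_latency_ms(buffer_samples: int, sample_rate: int = 44100) -> float:
    """Round-trip buffer latency: one input buffer plus one output buffer."""
    one_way_seconds = buffer_samples / sample_rate
    return 2 * one_way_seconds * 1000  # two buffers, converted to milliseconds

for size in (1024, 256, 128, 64):
    # Prints 46.4, 11.6, 5.8 and 2.9 ms, matching the rounded table figures.
    print(f"{size} samples -> {round_trip_latency_ms(size):.1f} ms")
```

This also makes clear why halving the buffer halves the delay, and why a higher sample rate (e.g. 96kHz) reduces the latency of a given buffer size at the cost of more CPU work.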
Because the processing power of a CPU is finite, it is common to expand the processing power of a PC with extra hardware DSP in the form of PCI expansion cards and FireWire connected external outboard. Whilst this allows more software instruments, dynamic processors and effects to run, it doesn't solve the problem of latency.
This is because ...
- signals must be transferred between the DSP and core OS / DAW application (more buffers!)
- the DSP hardware takes a finite time to complete its processes
- interconnection (FireWire etc) isn't instantaneous
Some dynamic processor plug-ins, such as compressors, use so called "look-ahead" technology to decide how to process a signal. They buffer and analyse an incoming signal over a period of time before deciding how to process it.
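The delay a look-ahead process adds is simply its analysis window expressed in samples: the plug-in must hold back that many samples before it can emit any output. A quick sketch (the 5ms window is an assumed, typical value, not taken from any specific plug-in):

```python
SAMPLE_RATE = 44100   # Hz
LOOKAHEAD_MS = 5.0    # assumed analysis window for an illustrative compressor

# The plug-in buffers this many samples before producing output,
# so the whole signal path through it is delayed by the same amount.
lookahead_samples = int(SAMPLE_RATE * LOOKAHEAD_MS / 1000)
print(f"Added latency: {lookahead_samples} samples ({LOOKAHEAD_MS} ms)")
```

Well-behaved plug-ins report this figure to the host so the DAW can delay-compensate other tracks at mixdown, but during live software monitoring the delay is still heard.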
Latency solutions and work arounds
If you are using a DAW with an audio interface, and have no other way of monitoring the audio signals you are recording (such as a cue or monitor mix generated by an interface or separate mixer), you will have to use round-trip software monitoring and find ways to optimise the system to reduce the latency.
Optimise your system
Optimise your system by removing OS elements and applications which take up background CPU time. Consider moving to a Mac: OS X has a system layer called Core Audio which has been designed to facilitate audio processing.
Change your buffer settings
Change your buffer settings between recording/tracking and mixing. During recording, disable as many plug-in processes as you can and try to get your buffer down to 64 or even 32 samples.
Buy the best hardware and drivers you can afford

Buy the best CPU, DSP and DAC technology you can afford from companies who write the best and most efficient drivers. The more processing power you have, the lower the achievable latency: the whole system works faster, and the input and output buffers can be set to smaller sizes. Well-written driver software helps too. With a fast, modern multi-core computer, you should be able to run your audio buffer at 64 samples or less.
Choose the fastest computer interconnect technology
At present (June 2013) we are talking about a bespoke PCIe audio interface card, e.g. Avid (Pro Tools), Focusrite (RedNet), RME or Apogee, that connects to external (rack) breakout hardware via either a bespoke connector or something fast such as MADI, Ethernet, ADAT or Thunderbolt. Systems such as these tend to have well-written, high-performance drivers.
Low latency mode
Use software that offers a low latency mode. This usually involves disabling latency inducing processes during recording to minimise the round-trip latency through the system.
Copy the analogue signal
When recording audio, don't use software monitoring; disable it in the DAW. Split/copy the audio signal before it enters the DAW's audio interface A to D converters and use the copy as a real-time monitoring signal. You can do this by ...
- Using a soundcard/audio interface with a "no latency" or "direct monitoring" circuit.
- Using a mixer to split/copy the signal, sending it to the master outs (headphones) and to the groups/soundcard inputs simultaneously.
- Splitting/copying the signal with a lead or patchbay, then combining the copied signal and the DAW outputs in a monitor controller (such as the Mackie Big Knob).
The disadvantage of these solutions is that you won't be able to hear the signal with DAW effects or processes in place as you record, and they don't solve the latency problems of playing software instruments.
Disable software processing whilst recording
Simple really, avoid latency by ensuring you don't pass your audio through plug-ins as it is being recorded. Leave effects and processing until the mix. Your DAW may have a so-called low-latency mode to facilitate this.
This won't solve your buffering problems though, and if your plug-in is an intrinsic part of the sound (such as guitar amp modeling) you might find it impossible to achieve the right performance.
Don't use a PC based DAW!
Sorry, not very helpful.
In reality, latency cannot be entirely eliminated no matter how much money you throw at the problem, but in the future it should be possible to reduce it to a point (< 32 samples) where it is no longer a practical problem.