
Sweetwater Forums [Archived]

After 15 years of great discussions, the Sweetwater Forums are now closed and preserved as a "read-only" resource. For discussions about current gear, check us out on Facebook, YouTube, inSync, and our Knowledge Base.

24-bit 48kHz versus 24-bit 96kHz

SAPA

What are the pros and cons, and how much do you really gain in what we hear, or is the gain just on the editing side?
June 26, 2010 @02:41pm
DAS

Controversial subject. I think I can safely say that most users will agree that if there is any sonic difference it is subtle. How much difference you might perceive depends on what converter you have, your monitoring situation, the type of music, and ultimately your ears.
The cons are that 96K sampling requires more bandwidth (more RAM, hard drive space, a faster computer, etc.) for the same number of tracks and plug-ins you use.
Your best bet is to try it and see whether it sounds any better for you, and then go from there based on what you hear.
June 26, 2010 @03:34pm
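DAS's bandwidth point is easy to put in rough numbers. A back-of-the-envelope Python sketch (the function name is mine, not from any audio library):

```python
# Rough data rate for uncompressed PCM audio:
#   bytes/sec = sample_rate * (bit_depth / 8) * channels

def pcm_rate_mb_per_min(sample_rate, bit_depth=24, channels=1):
    """Megabytes per minute for one uncompressed PCM track."""
    bytes_per_sec = sample_rate * (bit_depth // 8) * channels
    return bytes_per_sec * 60 / 1e6

print(round(pcm_rate_mb_per_min(44_100), 1))  # ~7.9 MB/min per mono track
print(round(pcm_rate_mb_per_min(96_000), 1))  # ~17.3 MB/min per mono track
```

So 24/96 costs a bit more than twice the disk space and throughput of 24/44.1 for every track, before any plug-in CPU load is even considered.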
TimOBrien

Do you have a multi-million-dollar treated studio and mega-computers that can take the strain of huge files with 96k-able plugins that won't make the whole CPU roll over and die after more than a handful of tracks?
If not, just lock everything down at 24-bit/44.1kHz and forgetaboutit....
June 26, 2010 @06:27pm
rhardman

96K will give you a 48kHz bandwidth... no monitor can reproduce that, and to top it off, human hearing maxes out at 22K if you haven't lost ANY hearing.
"something only dogs can hear"
August 1, 2010 @07:30am
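rhardman's figure follows directly from the Nyquist limit: a converter can only represent content below half its sample rate, and anything above that folds back down (aliases) unless it is filtered out first. A minimal Python sketch of the folding arithmetic (the helper name is my own):

```python
# Content above the Nyquist limit (sample_rate / 2) doesn't simply vanish;
# without an anti-alias filter it folds back to a mirror-image frequency.

def alias_frequency(f, sample_rate):
    """Apparent frequency of a pure tone at f after sampling at sample_rate."""
    folded = f % sample_rate
    return min(folded, sample_rate - folded)

print(alias_frequency(30_000, 48_000))  # 18000 -- folds into the audible band
print(alias_frequency(30_000, 96_000))  # 30000 -- captured as-is at 96k
```

In practice the converter's anti-alias filter removes such content before sampling, which is exactly why 48kHz material contains no information above 24kHz.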
yeahforbes

Disclaimer: I always use 44.1 or 48 KHz for all the reasons cited so far.
Devil's advocate:
Indeed we can't hear that top octave, but say you have 2 violins that each produce supersonic waves. When they play simultaneously, the airborne summing of their sounds could perhaps create undertones that are indeed within the audible range. If they played at the same time and you picked up the ensemble with a single mic at 44.1/48, those audible undertones would be recorded. If they were isolated with separate mics (summed post-conversion), or recorded at separate times with overdubbing, those undertones don't have a chance.
Former case: summing occurs pre-lowpass (lowpass referring to the nyquist limit) thus the audible results of summing inaudible content are preserved.
Latter case: summing occurs post-lowpass, thus there is no inaudible content from which audible undertones could be generated.
Solution: high resolution recording -- and not just the converters... EVERYTHING in the signal chain, from the mic to the pre and so forth, needs to have heard this supersonic stuff. Like that earthworks mic that goes to 40k.
--
In practice, the sonic difference between running a converter at different rates probably has little to do with what I've just described. It's likely to be a product of alternate jitter characteristics and such.
August 4, 2010 @12:24am
dpd

Disclaimer: I always use 44.1 or 48 KHz for all the reasons cited so far.
Devil's advocate:
Indeed we can't hear that top octave, but say you have 2 violins that each produce supersonic waves. When they play simultaneously, the airborne summing of their sounds could perhaps create undertones that are indeed within the audible range.

Disagree - the airborne summing of their sounds is simply amplitude summation. That cannot, and will not, produce sum and difference frequencies. For that to occur, there must be a non-linearity to provide sum and difference frequencies. I don't know of any common acoustic phenomenon that would produce said non-linearity.
August 4, 2010 @04:03am
yeahforbes

http://en.wikipedia.org/wiki/Beat_(acoustics)#Difference_tones
Check out the link "An interesting listening experiment" - pretty neat.
August 4, 2010 @07:51am
Dave Burris

Disagree - the airborne summing of their sounds is simply amplitude summation. That cannot, and will not, produce sum and difference frequencies. For that to occur, there must be a non-linearity to provide sum and difference frequencies. I don't know of any common acoustic phenomenon that would produce said non-linearity.

While it may not be by the same means, amplitude modulation of signals indeed produces audible beat frequencies. I will not argue that the mechanism is different than the non-linear artifacts of semiconductors (or tubes), but I can assure you that if you produce two frequencies 50Hz apart, the modulation envelope will produce an audible 50Hz beat frequency, and that it will subsequently combine with the other audio signals to produce other frequencies.
The simplified models we use to analyze and understand acoustic phenomena are representations of the largest influences on sound. Like any other phenomena represented by physics, we typically use the simplified formula and reasoning to infer important characteristics.
For the sake of this argument, I'll concede that I don't know of any physical acoustic phenomena that act as non-linear devices. But I can state pretty confidently that the non-linearity that's most obvious in acoustics is YOUR EARS. A quick review of the decibel and psycho-acoustics should refresh your memory that hearing is logarithmic, not linear.
August 4, 2010 @01:16pm
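The 50Hz envelope Dave Burris describes can be checked against the standard trig identity: the sum of two tones is an average-frequency tone whose amplitude envelope varies at the difference frequency. A small Python check:

```python
import math

# sin(2*pi*f1*t) + sin(2*pi*f2*t)
#   = 2 * cos(pi*(f1 - f2)*t) * sin(pi*(f1 + f2)*t)
# Two tones 50 Hz apart beat: the cosine envelope term cycles at 50 Hz.

f1, f2 = 1000.0, 1050.0
for t in (0.0001, 0.0007, 0.0013):
    direct = math.sin(2*math.pi*f1*t) + math.sin(2*math.pi*f2*t)
    envelope_form = 2*math.cos(math.pi*(f1 - f2)*t) * math.sin(math.pi*(f1 + f2)*t)
    assert abs(direct - envelope_form) < 1e-9
```

Note the identity is pure algebra: the spectrum still contains only f1 and f2, so the beat is an amplitude envelope, not a new spectral component; that distinction is exactly what the following replies turn on.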
DAS

Sum and difference (beat) frequencies that occur in the atmosphere are audible at the ear mainly because the ear's non-linearity is what allows them to occur. However, this requires that both frequencies that are beating with each other be audible. If the ear can't respond to one of them then it is highly unlikely the listener will perceive the sum or difference tones to any significant degree.
The above pertains to sounds that combine in the air, which is largely a linear environment. It does not necessarily pertain to sounds that combine in electronic equipment.
The overarching point here is that these beat tones are not a valid justification to record at higher sample rates. If you mix the signals before recording you will get your sum and difference tones to the extent that non-linearities in your equipment cause it to happen. If you record sounds in the atmosphere at high sample rates you are still capturing supersonic information that isn't going to do the ear any good. However, by capturing this supersonic information and then mixing it in electronic equipment (before or after recording) you can potentially PRODUCE sum and difference tones that would not have been present had those sounds combined acoustically in the air and been listened to in that environment (i.e. without being recorded or manipulated with non-linear electronic equipment).
August 4, 2010 @03:17pm
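DAS's point that a non-linearity is what PRODUCES genuine sum and difference components can be illustrated with the simplest non-linearity, a squaring term. A Python sketch (the frequencies are chosen purely for illustration):

```python
import math

# A linear medium only adds amplitudes, so two ultrasonic tones stay
# ultrasonic. Passing them through a non-linearity (here, squaring)
# generates real new components, including the audible difference tone:
#   (sin a + sin b)**2 = 1 - cos(2a)/2 - cos(2b)/2 + cos(a - b) - cos(a + b)

f1, f2 = 25_000.0, 26_000.0   # both inaudible; f2 - f1 = 1 kHz, audible
for t in (1e-5, 3e-5, 7e-5):
    a, b = 2*math.pi*f1*t, 2*math.pi*f2*t
    squared = (math.sin(a) + math.sin(b))**2
    expansion = (1 - math.cos(2*a)/2 - math.cos(2*b)/2
                 + math.cos(a - b) - math.cos(a + b))
    assert abs(squared - expansion) < 1e-9
```

The cos(a - b) term is a real 1 kHz component at the output even though neither input tone is audible; without the squaring term no such component exists.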
yeahforbes

by capturing this supersonic information and then mixing it in electronic equipment (before or after recording) you can potentially PRODUCE sum and difference tones that would not have been present had those sounds combined acoustically in the air and been listened to in that environment (i.e. without being recorded or manipulated with non-linear electronic equipment).

And of course, digital summing vs. analog... We're in beyond my head now!
August 4, 2010 @10:38pm
TimmyP1955

Sum and difference (beat) frequencies that occur in the atmosphere are audible at the ear mainly because the ear's non-linearity is what allows them to occur. However, this requires that both frequencies that are beating with each other be audible. If the ear can't respond to one of them then it is highly unlikely the listener will perceive the sum or difference tones to any significant degree.

Some loudspeakers use sum/difference of supersonic sound to create audible sound. This allows the loudspeakers to be exceedingly directional.
August 9, 2010 @05:08am
Dave Burris

Some loudspeakers use sum/difference of supersonic sound to create audible sound. This allows the loudspeakers to be exceedingly directional.

Please indicate a model or two that do this.
August 9, 2010 @11:49am
TimmyP1955

August 11, 2010 @02:32am
JeffBarnett

Yes, but...
The only commercial application listed in that article is Epcot Center. Let's say for the sake of argument that you are creating content for Epcot and it's going to be played back via this incredibly specialized technology. That still isn't an argument for recording at high sample rates. First of all, the sample rates we're discussing here (96 kHz) aren't high enough to produce the sort of ultrasonic effects that make these things work. They operate in the 60 kHz range, which would require a sample rate of at least 120 kHz.
But even if we were talking about 192 kHz, another common high resolution standard, it still doesn't make sense. With Audio Beam and Audio Spotlight devices, standard resolution audio content (20 Hz to 20 kHz, in other words) is fed to the device, then modulated on a 60 kHz carrier. The program material itself does not contain any ultra-sonic content. If anything, use in a system such as this is an argument for NOT recording at high sample rates, because ultrasonic content in the program material would cause problems in the modulation process. I wouldn't be surprised if the designers of these devices included a low-pass filter on the input to filter such potentially-troublesome content out.
The one (and only) really good argument for high sample rate recording I have heard in the 9 or so years this debate has been going on came from a biologist who was recording sounds made by insects in flight in the 30-50 kHz range, then slowing the recordings down to make them audible to human researchers.
August 11, 2010 @02:20pm
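The biologist's trick in the last paragraph is simple arithmetic: playing material back slower divides every frequency by the slowdown factor, so ultrasonic content captured at a high sample rate drops into the audible range. A trivial sketch (the function name is my own):

```python
# Playing a recording back N times slower divides every frequency by N.

def slowed_frequency(f, slowdown):
    """Apparent frequency after playback at 1/slowdown of the original speed."""
    return f / slowdown

print(slowed_frequency(40_000, 4))  # 10000.0 -- a 40 kHz insect sound, now audible
```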
beats4sale

Okay, now keep in mind I have nowhere near the knowledge of you guys when it comes to audio, so please be gentle when responding, but after reading all of the previous posts, it left me with one burning question: if our ears arguably can't differentiate the sonic difference between 48k and 192k, why in the hell are we led to believe that if we purchase these outrageously expensive converters, we will achieve "pro results?"
August 14, 2010 @09:20pm