Preamp difference: if it's not the frequency, not the slew rate, and not the harmonics, what is it?

GroupDIY Audio Forum

Neve gear was OK - I never said it wasn't. But I don't think many of their approaches were unique - RN's reference to "power amplifiers" in line output stages was certainly not unique or limited to his gear.

Skin effect was en vogue in the 1980s and '90s ... because despite having first been postulated nearly a century earlier, it had gained awareness as a new "thing" which no one could argue with. I did specifically emphasise that I'd lifted the table from Wiki "because I'm lazy". I am. And my knowledge predated Google's establishment by some decades. Factually, skin effect has even less to do with 50 or 60 Hz power transmission than it does with anything at AF or higher.
Nothing is unique in the world of electronics - I think it’s more that each new design development is an adaptation or progression of a previous one especially in audio. Many early guitar amp designs were simple extracts from the tube handbook’s application circuits with modifications. The same could be said later for audio circuits using transistors and opamps.
 
Skin effect was en vogue in the 1980s and '90s ... because despite having first been postulated nearly a century earlier, it had gained awareness as a new "thing" which no one could argue with. I did specifically emphasise that I'd lifted the table from Wiki "because I'm lazy". I am. And my knowledge predated Google's establishment by some decades. Factually, skin effect has even less to do with 50 or 60 Hz power transmission than it does with anything at AF or higher.
except...
wiki said:
It is also important at mains frequencies (50–60 Hz) in AC electric power transmission and distribution systems.

Wires used for utility power distribution often used steel cores for strength, with aluminum conductors wrapped around that core for lower resistance. The higher resistance steel core doesn't matter as much because due to skin effect most of the current is flowing in the aluminum.
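For a sense of scale, the standard skin-depth formula is easy to check directly. A minimal sketch, assuming copper's textbook resistivity (the exact figure varies with alloy and temperature):

```python
import math

def skin_depth(freq_hz, resistivity=1.68e-8, mu_r=1.0):
    """Skin depth in metres: delta = sqrt(rho / (pi * f * mu)).
    Defaults assume copper (rho ~ 1.68e-8 ohm*m, non-magnetic)."""
    mu = mu_r * 4 * math.pi * 1e-7  # permeability in H/m
    return math.sqrt(resistivity / (math.pi * freq_hz * mu))

# Skin depth at mains frequency vs the top of the audio band:
print(f"60 Hz:  {skin_depth(60) * 1000:.1f} mm")    # roughly 8.4 mm
print(f"20 kHz: {skin_depth(20e3) * 1000:.2f} mm")  # roughly 0.46 mm
```

At 60 Hz the skin depth is around 8 mm, which is smaller than the radius of large utility conductors but bigger than ordinary hookup wire; hence the steel-core/aluminium construction mentioned above, and hence why the effect matters far more to power distribution than to audio-gauge wiring.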

JR

 
The main issue here is that music is not sound.

Nobody pays $100 for a ticket to go to a theater and listen to a tone generator (OK, maybe I missed some avant-garde stuff).

You can measure the sound, but not the music, it's all about the emotional impact.

If we are dealing with recorded music, that piece of gear sits between the original performance and the listener; it is part of the pathway transferring emotions from the performer to the listener.

I don't think it's in the waveform; it's a phenomenon that occurs when you press the play button. That transfer of emotional "energy" only happens when there is a listener, and the equipment becomes part of that chain when the music is playing.
 
Unless I missed something, that's just CD quality at best, i.e. 44.1 kHz, with many tracks at significantly lower sample rates. Let's not confuse digital sample rates with frequency response.
Maybe you missed the "HD" part?
24-bit 192 kHz, 352.8 kHz.
yeah, let's not confuse the Nyquist theorem.
If those sample rates are from masters, and not just upsampled from 44.1/16 it could be good.
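One way to sanity-check whether an "HD" file is genuinely high-resolution or just upsampled from 44.1/16 is to look for spectral energy above 22.05 kHz. A rough sketch using synthetic signals (a real file would need decoding first, and some genuine hi-res material simply has no ultrasonic content, so an empty top octave isn't proof of upsampling):

```python
import numpy as np

def energy_above(x, fs, cutoff_hz):
    """Fraction of spectral energy above cutoff_hz (rough upsampling check)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return spec[freqs > cutoff_hz].sum() / spec.sum()

fs = 96_000
t = np.arange(fs) / fs  # one second of samples
# "Genuine" hi-res: content at 1 kHz plus some at 30 kHz.
genuine = np.sin(2 * np.pi * 1_000 * t) + 0.1 * np.sin(2 * np.pi * 30_000 * t)
# "Upsampled" from 44.1k: nothing above ~20 kHz survives the original capture.
upsampled = np.sin(2 * np.pi * 1_000 * t)

print(energy_above(genuine, fs, 22_050))    # clearly nonzero
print(energy_above(upsampled, fs, 22_050))  # essentially zero
```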
 
On the Blue Coast Music site, you can download albums recorded in DSD, mixed and mastered in analog, and then captured again in DSD. They offer the same music at various sample rates, in DSD and .wav.

Some free samples are offered so you can listen to their original and downsampled versions, or create your own. I’m not affiliated with them, but I find it an interesting opportunity.

Here is their site:

https://bluecoastmusic.com/hifilife...udio-dsd-wav-flac-and-mqa-with-free-downloads

Adam
 
As an aside, I recently had an album (that I co-produced and mixed) mastered by two separate and well-known mastering engineers. One uses an analog chain and the other uses digital. The music was typical Top40 rock/country - lots of processing and pushed a bit into saturation.

Interestingly, there were only tiny overall differences between the two masters (24-bit, 44.1 kHz sources and masters). The artist, my co-producer and I were all happy with both masters, and we wound up using some of engineer 1’s masters and some from engineer 2.

More interesting is that the 16-bit 44.1 versions from engineer 1 sounded noticeably different from his 24-bit masters, and also different from the other engineer’s 16-bit files. Engineer 2’s 24-bit and 16-bit files sounded identical to each other.

When I asked engineer 1 why his 16-bit files sounded different than his 24-bit files, he said “they won’t sound different,” and wouldn’t discuss it any further. Both mastering studios used Wavelab 11 to capture the final master and to export the files. Obviously they do something different.

This is not a difference in dither, but an obvious difference in the overall presentation of the song. Very strange, and based on his response and disposition, a reason not to return to engineer #1.

I was kind of shocked at how similarly the masters from two engineers turned out. I was similarly shocked at how dismissive one was when he was questioned about a clear technical issue with his exports.
 
Sounds like he’s either precious about his method or he’s made a boo-boo and doesn’t want to admit it. 24-bit and 16-bit exported via the same chain should sound the same. I do this all the time for clients, and there is no discernible difference between 16-bit and 24-bit.
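For what it's worth, the arithmetic supports that position: a plain 24-to-16-bit reduction, with or without TPDF dither, leaves an error roughly 96-101 dB below full scale, far too low to change the "presentation" of a song. A minimal sketch (a simplified quantizer, not any particular DAW's export path):

```python
import numpy as np

rng = np.random.default_rng(0)

def to_16bit(x, dither=True):
    """Quantize float samples (-1..1) to a 16-bit grid, optionally with TPDF dither."""
    lsb = 1.0 / 32768.0
    if dither:
        # TPDF dither: sum of two uniform sources, +/- 1 LSB peak.
        x = x + (rng.uniform(-lsb, lsb, len(x)) + rng.uniform(-lsb, lsb, len(x))) / 2
    return np.clip(np.round(x / lsb), -32768, 32767) * lsb

fs = 44_100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)  # stand-in for a "24-bit" source

for dither in (False, True):
    err = to_16bit(x, dither) - x
    rms_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"dither={dither}: error RMS = {rms_db:.1f} dBFS")
```

If a 16-bit export sounds obviously different from the 24-bit one, something other than the word-length reduction changed along the way.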
 
In the comparison test, each device shares the same A-D and D-A, including the benchmark re-record done without any preamp involved, which basically eliminates the interface from the test: the path is identical for each, so any differences perceived come only from the DUT.
What I found in the DAW comparison, with say Logic Pro vs Cubase, was that the perceived difference sounded like mainly reverb signal content - were the pan laws different in Logic from record to playback? Certainly you would expect any professional DAW to give back exactly what you put in if everything is set to unity gain. I repeated this same test with other software after seeing/hearing the results of the first, and one thing was apparent: the errors were mainly in the mid-high regions; LF differences were smaller but still apparent, and no two were the same. These differences are not necessarily distortion, but possibly errors in reproduction, whether in a DAW’s audio engine or a preamp.
If the A-D adds distortion, wouldn’t the DUT distort that distortion?
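On the shared-converter point: the usual way to quantify such a comparison is a null test - subtract the no-preamp benchmark capture from the DUT capture and measure the residual. A sketch under the assumption that the two captures are already time- and gain-aligned (in practice the alignment is the hard part):

```python
import numpy as np

def null_depth_db(reference, dut_capture):
    """Null-test residual: RMS of (DUT - reference), relative to reference RMS.
    Assumes the two captures are already time- and gain-aligned."""
    residual = dut_capture - reference
    return 20 * np.log10(np.sqrt(np.mean(residual ** 2)) /
                         np.sqrt(np.mean(reference ** 2)))

fs = 48_000
t = np.arange(fs) / fs
ref = np.sin(2 * np.pi * 440 * t)
# Simulated DUT pass: a tiny gain error plus a little added noise.
dut = 1.001 * ref + 1e-4 * np.random.default_rng(1).standard_normal(len(ref))

print(f"null depth: {null_depth_db(ref, dut):.1f} dB")
```

A deep null (say -70 dB or better) means the two paths are close to identical; shallower nulls localise how much of the capture the DUT actually changed.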
 
Geoff and Rupert both agree, as I do; we three have had this discussion together many times. Things that affect the tone:


One additional thing: with a mishmash of preamps and EQs, you can also split up a good mix, because you lose some coherency that was there when all the EQs and pres were the same. The phase shifts in identical units do tend to glue things together better than two different sets of things…
I have been saying this for years to all my students and any engineer who will listen!

It's great to have lots of colour and flavour choices in outboard and plugin preamps and EQs today, but if you want to know why mixes don't always sound as cohesive as the 'big console' mixes, either modern or old school... that consistency of preamp/EQ/console guts across every single track of your mix has a whole heck of a lot to do with it.

I've argued before that in some ways you could more easily turn out a cohesive, gelled-sounding mix that was tracked and mixed entirely on a Mackie 8-bus with no outboard EQ than you could with racks full of dozens of the highest-end preamps and EQs, all different models.
 
I'm not sure what this has to do with path performance inside a single SKU, but a complex mix from a multi-mic'd stage, involving stage wash and signal leakage into nearby mics (like a drum kit with a half dozen mics), means that some of the same signals, when using outboard gear, can pass through dramatically different audio paths before being recombined into a final mix.

Phase shift is generally not very audible, but the same signal phase shifted then recombined with itself can cause varying frequency dependent additions or subtractions. I do not expect this to be significant unless the two signals are at similar levels when recombined (for deepest cancellations). This is most likely to affect long wavelength low bass waveforms (like a drum leaking into a bass mic, or vice versa).

I would expect this to be pretty subtle if at all, so yes possible in theory but unlikely to be widely significant.
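To put rough numbers on it, a sketch with an idealised 1 ms path difference between two mics picking up the same source (real leakage is filtered and diffuse, so this is a worst case). At the frequency where the delay equals half a period, equal-level leakage nulls almost completely, while leakage 20 dB down barely dents the level:

```python
import numpy as np

fs = 48_000
f = 500        # at 500 Hz, a 1 ms delay is half a period: worst-case cancellation
delay = 0.001  # 1 ms path difference between two mics on the same source

t = np.arange(fs) / fs
direct = np.sin(2 * np.pi * f * t)
leaked = np.sin(2 * np.pi * f * (t - delay))  # same source, delayed

loss_db = {}
for leak_level in (0.9, 0.1):
    summed = direct + leak_level * leaked
    loss_db[leak_level] = 20 * np.log10(np.sqrt(np.mean(summed ** 2)) /
                                        np.sqrt(np.mean(direct ** 2)))
    print(f"leak at {leak_level}: level change {loss_db[leak_level]:+.1f} dB")
```

Near-equal levels give a 20 dB hole; leakage at one tenth the level costs under 1 dB, which is why the effect only bites when the recombined signals are at similar levels.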

Of course opinions vary. 🤔

JR
 
There’s an interview with Rupert Neve worth reading - Geoff Emerick from Abbey Road would disagree, as would Rupert. Neve-designed consoles had 75 kHz-and-above bandwidth for a very good reason: distortion products from content in the octave above 20 kHz (up to 40 kHz), and even the octave above that, can fold back down into the perceivable audio spectrum. There are 3 parts to the interview - access parts 2 & 3 using the right-arrow tabs on the interview page. Part 1 deals with what I mention above.
Interview with Rupert Neve:
https://www.audiotechnology.com/features/interview/rupert-neve-interview-part-1
The OLD rule of thumb: record/sample at a rate 5× the highest frequency of the source you want to record. Sounds we do not hear AFFECT the sounds we can. (So in my book, if I want to record cymbals and all their glory, I am tracking that at 96k to 300k+ on that track.) Thx for the link...
 
It is worth being aware that tracking at 96 kHz does not necessarily mean a 48 kHz bandwidth. Despite offering many sample rates, many interfaces band-limit the input to 20 kHz.

Cheers

Ian
 
For an old rule of science....
www said:

Nyquist-Shannon sampling theorem

The Nyquist–Shannon sampling theorem is an essential principle for digital signal processing linking the frequency range of a signal and the sample rate required to avoid a type of distortion called aliasing. The theorem states that the sample rate must be at least twice the bandwidth of the signal to avoid aliasing.

Of course most modern A/D converters oversample, so don't lose too much sleep over the details.
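To see what aliasing actually does, a minimal sketch: a tone above Nyquist doesn't disappear, it reappears at the wrong frequency - which is why converters band-limit the input before sampling rather than relying on any "5×" margin:

```python
import numpy as np

fs = 44_100
t = np.arange(fs) / fs  # one second of samples

# A 30 kHz tone is above Nyquist (22.05 kHz); it folds down to fs - 30k = 14.1 kHz.
x = np.sin(2 * np.pi * 30_000 * t)
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
peak_hz = freqs[spectrum.argmax()]
print(f"tone generated at 30000 Hz, spectral peak found at {peak_hz:.0f} Hz")
```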

JR
 
It's also worth being aware that theory and practice don't necessarily match. If the computer setup isn't up to the task, 44.1k or 48k often sounds better than 88.2k or 96k. It's less common now, but 5-10 years ago a project done at 44.1k often sounded much better. I'm assuming there were lots of calculation errors in the double-sample-rate versions.
 
I believe, from speaking with many reps at NAMM, that most interfaces have A-D chips that capture at DSD rates (11 MHz or higher), and the audio is then decimated to whatever PCM sample rate and bit depth is requested. In that case, the quality of the A-to-D conversion should be similar at any sample rate. In the early 2000s, it was certainly not always the case that higher sample rates sounded better than lower ones (or vice versa). I had some high-end converters that sounded (and measured) better at lower sample rates.

I haven't looked at any chip specs since the AKM AK5578, which supported 11.2 MHz DSD and 32-bit/768 kHz sampling, but I think we might be beyond that chip at this point.
 
I was talking about processing inside the DAW not the sound of converters at different sample rates. If you are mixing ITB the processing power has to be up to the task.

The Prism AD2 I use sounds pretty much the same at all sample rates, and that’s about a 30-year-old design. Some converters do sound noticeably different at different sample rates.
 
Ian, I believe it was you... perhaps someone else?... who had documented results of A/D converters having a "hard-coded" roll-off at 20k-ish regardless of the sampling rate. Or am I imagining something? Thanks for any clarification...

Bri
 