MS mics placement

GroupDIY Audio Forum

Until current flows and then falls in the transformer primary for the 1st 1/2 cycle you can’t get an output voltage on the secondary
Really?
- If you looked at an impulse test, there should be a significant time lag of around 180°. In that case the signal at the transformer output of a dynamic or ribbon will lag timewise behind the signal from a condenser capsule by 90°, and no longer lead by 90°, unless the condenser uses a transformer as well, in which case the condenser mic output should also lag by 90° timewise. There are lots of transformerless condenser mics out there.
Try comparing a looped impulse audio signal, like a snare hit, at the input and the output of an audio transformer - the output voltage should lag behind the input voltage timewise if observed on a scope.
There can’t be an output voltage without magnetic induction occurring to energise the secondary winding, which relies on current flowing, collapsing and then reversing in the primary winding; only then can this induction produce a voltage at the output, and that takes time to occur. It’s misleading to say voltage “leads” the current at the output of a transformer. Phase-wise, yes (looking at a sine wave on a scope would make you see it that way, as all wavefronts look identical anyway); timewise, no: the output voltage doesn’t travel back in time to lead the current that created it, when the current hasn’t even started to flow for the leading wavefront.
Unless I’m completely mistaken in all of this???
You are negating evidence. Anyone with an oscilloscope can see you are utterly wrong.
You think that the output voltage can appear only after a complete half-cycle has been completed. Think of the consequences: the output signal would suffer a variable delay, of 25ms at 20Hz, 0.5ms at 1kHz and 25us at 20kHz. Transformation is instant, no time-travel there.
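The variable delays quoted above fall straight out of the half-period T/2 = 1/(2f); a quick Python check, purely illustrative:

```python
# Delay implied by a "full half-cycle before any output" model: T/2 = 1/(2f).
for f_hz in (20, 1_000, 20_000):
    delay_ms = 1_000 / (2 * f_hz)   # half-period in milliseconds
    print(f"{f_hz:>6} Hz -> {delay_ms:g} ms")
```

That prints 25 ms, 0.5 ms and 0.025 ms (= 25 µs), matching the figures in the post, and shows why such a frequency-dependent delay would be absurd for a transformer.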
When putting dual mics at one location to get the sonic benefits of each, I do a simple slap-test recording to see if there’s a phase/time lag in one; the placement adjustment can then be calculated from the time difference between the two waves: (Δt in ms ÷ 1000) × 1100 ft/s. Recording engineers have been doing this for years - it takes minutes to do.
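That slap-test arithmetic can be sketched in a couple of lines of Python; the 1100 ft/s figure comes from the post above, and the function name is just for illustration:

```python
SPEED_OF_SOUND_FT_PER_S = 1100.0  # round figure used in the post above

def offset_to_distance_ft(delta_ms: float) -> float:
    """Placement distance (feet) corresponding to a time lag in milliseconds."""
    return delta_ms / 1000.0 * SPEED_OF_SOUND_FT_PER_S

print(offset_to_distance_ft(1.0))  # 1 ms of lag corresponds to about 1.1 ft
```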
As I mentioned several times before, this is a very different issue. "Phase" effects due to different acoustic paths are well known and have nothing in common with transformers.
 
If I get this correctly, shouldn't the diaphragm/ribbon travel distance play a role? How far do both the ribbon and the condenser diaphragm travel at, say, 1 kHz, and how does that compare with the distance in placement between the two and with how they are oriented? Going purely by instinct, a phase discrepancy could appear at very high frequencies. And that would be in an anechoic environment; adding room acoustics to the equation, this effect is masked by all the phase craziness going on with the room reflections. Very frequency-dependent stuff.

Also, even if it's an omni, the direction in which the diaphragm is pointing will play a significant part. In the example above, where the omni is pointing downwards vs. a front/rear-facing ribbon, I feel the phase interaction would be different if the omni were pointed at the ceiling, front/rear, to the side, or anywhere in between.
Microphones are obviously quite complex devices that only look simple. When there are two or more in the equation that work in different ways, things obviously get even more complex. I just wanted to state that the difference in phase exists and can affect the final result, and yes, there are many other parameters that play a smaller or larger role. Which you also mentioned.
 
You think that the output voltage can appear only after a complete half-cycle has been completed. Think of the consequences: the output signal would suffer a variable delay, of 25ms at 20Hz, 0.5ms at 1kHz and 25us at 20kHz. Transformation is instant, no time-travel there.
Yeah - makes sense regarding different delay times for different frequencies. I never got much into transformer theory and always assumed (obviously wrongly) without checking or thinking too much about it - we were told how the rules applied to transformers, not what was going on inside, apart from the different core types and how they affected performance and response. We were taught how to fix things, not how to design the components that went into them. Very enlightening.
Did a quick check with an audio transformer and an impulse-decaying sine wave, and the voltages are definitely in sync at input and output. You can ignore my previous comments regarding this.
So how does the voltage appear instantaneously on the secondary of the transformer?
Still doesn’t get away from the phase difference between the condenser and the ribbon or dynamic. I know there’s mechanical and acoustic damping in a dynamic, which would obviously bring it closer to the condenser, but in a ribbon there’s only the back resistance of the tensioned ribbon, without any acoustic chambering, as it’s open on both sides.
 
So how does the voltage appear instantaneously on the secondary of the transformer?
I believe it's a case of the tree hiding the forest, in that case the sinewave hiding the transient.
Most circuit analysis relies on sinewaves, justifiably, but it tends to establish a time-based view that covers a whole (or half) cycle, when actually things happen instantly. Of course, in real life there are always factors, parasitic or not, that make an instant transition from zero to x impossible.
Actually, the instantaneity of the voltage at the secondary is debatable. Since the xfmr has a bandpass response, the leading "edge" is actually ramping continuously, so the effects at work are instantaneous, but the materialization as a voltage is a debatable notion. When does the voltage appear? I would say "when it can be measured/felt", which depends on the sensitivity of the meter/sensor.
Mathematically, one can conclude that "some" voltage appears instantly, but not the "whole" voltage. The difference between "some" and "whole" is a matter of appreciation.
Commonly, the values of 10% and 90% have been standardized, but they actually don't reflect any universal value.
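As an illustration of the 10%/90% convention, here is the textbook rise time of a first-order (RC-like) step response, which works out to t_r = τ·ln 9 ≈ 2.2τ:

```python
import math

# 10%/90% rise time of a first-order step response v(t) = 1 - exp(-t/tau):
# t10 = -tau*ln(0.9), t90 = -tau*ln(0.1), so t_r = t90 - t10 = tau*ln(9).
tau = 1.0                         # normalized time constant
t10 = -tau * math.log(1 - 0.10)   # time to reach 10% of the final value
t90 = -tau * math.log(1 - 0.90)   # time to reach 90% of the final value
print(f"rise time = {t90 - t10:.3f} * tau")  # prints "rise time = 2.197 * tau"
```

Picking other thresholds (1%/99%, say) would give a different multiple of τ, which is the sense in which the 10/90 values are standardized but not universal.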
 
Well, electricity is electromagnetism... so in theory the electric signal travels at the speed of light, right?
Now, the dielectric constant and other damping factors may slow it down (I just checked: in salt water it is about 226,000 km/s).
In a spatially small system like the ones we discuss here, we can consider the energy transfer instantaneous.
As abbey says (if I understand correctly), something may not happen within the first few nanoseconds... and just after that, the energy may not translate mainly into voltage, but partially go elsewhere.
This is so fast that it's probably irrelevant for our practical use.
 
In this example, the condenser sensor detects position, correct? And the magnetic sensor detects speed.
Correct.
Based on the "pendulum" model, we can make a virtual model of that microphone, according to the picture below.

[attached image: pendulum-model diagram]

A condenser mike is designed on one side of the membrane, and a dynamic mike on the other. So two microphones share one membrane. The signals from those two microphones will always be in a phase relationship of 90 degrees regardless of the frequency. Nor can any other effects such as generator inertia and electro/mechanical/air damping affect this phase relationship.
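The frequency-independent 90° relationship in this shared-membrane model follows from the velocity being the time derivative of the position; a minimal numerical check, with an arbitrary test frequency and instant:

```python
import math

# Shared membrane: position x(t) = sin(w*t) (condenser output),
# velocity u(t) = dx/dt = w*cos(w*t) (dynamic output).
# cos leads sin by 90 degrees, independent of the frequency w.
w = 2 * math.pi * 440.0                   # arbitrary test frequency (rad/s)
t = 3.7e-4                                # arbitrary instant (s)
u = w * math.cos(w * t)                   # true derivative of the position
u_90 = w * math.sin(w * t + math.pi / 2)  # position advanced by 90 degrees
print(abs(u - u_90) < 1e-6)               # True: the two are the same signal
```

Changing `w` changes the amplitude of the velocity signal (that is the integration/differentiation slope in the frequency response) but never the 90° offset.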
I think that considering a dynamic diaphragm with its moving coil reacts to sound pressure identically to a condenser diaphragm with a different tuning is quite dubious.

But what happens when microphones each have their own membrane?
Do they move in phase with respect to the sound pressure? Obviously not.

This can be seen from measurements, but also from the various reports of audio engineers who tried these combinations, ranging from accounts that it gave good results to accounts that it did not. And there are also differing measurements: some show no phase shift at all, while one video I came across claims a phase shift of 45 degrees, and so on.

So what are the reasons for the appearance of an additional and mutually different phase shift?

IMO, there are two main reasons, one of which you explained in your posts by introducing already mentioned parameters such as the size of mass in motion and different types of damping comparing the performance of microphones. There is nothing more to add.

Another reason that was not mentioned is the additional phase shift that was created by the principle of the pressure-gradient microphone. Let's take the example of the SM57 microphone used here in the experiment. Its polar characteristic is known to change from almost omni to sharp hypercardioid depending on the frequency. A simple explanation for this imperfection is that the phase of the signal arriving at the back of the membrane changes depending on the frequency and that the attenuation of the sound in the acoustic filter also changes. The first reason for the change leads to a further conclusion, which is important here, that the resulting pressure (or force) (the ratio of the pressures on the front and back sides of the membrane) that moves the membrane changes its phase relationship with respect to the initial pressure depending on the frequency. And we actually don't know for any such microphone what the phase shift vs. frequency is without measuring the phase relations of the initial pressure and the current position of the membrane.

So there are several reasons why there is a phase difference between the signals of two microphones, and they can affect how the two microphones will perform in stereo combinations. What is certainly true is that there is an initial 90 degree phase difference when using a dynamic and a condenser microphone, and this is a fact that various sources point to. However, how much that phase relationship will end up being for two real microphones will only depend on the microphones used and is hard to predict. Because manufacturers do not even provide such data for microphones. And that's why the results obtained by audio engineers are different. And it is certainly not the same whether a cardioid or omni (pressure-gradient or pressure) is used for the mid position with a ribbon microphone for the side position.

I apologize for the long post, several members asked me on PM to clarify my opinion and I am answering everyone here. And let me mention that microphones are not my area of expertise.

It seems to me the |·| (absolute value) excludes any notion of phase...? In the absence of a definition for the variable u, it's difficult for me to understand it.
None of the formulas gives information about the phase angle. Cos theta is the incident angle of the wave in relation to the ribbon.
|u| is the linear ribbon velocity.
 
Electrons do not flow at the speed of light. They only get to ca. 2200 kph. So, the question in my mind is, how does that relate to electromagnetism?

I don't understand magnetism. Too many variables. I think that discrepancy between the theoretical speed of electricity and the real speed of electrons might have something to do with it. I just can't wrap my head around it.
 
Correct.
Based on the "pendulum" model, we can make a virtual model of that microphone, according to the picture below.

[attached image: pendulum-model diagram]

A condenser mike is designed on one side of the membrane, and a dynamic mike on the other. So two microphones share one membrane. The signals from those two microphones will always be in a phase relationship of 90 degrees regardless of the frequency. Nor can any other effects such as generator inertia and electro/mechanical/air damping affect this phase relationship.
Do you agree that in this specific example the frequency response of the two elements will be different, with the response of the magnetic element requiring integration?
But what happens when microphones each have their own membrane?
Do they move in phase with respect to the sound pressure? Obviously not.

This can be seen from measurements, but also from the various reports of audio engineers who tried these combinations, ranging from accounts that it gave good results to accounts that it did not. And there are also differing measurements: some show no phase shift at all, while one video I came across claims a phase shift of 45 degrees, and so on.

So what are the reasons for the appearance of an additional and mutually different phase shift?

IMO, there are two main reasons, one of which you explained in your posts by introducing already mentioned parameters such as the size of mass in motion and different types of damping comparing the performance of microphones. There is nothing more to add.
I believe you concur with me that the "response of dynamic elements must be flattened" by whatever process, which will also create an inverse 90° shift (unless someone crazy wants to use FIR EQ).
Another reason that was not mentioned is the additional phase shift that was created by the principle of the pressure-gradient microphone. Let's take the example of the SM57 microphone used here in the experiment. Its polar characteristic is known to change from almost omni to sharp hypercardioid depending on the frequency. A simple explanation for this imperfection is that the phase of the signal arriving at the back of the membrane changes depending on the frequency and that the attenuation of the sound in the acoustic filter also changes. The first reason for the change leads to a further conclusion, which is important here, that the resulting pressure (or force) (the ratio of the pressures on the front and back sides of the membrane) that moves the membrane changes its phase relationship with respect to the initial pressure depending on the frequency. And we actually don't know for any such microphone what the phase shift vs. frequency is without measuring the phase relations of the initial pressure and the current position of the membrane.
I would think it's more or less what I tried to explain.
So there are several reasons why there is a phase difference between the signals of two microphones, and they can affect how the two microphones will perform in stereo combinations. What is certainly true is that there is an initial 90 degree phase difference when using a dynamic and a condenser microphone, and this is a fact that various sources point to.
That is where I strongly disagree. Member molke's experiment contradicts this assertion.
I maintain that whatever phase difference may exist is due solely to the amplitude derivative.
However, how much that phase relationship will end up being for two real microphones will only depend on the microphones used and is hard to predict.
I still maintain that as long as the signals from the two mics are in the same quadrant there should be no major problem.
None of the formulas gives information about the phase angle. Cos theta is the incident angle of the wave in relation to the ribbon.
Indeed.
|u| is the linear ribbon velocity.
Thanks.
 
Electrons do not flow at the speed of light. They only get to ca. 2200 kph.
The speed of electrons depends very much on the nature and size of the material, as well as on the current intensity. A typical example (1 A in a 2 mm diameter copper wire) gives a drift speed of about 25 µm/s (micrometers per second). Compare with sound: sound travels at about 1100 ft per second, but if the air particles travelled at that speed, you'd be blown away by a Mach 1 wind.
Actually, since sound has negligible DC content, the air particles do not "travel"; they just push each other until the last in line hits your ear-drum.
Another example is Newton's cradle/pendulum.
The last ball reacts after a very short time, but the first ball doesn't go further than its rest position.
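The ~25 µm/s figure above can be reproduced from the standard drift-velocity relation v = I/(nqA); the carrier density used below is the usual textbook value for copper, so treat the result as an order-of-magnitude check:

```python
import math

# Electron drift velocity v = I / (n*q*A) for 1 A in a 2 mm diameter
# copper wire. n is the textbook free-electron density for copper.
I = 1.0                       # current, A
n = 8.5e28                    # free electrons per m^3 in copper (assumed)
q = 1.602e-19                 # elementary charge, C
A = math.pi * (1.0e-3) ** 2   # cross-section of a 2 mm diameter wire, m^2
v = I / (n * q * A)           # drift velocity, m/s
print(f"{v * 1e6:.0f} um/s")  # a few tens of micrometers per second
```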
 
Electrons do not flow at the speed of light
Electron (or charge) displacement is not the same as wave displacement! That's why I specifically used the term "electric signal"!
That is how fast the energy flows, and it is WAY faster than the particle displacement.

In fact, in our AC mains lines the electrons don't move much at all... they go forward and backward 50 or 60 times per second; an electron leaving an atomic orbital at the power plant will probably never reach the electrical device in your home.
 
Electrons do not move at the speed of light in a conductor, but the wavefront travels at a significant fraction of it. Actually, the electric field is outside the conductor. That's the thing moving at almost the speed of light with the wavefront in the wires following along.

Magnetism is really the effect of electric fields. Magnetic effects are relativistic manifestations of electric fields - bit of a mind melting concept. Even though the electrons aren't moving very quickly there are a hell of a lot of them and the Lorentz Contraction of them relative to the nominally stationary positive nuclei in the conductor is what makes the electric field non-zero, but we see it as magnetism. You'll find that an electron beam by itself doesn't produce a magnetic field, but electrons moving in a conductor do.

How quickly does a signal get across a capacitor if we're well away from any corner frequencies? A transformer is, in some ways, an electromagnetic analogue/dual of a capacitor. Thus, transformers are more-or-less instantaneous. The idea that a full half-cycle of an AC waveform must happen before something can be induced is bizarre. Whichever lecturer came up with that was way off base.
 
I picked up the wrong value from Google, sorry. The 2200 kph is the speed of an electron revolving around the nucleus of the atom in the case of Hydrogen.

I'll leave it at that. When I try to grok it, it enters the realm of quantum physics and that's way above my pay grade.
 
Based on the "pendulum" model, we can make a virtual model of that microphone, according to the picture below.

[attached image: pendulum-model diagram]


A condenser mike is designed on one side of the membrane, and a dynamic mike on the other. So two microphones share one membrane.
Actually, I was half-determined to not add any more posts restating the same thing, but that’s an interesting example. Since the microphone(s) get only pressure from one side, I’m assuming it’s a pressure dynamic combined with a pressure condenser? At which resonance frequency* do you set the membrane of your hypothetical microphone?

* The third column in the quickly redrawn curves in post #130 should more accurately be labeled “resonance frequency and damping”.
 
Actually, I was half-determined to not add any more posts restating the same thing, but that’s an interesting example. Since the microphone(s) get only pressure from one side, I’m assuming it’s a pressure dynamic combined with a pressure condenser? At which resonance frequency* do you set the membrane of your hypothetical microphone?

* The third column in the quickly redrawn curves in post #130 should more accurately be labeled “resonance frequency and damping”.
This model was used to show that if the diaphragms of both microphones move in phase with each other (achieved by using one diaphragm for both), then the signals from them are shifted by 90 degrees. For this proof, it is not important where the resonance of the moving system is located, nor is it important what frequency characteristics they have, nor whether they are pressure microphones. Trying to make such a microphone in practice would be a kind of nonsense.

In the next part of the text, I state that the diaphragms of two different types of microphones do not move in reality in phase as in my model, and I try to find the reasons for this. And I mentioned two.

I have no intention of showing formulas and diagrams of how exactly different microphones work in theory, where the resonant frequencies are, and which elements of elasticity, mass and damping give the final results. For those who want to analyze it without too much theory, I suggest a book
John Eargle: The Microphone Handbook (free pdf on www, just ask google)
If someone wants hardcore, there are always books by Beranek and Olson.
For those who want to get basic information about what this might be about, I suggest reading the text and watching a nice video
https://recording.org/forum/microphones/ms-and-mixing-ribbons-and-condensers-possible-phase-issues
 
Agreeing to disagree is not constructive. Aren't you interested in finding an answer that stands complete scientific analysis?

Theoretically, a formula should be obtained that would describe the phase relationship of the movement of the membrane in relation to the initial sound pressure. IMO all the parameters I mentioned in my post should be included there.

I also indicated there that I think one should experimentally measure (confirm the formula) the phase shift between the initial sound pressure on the front surface of the microphone membrane and the displacement or position of the membrane. This would mean the use of a precise laser or some other non-invasive methods, which is certainly not easy or cheap.

So I certainly won't deal with that here. You will persist in your arguments, I tried to give a bigger picture.
 
This model was used to show that if the diaphragms of both microphones move in phase with each other (achieved by using one diaphragm for both), then the signals from them are shifted by 90 degrees.
And that's the big IF. I claim that in order to linearize the frequency response, designers have to apply acoustico-mechanical "tricks" that result in partial integration, because it's clear that the LF response of dynamic mics is restricted. Of course these "tricks" have to be different for condenser and dynamic mics.
For this proof, it is not important where the resonance of the moving system is located, nor is it important what frequency characteristics they have, nor whether they are pressure microphones.
So you say that exciting a resonant system below its fundamental resonance is the same as exciting it above? The FM "slope" or Foster-Seeley detectors would not work...
John Eargle: The Microphone Handbook (free pdf on www, just ask google)
Which, in Chapter 2, at page 44, explains how the damping produces a flattening of the frequency response, which, by virtue of the system being minimum-phase (MP), results in zero phase shift.
At LF, when damping is superseded by compliance, the response is asymptotically +6 dB/octave, with an associated 90° lead.
But is LF really the basis for the stereo effect?
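The ~0° shift above the corner and the ~90° lead in the +6 dB/octave region can be verified numerically for a minimum-phase first-order high-pass; the corner frequency below is an arbitrary assumption:

```python
import cmath
import math

# Phase of a first-order high-pass H(jw) = jw / (jw + w0): ~90 deg lead
# well below the corner (the +6 dB/octave region), ~0 deg well above it.
f0 = 100.0                        # assumed corner frequency, Hz
w0 = 2 * math.pi * f0
for f in (1.0, 100.0, 10_000.0):
    w = 2 * math.pi * f
    H = 1j * w / (1j * w + w0)    # transfer function at this frequency
    print(f"{f:>7.0f} Hz: {math.degrees(cmath.phase(H)):5.1f} deg lead")
```

Well below f0 the lead approaches 90°, at f0 it is exactly 45°, and well above f0 it falls toward 0°, which is the minimum-phase pairing of slope and phase described above.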
For those who want to get basic information about what this might be about, I suggest reading the text and watching a nice video
https://recording.org/forum/microphones/ms-and-mixing-ribbons-and-condensers-possible-phase-issues
If you allow it any credibility, you may note at 8:40 that he clearly says he hasn't found any serious literature he could grok demonstrating the existence of an unconditional 45° or 90° initial shift.
When he concludes at 17:50 that there are "clearly" some differences, I wonder where this conclusion comes from. Certainly not from the goniometer; I don't know what it indicates, since the L and R sources feeding it are not clearly identified.
I would have liked to see the part of his experiment he didn't show, dedicated to visualizing waveforms; maybe he didn't because it didn't prove his point, contrary to molke's tests.
 
So I certainly won't deal with that here. You will persist in your arguments, I tried to give a bigger picture.
The initial question was extremely narrow. Someone introduced disruption with the alleged 90°, which led me to try to put it in perspective. Do you think I narrowed it even further?
 