#58066 - 02/28/06 01:17 PM
Re: autoEQ question
Desperado
Registered: 01/23/02
Posts: 765
Loc: Monterey Park, CA
Meridian and Lexicon pre-pros use the same approach, measuring in the time domain and correcting for low-frequency ringing. Neither system operates above 250Hz, nor does either make any attempt to flatten frequency response. Among receivers, newer Denon models do something similar with the Audyssey system they've licensed; newer H/K units do the same with their EZset/EQ system, designed using research from the audio legends at Harman (Floyd Toole, Todd Welti, Sean Olive). Wayne, since you post at SMR, I'm surprised you haven't read more about how the Lex EQ system works. You can also read some discussion about this subject here.
_________________________
Sanjay
#58067 - 02/28/06 05:10 PM
Re: autoEQ question
Gunslinger
Registered: 05/18/02
Posts: 203
#58068 - 02/28/06 09:58 PM
Re: autoEQ question
Desperado
Registered: 01/23/02
Posts: 765
Loc: Monterey Park, CA
Wayne,
The Lexicon system does not do any averaging of the 4 mic inputs. Its inventor, Dr. James Muller, has made that very clear. If one mic picked up a dip and another mic picked up a similarly sized peak, averaging the two results could cancel them out and give the (mistaken) impression that there is no problem. The data correlation being used is more sophisticated than that.
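To put toy numbers on that averaging pitfall (a quick Python sketch; the +/-6 dB figures are invented, and this is not Lexicon's actual correlation method):

```python
# Toy illustration of the averaging pitfall. Two hypothetical mic
# readings at one frequency, in dB relative to flat:
mic_a = +6.0   # mic A sits in a peak
mic_b = -6.0   # mic B sits in a dip

naive_average = (mic_a + mic_b) / 2
worst_case = max(abs(mic_a), abs(mic_b))

print(naive_average)  # 0.0 dB -- "looks flat", both problems masked
print(worst_case)     # 6.0 dB -- the deviation a smarter correlation must preserve
```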
What you're doing with EQ and what Meridian/Lexicon are doing with their EQ systems are completely different. For example: if those systems detected a hump at 70Hz, they would do absolutely nothing about it. They aren't looking for, nor correcting, anything in the amplitude domain. Instead they measure in the time domain, looking specifically for low frequencies that linger the longest, obscuring details of the sounds that immediately follow (irrespective of their frequency).
If a frequency matches a room dimension such that it keeps bouncing back and forth between two walls for longer than other frequencies do, then no seat placed between those two walls will be immune to that frequency's ringing. So while levels can vary widely from seat to seat, ringing is more consistent across all seats. Reduce its decay time and you'll improve clarity across the entire frequency range, at all seats. It's difficult, if not impossible, to get that sort of seat-to-seat consistency when correcting for varying amplitude instead of varying decay times.
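For anyone who wants to see the "frequency matches a room dimension" part with numbers: the standard textbook formula for axial modes between two parallel walls is f_n = n * c / (2 * L). A minimal sketch, assuming a hypothetical 5 m wall-to-wall spacing:

```python
# Axial room-mode frequencies between two parallel walls: f_n = n * c / (2 * L).
# Standard textbook formula; the 5 m wall spacing is a made-up example.
C = 343.0  # speed of sound in air, m/s

def axial_modes(length_m, count=4):
    """First few frequencies that 'fit' between walls length_m apart."""
    return [round(n * C / (2 * length_m), 1) for n in range(1, count + 1)]

print(axial_modes(5.0))  # [34.3, 68.6, 102.9, 137.2] Hz -- all below 250Hz
```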
Keep in mind that it is difficult to measure decay times by looking at amplitude peaks, the way you are. That's because a ringing frequency may not be louder than other frequencies, so it will measure flat in amplitude but still have a long decay time. Other times, a peak is a symptom of a ringing frequency. In those cases, when you fix the problem (long decay time) you end up fixing the symptom (amplitude peak). So, even though they aren't attempting to do so, the Meridian/Lexicon systems can sometimes end up inadvertently smoothing and flattening some of the frequency response.
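Here's a rough Python illustration of that point: two tones with comparable peak amplitude but very different decay times. The frequencies and RT60 figures are invented for the example; this is not how any of these products actually measure:

```python
import numpy as np

FS = 48000
t = np.arange(int(2.0 * FS)) / FS  # 2 seconds of samples

def ringing_tone(freq, rt60):
    """Exponentially decaying sinusoid; rt60 = seconds for a 60 dB decay."""
    tau = rt60 / np.log(1000.0)  # 60 dB = amplitude factor of 1000
    return np.exp(-t / tau) * np.sin(2 * np.pi * freq * t)

fast = ringing_tone(120.0, rt60=0.15)  # well-damped frequency
slow = ringing_tone(40.0, rt60=0.90)   # long-ringing "room mode"

# Both start at comparable peak levels, so a single amplitude
# snapshot can't tell them apart:
print(round(fast.max(), 2), round(slow.max(), 2))

def decay_time(sig, db=60.0, fs=FS):
    """Time of the last sample still above the -db threshold --
    a crude stand-in for reading decay off a waterfall plot."""
    env = np.abs(sig)
    above = np.where(env >= env.max() * 10 ** (-db / 20))[0]
    return above[-1] / fs

print(round(decay_time(fast), 2), round(decay_time(slow), 2))  # ~0.15 vs ~0.9
```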
Make sense?
_________________________
Sanjay
#58069 - 03/01/06 09:39 AM
Re: autoEQ question
Desperado
Registered: 03/20/03
Posts: 668
Loc: Maryland
BB4TB believes:
Perception is affected by the interaction of many parameters.
Most single corrective measures affect more than one parameter.
The best way to approach a given problem is to deal primarily in that problem's domain, for instance correcting room problems with room treatments.
As a result, there is an order to multiple treatments.
For instance, if a room has a 'ringing' at one or more frequencies to any significant degree, that ringing should first be reduced by room treatment. Because the room treatment for the ringing will also affect the frequency response in that room, one could apply system EQ to bring about a perception of even frequency response after installing room treatments.
If the attributes of the room are not treated, the ringing will add to the 'presence' of the offending frequencies. If the ringing frequencies are not electronically 'suppressed' to some degree, those frequencies will initially 'hit' at the 'correct' level and potentially build to too high a room amplitude; additionally, their subsequent decay will be prolonged. If one uses the usual types of EQ to suppress the offending frequencies to some degree, those frequencies will initially hit at 'too low' a level, then build up and subsequently decay, the idea being that this is more acceptable and balanced to the listener's perception. In such a case the blend of lows, mids and highs would be out of balance initially, until the ringing sets in to approximate a better balance.
In order to handle these interactive parameters with only EQ, one would need a heretofore-unknown-to-me type of EQ: one that is continuously 'analytical,' programmed, buffers in a 'store and release' fashion, and actively adjusts. Such an EQ would affect each narrow frequency range as follows. When frequencies that will ring are detected coming through the 'pipeline,' those frequencies would, for the first few milliseconds, be allowed to pass unsuppressed, in the correct ratio compared to frequencies that will not ring. Then suppression would be applied, increasing and then leveling off at the same rate as the ringing builds in the room and holds steady. After this, the EQ would somehow know how quickly the offending ringing frequencies should decay compared to the non-ringing frequencies, and would artificially suppress the ringing frequencies by the right amount, sufficiently 'early,' so that in the listening environment the ringing and non-ringing frequencies would decay in their original decay ratio. And, oh yes, since each channel's speaker type and placement will excite the room somewhat differently, all this frequency handling would have to occur independently for anywhere from 2.0 to 5.1 to X.Y channel playback.
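For what it's worth, the per-band gain schedule being imagined might look something like the sketch below; every timing and depth in it is made up purely to illustrate the idea, and no shipping EQ is claimed to work this way:

```python
def hypothetical_band_gain(t_ms, passthrough_ms=5.0, ramp_ms=50.0, cut_db=-9.0):
    """Made-up gain schedule for one 'ringing' band, per the description
    above: pass the initial hit untouched, then build suppression at
    roughly the rate the room's ringing builds, then hold steady.
    All numbers are invented for illustration."""
    if t_ms <= passthrough_ms:
        return 0.0                       # first few ms pass unsuppressed
    if t_ms <= passthrough_ms + ramp_ms:
        frac = (t_ms - passthrough_ms) / ramp_ms
        return cut_db * frac             # suppression ramps in
    return cut_db                        # steady-state cut

for t in (0, 5, 20, 55, 100):
    print(t, "ms ->", round(hypothetical_band_gain(t), 1), "dB")
```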
IMHO, what one can do with attention to placement and $500 to $3K worth of room treatment, plus potentially 'normal' EQ, will exceed what one could accomplish with over $10K of additional electronic alteration and no room treatment.
For those with purist tendencies, adding room treatments in lieu of more and more electronics also means the precious signal is less messed around with.
#58070 - 03/01/06 10:54 AM
Re: autoEQ question
Gunslinger
Registered: 05/18/02
Posts: 203
#58071 - 03/01/06 01:10 PM
Re: autoEQ question
Desperado
Registered: 01/23/02
Posts: 765
Loc: Monterey Park, CA
Quote:
Wayne, I'm still not convinced that Lexicon's method is really any more adept than the method I (and the professional community) employ.

It's not a question of being more adept or doing a better job than you are. It's more a difference in approach. Frequencies with long decay times may not show up as problematic amplitude peaks. For example, a frequency that's bouncing back and forth between two walls will have a long decay time at every seat between those two walls. However, if you move a microphone between those two walls, you'll find places where that frequency is softer than others and places where it is louder. Somewhere in between will be a location where this frequency is at about the same amplitude as the other frequencies. If your seat or your measuring location happens to be at that spot, that frequency will appear to measure flat in amplitude. However, it will still be ringing and still have the long decay time. In a situation like that, an amplitude-based correction system wouldn't detect a problem and wouldn't apply any correction. A time-based correction system will see the long decay and calculate an inverse filter.
Likewise, there could be a situation where your speaker and room combine to create a loud hump around 60Hz at your listening seat. An amplitude-based correction system will try to correct that, to bring the volume level in line with other frequencies. A time-based correction system will check for an unusually long decay time and, if it finds none, will do nothing. You'll still have the amplitude hump. So they are two different approaches, each of which can sometimes totally miss a problem that the other is addressing.

Quote:
I am willing to bet that, were one to manually measure the room's frequency response before, and then again after the application of V.4 EQ, one would find the result is smoother frequency response, regardless of whether that was the original, "inadvertent" intention or not.

Agreed. Removing energy from a frequency that is bouncing back and forth between two walls, in an attempt to lower its decay time, can also make it appear quieter (and bring it more in line with other frequencies). But if the latter were the goal of the Lex/Meridian systems, they wouldn't go through the added effort of measuring in the time domain. It's so much easier to simply look for the loudest, most offending peaks and try to bring those down. That's what most of the EQ systems on receivers do.

Quote:
The only way to effect changes to an audio signal using an electronic equalizer (graphic, parametric or other) is to cut "peaks" and/or boost "dips", all in the frequency/amplitude domain.

Precision aside, there is nothing magical about the parametric EQ used by Meridian, Lexicon, Audyssey and H/K. They're basically used to remove energy from a particular frequency. The trick is being able to recognize which peaks correlate to long decay times and which ones don't. The Lex/Meridian systems can not only tell the difference, they only attempt to correct the peaks that correspond to long decay times, leaving the others alone. Wayne, if you see four peaks between 20Hz and 250Hz, what method do you use to tell which of those has an unusually long decay time and which ones don't?

Quote:
It looks at, and makes adjustments for, only the very most severe peaks in room (again, call it what one will - decay, ringing, resonance or frequency) response and applies the appropriate amount of cut to those peaks.

What if it is measuring from a location where a certain frequency isn't peaking in amplitude but does have a long decay time? If you believe it is correcting for peaks, then it will apply no correction there. If you believe it is correcting for decay time, then it will apply correction there. Which is it? BTW, since it can't fix every problem, it only goes after the most severe ones. For each of the 10 channels it corrects (7 main channels and 3 subs), up to 7 filters can be used. In practice, however, Dr. Muller has said they've never run into a situation that required more than 3 or 4 filters per channel.
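For reference, the kind of narrow parametric cut being discussed is typically implemented as a peaking biquad. A minimal sketch using the well-known RBJ Audio EQ Cookbook formulas (the 60Hz / -6dB / Q=5 values are made-up examples, not any product's actual filters):

```python
import math

def peaking_cut(f0, gain_db, q, fs=48000):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook).
    With negative gain_db this removes energy at f0 -- the kind of
    narrow parametric cut these systems apply."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha / A
    b = [(1 + alpha * A) / a0, -2 * math.cos(w0) / a0, (1 - alpha * A) / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha / A) / a0]
    return b, a

# e.g. a -6 dB, fairly narrow cut on a ringing 60 Hz mode:
b, a = peaking_cut(60.0, gain_db=-6.0, q=5.0)
print(b, a)
```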
_________________________
Sanjay
#58072 - 03/01/06 06:23 PM
Re: autoEQ question
Gunslinger
Registered: 05/18/02
Posts: 203
#58073 - 03/01/06 07:32 PM
Re: autoEQ question
Desperado
Registered: 01/23/02
Posts: 765
Loc: Monterey Park, CA
Wayne,

Quote:
When all of the peaks are brought down (across the entire seating/listening area) the resonances are brought down with them, by default.

What about frequencies with long decay times that don't show up as peaks?

Quote:
In my mind though, that's doing only "half" the job. Although, realistically speaking, to do the "whole" job would require much more processing power (probably more than twice what it offers currently) and be absolutely prohibitively expensive, perhaps even for the very well heeled.

The Lex initially ships with 4 SHARC DSP engines. The room EQ feature adds 4 more. That still leaves room in the processor for 8 more SHARCs. Lack of processing power was not holding them back from doing the other "half" of the job. They are firmly against amplitude-based correction. Keep in mind that if a peak is not associated with an unusually long decay time, it is left alone. It's not that they can't do something about it; it's that they deliberately don't do anything about it.

Quote:
Doesn't matter, as long as multiple microphones/microphone locations/samples are used in conjunction with spatial and temporal averaging, all of them will be detected and dealt with equally.

It matters, because it's the difference between guessing and knowing. Again, how do you know which specific frequencies have the longest decay times? After correction, how do you verify that the actual decay times have been reduced? What are you using to measure decay time and generate the corresponding waterfall plots?

Quote:
The sheer number of samples taken from several different locations, averaged into a representative "room curve" of the entire listening/viewing area, will negate the effects you describe.

No, they won't. Meridian, Lexicon, Audyssey and H/K wouldn't have resorted to time-based measurements if amplitude-based measurement had allowed them to deal with long decay times.

Quote:
If one can afford it, it would seem that Lexicon (Meridian, Audyssey and Harman/Kardon) EQ has a leg up on its competition.

The H/K system is available on their AVR-435 receiver, which can be had for less than $500. The Audyssey system will soon be available on Denon's AVR-2807, which has an MSRP of $1099 (street prices will be lower). Not unaffordable.
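As an aside, a waterfall (cumulative spectral decay) plot is essentially a series of spectra taken at successively later time windows of the room's impulse response; ringing modes are the ridges that persist across slices. A simplified, spectrogram-style sketch (window lengths and slice counts are arbitrary choices here, and real CSD tools window differently):

```python
import numpy as np

def waterfall(impulse_response, fs=48000, window_ms=100.0, step_ms=10.0, slices=8):
    """Minimal waterfall sketch: FFT magnitudes of successively later
    windows of an impulse response. Each row is one time slice;
    ringing modes decay slowly from row to row."""
    win = int(fs * window_ms / 1000)
    step = int(fs * step_ms / 1000)
    rows = []
    for k in range(slices):
        seg = impulse_response[k * step : k * step + win]
        if len(seg) < win:
            break
        mag = np.abs(np.fft.rfft(seg * np.hanning(win)))
        rows.append(20 * np.log10(mag + 1e-12))  # dB magnitude
    return np.array(rows)

# Usage with a made-up impulse response (decaying noise burst):
ir = np.random.randn(48000) * np.exp(-np.arange(48000) / 4800)
print(waterfall(ir).shape)
```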
_________________________
Sanjay
#58074 - 03/02/06 12:01 PM
Re: autoEQ question
Gunslinger
Registered: 05/18/02
Posts: 203
#58075 - 03/02/06 12:29 PM
Re: autoEQ question
Gunslinger
Registered: 05/09/05
Posts: 281
Can't speak for the others, but my neighbor has a Harman Kardon 635 and I helped him set it up. There is only one mic, but to do the setup you first run a set of tones with the mic in the center of the room, then run a "near field" measurement set from three more positions, one at each of the front L/C/R speakers. Interesting concept and good results.