Laurence A. Borden: Earl, welcome to Dagogo. Please tell us about your educational background, and how you first became interested in audio.
Earl Geddes: I first got started in audio when I was in high school as a singer in a rock ’n’ roll band, but I soon found that my interests were more aligned with the equipment, which fascinated me. As I matured in audio I found the biggest challenges to be in the area of loudspeakers and room acoustics, and this is still the case.
I have always been mathematically inclined, so my interests naturally moved towards the more analytical side of audio, and I majored in physics for my undergraduate and master’s degrees. Most major physics programs are devoid of any focus on acoustics, so I ended up at Penn State in the Acoustics Program (still regarded as one of the best programs in this area in the world). I did my Ph.D. thesis on the acoustics of low frequencies in small rooms where modal effects dominate. That was more than 30 years ago now.
While at Penn State I continued to take courses in Theoretical Physics and I studied it long after I graduated because I find the advanced theoretical stuff so interesting. This extensive math background has served me well since nothing that I have run into in mathematical acoustics has been over my head. I wrote (with Lidia’s help) the text Audio Transducers to put in place the mathematical foundations of loudspeaker and microphone design that I found lacking in the available literature. I still study quantum mechanics and string theory as a hobby in my spare time.
LB: Lidia Lee is your research- and business-partner. How did the collaboration begin, and what are your respective roles in GedLee?
EG: Lidia and I met while I was Director of R&D for Knowles Electronics in Chicago. I hired her as a consultant to teach my engineers audiology and psychoacoustics (she was a Professor at a local University at the time) because I realized that in hearing aids these subjects are fundamental. I had found that my engineers and I were not as competent in the subjects as we needed to be.
Lidia and I ended up getting married and we did several studies together at Knowles on how customers perceived various design aspects of Knowles products. I found early on that perceptions were often quite different from what the objective measures might indicate or what one might expect them to be. People were much more sensitive to some things and much less sensitive to others than the historical beliefs of these relationships had suggested.
I had worked at Ford Motor Company prior to joining Knowles and as part of my work there I got involved in “sound quality” originally as applied to noise control and perception in automobiles. This was and still is a very big deal in cars as it is a strong customer perceived differentiator between vehicles. I still consult in this area today. I was instrumental in establishing a sound quality approach to noise control at Ford for which I was awarded the Henry Ford Technological Achievement Award, the highest honor bestowed on an engineer at Ford.
Finding the relationship between perception and objective design decisions became a foundation of all that I have done since that time. Everything in engineering for consumers is about adding value to the product by paying attention to the significant perceptible attributes while discounting those whose value is not warranted by how they are perceived. This can be a complex task, especially for engineers who are not trained in sciences involving psychology.
After Lidia and I left Chicago to return to the Detroit area (I returned to Ford to become Manager of Advanced Audio), we did several perceptual studies together. Lidia is very good at the clinical side, experimental design and data analysis while I am better on the objective side, setting up the stimuli, computer simulations and data acquisition. That remains how our work is divided except that I work on GedLee projects full time while Lidia is a full Professor and contributes to GedLee more sporadically (someone has to have health insurance!!).
Lidia and I have been happily married for 14 years now and we have a son Nathan who will, of course, go into acoustics (not!)
LB: A primary focus of your research concerns psychoacoustics, a discipline about which there is considerable confusion. Let’s begin with a definition.
EG: Psychoacoustics is the study of how people analyze sound physically and perceptually (it’s part of the broader subject Psychophysics). Loosely, one could say that it is the scientific study of sound perception, except that perception is more a subject that crosses over into psychology. Perception gets tied up in all kinds of complications like expectation and external influences that psychoacoustics would attempt to ignore or control for (with great difficulty I might add). Perception is what we want to know, but since these are human perceptions there is a huge array of external biases that get involved. For the most part we are trying to study around those biases, but when looking at perception from a product standpoint it is also critical to understand what those biases are and where they appear. The psycho-acoustician would attempt to null out the biases to get to the root science. So what we do is not hard core psychoacoustics, but a blending of that subject with purely psychological aspects of product evaluation.
LB: When did you first develop an appreciation for psychoacoustics, and how did you gain proficiency in it?
EG: I am not actually all that proficient in the art; Lidia is more so. I know just enough “to be dangerous,” as they say. My understanding is limited to the interactions between audio, loudspeaker design and noise control and psychoacoustics/psychology, and that’s not all that wide a swath of the subject.
When I first started to study audio design I thought that everything that we needed to know could be measured directly, that there was no need to understand “perception,” that data told the whole story. That’s still probably more true than not, but I have since tempered my position a little.
The first issue that I found for which psychoacoustics could be a help had to do with what I saw as incredibly unstable subjective evaluations. In my job I had witnessed known authorities completely contradict their own opinions when they did not remember what they had originally said. I saw how easily people could be swayed in their opinions by external forces and found that I could convince people of things that I knew were false by simply telling them what they were hearing (which was not the case). When I went back to Ford one of things that I did was a “gauge capability” study on the ten member “Golden Ear” listening panel. A Gauge Capability study seeks to determine how reliable a gauge is at finding a true objective quantity – usually something like length or temperature, but Sound Quality judgments could also be tested.
For the most part the study concluded that this panel was “not capable.” In other words their judgments could not be relied upon to be statistically stable. That said, there were two members of the ten who were capable, so it was possible. But the real point here is that someone is not a good judge of sound quality just because they think that they are – all ten members would have claimed that they were audiophiles and good judges of sound quality.
After several more studies along these same lines, I came to conclude that the more someone claimed to be a “golden ear” the less likely it was that they actually were. Today, I simply do not accept any subjective opinions about sound quality (including my own) unless they were obtained under very rigorous testing protocols – which is an extremely small volume of data. For the most part I have found that most audio dogma and folklore is simply incorrect. Audio is like a religion, most aspects of its fundamental beliefs are accepted on faith and most supporting rational is constructed in such a way as to be untestable. You can neither prove or disprove what someone claims to like or dislike any more than you can prove or disprove the existence of God. Believe me, faith is a very powerful thing – in audio as well as politics and religion.
Until one understands psychoacoustics and the associated psychology of product evaluations, as well as the techniques that are required to reliably test such perceptions, one cannot know how truly difficult it is to get to reality in the audio marketplace. Uncontrolled listening tests won’t get you there, that much is certain. These types of tests have a strong bias towards change for change’s sake (among numerous others) and as such have a tendency to go in circles.
LB: Regrettably, psychoacoustics is both widely misunderstood and largely ignored by much (though certainly not all) of the audio community. Why do you think this is?
EG: It can be a problem for some people. After working in this area for a long time, I have concluded that the very common phrase “I know what I hear!” is simply incorrect. The fact is that people do not have reliable perceptual capabilities when it comes to sound quality. Perception is cognitive and the brain tends to be a dominant factor – the brain tells you what you hear more so than your ears. This is a big problem for someone who has made his or her living by being a “Golden Ear.” It is far easier to simply ignore this area of research than to have to deal with its “answers” being different from your “answers.” Again, this situation is analogous to religion and its relationship to the Theory of Evolution. Many people will completely discount any and all scientific facts if they contradict a closely held personal belief. It is just too scary to have to consider the extent of uncertainty that could creep into one’s belief system as a result of accepting even a small portion of such data. It’s best to just ignore it.
LB: Is the pro-audio world more receptive?
EG: No, not really; in some ways they may even be worse. There is a lot of traditional folklore and dogma surrounding audio, in both the pro and consumer worlds. This dogma tends to support the status quo in terms of sales and marketing, and as such no one who makes a living in the business is all that interested in changing it.
LB: One of your areas of research concerns the psychoacoustics of distortion. Please first explain the various types of distortion, and the distinction between linear and non-linear distortion.
EG: According to the strict dictionary definition of distortion, any change in the waveform shape is distortion. This means that any change in the frequency response, which would distort the shape of a complex waveform, is distortion. Mere frequency response changes are called “linear” distortion because they act the same regardless of the signal level and no new frequency content is created.
On the other hand there is nonlinear distortion which does create new frequencies not present in the original signal and this type of distortion is level dependent. For example, clipping has no effect until the clipping limit is reached and then the signal gets distorted. A crossover nonlinearity has a large effect on a small signal but a small effect on a large signal. Linear distortion is well understood, and it is the most significant audible form of distortion, but nonlinear distortion has not been studied in any depth (probably because it is far more complex and hence difficult to study than linear distortion).
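To make the distinction concrete, here is a minimal Python sketch (a generic illustration of the definitions above, not anything from Geddes’s own work; the filter corner, clip level and tone are arbitrary): a 1 kHz tone is run through a first-order low-pass filter, a purely linear distortion, and through hard clipping, a nonlinear one. The filter only rescales the tone that is already there, at any level; the clipper creates energy at entirely new frequencies, and only once the signal exceeds the clip point.

import numpy as np
from scipy.signal import lfilter

fs = 48000                          # sample rate, Hz
t = np.arange(fs) / fs              # one second of samples
x = np.sin(2 * np.pi * 1000 * t)    # 1 kHz test tone

# Linear distortion: a one-pole low-pass filter. It changes the frequency
# response (level and phase of the existing tone) but creates no new
# frequencies, and it behaves identically at any signal level.
a = np.exp(-2 * np.pi * 2000 / fs)
y_linear = lfilter([1 - a], [1, -a], x)

# Nonlinear distortion: hard clipping. New frequencies (odd harmonics)
# appear, and the effect is level dependent: below the clip point, nothing.
y_clipped = np.clip(1.5 * x, -1.0, 1.0)

def level_db(y, freq):
    # spectrum level at one frequency, relative to the spectrum's peak
    Y = np.abs(np.fft.rfft(y * np.hanning(len(y)))) + 1e-12
    f = np.fft.rfftfreq(len(y), 1 / fs)
    return round(20 * np.log10(Y[np.argmin(np.abs(f - freq))] / Y.max()), 1)

for name, y in (("low-pass (linear)", y_linear), ("clipped (nonlinear)", y_clipped)):
    print(name, [level_db(y, f0) for f0 in (1000, 3000, 5000)])

Running this shows essentially nothing at 3 kHz and 5 kHz for the low-pass output, while the clipped output has strong new components at those harmonics.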
LB: What prompted you to study these phenomena?
EG: I had long realized that nonlinear distortion was a characteristic of the system and was not about the signal being used to test for it. THD and IMD are simply different ways of looking at the same problem – system nonlinearity – using different signals. They are not different “types” of distortion. I wanted to find a way to quantify the system’s nonlinearity in a signal-independent manner, and one that I could show was correlated with perception. (I have had my suspicions about THD and IMD for a long time.)
In the study that Lidia and I did, we confirmed that THD and IMD were basically useless indicators of sound quality, because the measured values for either of these metrics did not correlate with the perception of music played through the system.
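A tiny Python sketch of that point (an illustration, not the study itself; the polynomial coefficients and tone frequencies are invented): the same weakly nonlinear transfer curve is exercised first with a single tone and then with a pair of tones. The single tone yields harmonics, which is what THD reports; the tone pair yields sum and difference products, which is what IMD reports. The system never changed, only the test signal did.

import numpy as np

fs = 48000
t = np.arange(fs) / fs              # one second, so the FFT bins fall on 1 Hz steps

def system(x):
    # one fixed, weakly nonlinear "system": y = x + 0.05 x^2 + 0.02 x^3
    return x + 0.05 * x**2 + 0.02 * x**3

def levels(y, freqs):
    # relative spectrum level in dB at the listed frequencies
    Y = np.abs(np.fft.rfft(y)) / len(y) + 1e-15
    return {f: round(20 * np.log10(Y[int(f)]), 1) for f in freqs}

# "THD view": a single 1 kHz tone produces energy at 2 kHz, 3 kHz, ...
print(levels(system(np.sin(2 * np.pi * 1000 * t)), [1000, 2000, 3000]))

# "IMD view": 1 kHz + 1.1 kHz produces difference and sum products
# (100 Hz, 900 Hz, 1.2 kHz, 2.1 kHz, ...) from the very same nonlinearity.
two_tone = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1100 * t)
print(levels(system(two_tone), [100, 900, 1200, 2100]))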
LB: An issue that engenders a great deal of ill will in the high-end audio community is the role of measurements. On the one hand are those who feel that measurements are indispensable and tell the entire story, while on the other are those who believe that measurements are of limited utility, and that all that matters is “how it sounds.” One of your important findings is that the perception of nonlinear distortion does not correlate with commonly used metrics of distortion, thus apparently lending support to the latter group. However, a strong correlation was found with the “GedLee metric.” Please tell us more about this.
EG: I have always thought that if someone’s measurements do not “tell the whole story” then they are the wrong measurements. Technology has simply come too far to believe that “there are things that we cannot measure.” I have also never believed that all that matters is “how it sounds,” because this is such an unstable and personal opinion. Sound quality opinions can and will differ from person to person, system to system and most importantly even within the same person on different days (as I said before, I have personally witnessed this in well regarded “reviewers”). Personal preferences have such a low stability as to be an almost completely pointless thing to stake a claim to. “Hi-Fi” does not mean “pleasant” — it means “accurate”; accuracy, as opposed to preference, is absolutely quantifiable and extremely stable – as stable as I care to control in my lab from day to day or test to test (but in any case its uncertainty is easy to quantify and understand). Decisions based on accuracy are therefore much more likely to be valid than decisions based on “how it sounds.” I do not see how one could ever support a position that “preference” trumps “accuracy.” That’s simply taking a giant step backwards in the evolution of Hi-Fi.
I am not saying that measurements are infallible, and I don’t believe that measurements are likely to ever be 100% reliable, but that does not mean that we cannot obtain measurements that are far better than the unstable subjective opinions that are so often relied upon. One has to know what measures are important and to what degree of resolution we need to know the results to be meaningful. What I see most people do are either the wrong things or not accurate enough to “tell the story.” And there are some things that I think are crucial to sound quality that are not measured by anyone I know of (myself excluded) at the present time.
All too often audio measurements are taken as an all-or-nothing proposition – “they aren’t perfect or completely reliable, so why take them? I know what I hear, so why not just listen and evaluate?” It is necessary to understand the importance of what you are measuring in the final analysis and how any particular aberration enters into the whole. An aberration at 12 kHz is not the same as an aberration at 3 kHz. Good measurements are all about finding those things to measure that matter, focusing on those, and optimizing the design for the important things at the expense of the less important ones. It is not always easy to know how these tradeoffs are to be made, and that is where psychoacoustics comes in. The measurements that I usually see done certainly do not tell the whole story; they tend to be woefully inadequate.
One other problem with “listening tests” is what many designers and researchers have come to know as “acclimation.” We know that listeners will get used to or acclimated to a particular sound signature and that this signature then gets imprinted on their expectation. Expectation is a powerful bias in perception, maybe too powerful. This expectation problem tends to stunt the growth of real improvements because they can be counter to expectation. Accurate reproduction can often make a favorite sound recording be perceived as less than the expectation. This is then put down as a “flaw,” which may not be the case. Once the masses become acclimated to a particular sound signature it can be very difficult to change them from this path. I find this all the time with my speakers. They don’t sound like other speakers, yet I can prove that they are objectively more accurate. Over time my customers and I have come to appreciate the open and transparent sound that accuracy provides. Now all other speakers sound colored and distorted. One could argue that we have all become “acclimated” to the sound signature of our loudspeakers, but at least this signature can objectively be shown to be free from significant sonic aberrations. If you are going to get acclimated to a particular sound, then it only makes sense to get acclimated to the most accurate one.
Our study of distortion showed not only that what was being measured was the wrong thing, but that if you added some psychoacoustics to the situation one could develop a measurement that did correlate with perception. This result is completely consistent with what I have been saying here – if your measurements don’t work, then fix them. But don’t claim that something “can’t be measured.” That’s just a cop-out that avoids the real work of finding which measurements matter and which ones don’t.
LB: Is it currently possible to incorporate the GedLee metric into speaker design?
EG: Well it is and I do, but perhaps not in the way that you might think. I do not use this metric to measure my speakers; in fact I don’t measure nonlinear distortion in my speakers at all. Our research taught me that nonlinearity in a loudspeaker is not all that important, and that is how the finding gets used in my speaker designs. The GedLee Metric was shown to be a better way to analyze nonlinear distortion. It uses the actual nonlinear transfer curve that is the root cause of nonlinear distortion instead of some symptom of this nonlinearity like THD. By weighting the orders of the nonlinear transfer function according to how our ear perceives them, we were able to show a much higher degree of correlation to subjective perception than any of the traditional metrics such as THD. Basically, for a loudspeaker the GedLee Metric is most likely to be very low except for problems like “Rub and Buzz,” which are very high order and quite perceptible.
Our studies indicated that distortion in a loudspeaker is not likely to be a major factor as long as the loudspeaker is operated within its design limits. This is because low orders of distortion (2nd, 3rd, etc.) are not highly audible because of masking. Loudspeakers can essentially only exhibit low orders of distortion because the higher orders require large accelerations, i.e. large forces. Loudspeakers do exhibit very large amounts of low order distortion but not high orders of distortion (6th, 7th, etc.) – as long as they are not overdriven or have design flaws like Rub and Buzz – but the low orders are simply not audible. Using a well-made driver within its design limits lets one completely ignore the issue of nonlinear distortion in a loudspeaker.
This is not true at all for electronics. Electronics can generate very high orders of nonlinearity, such as crossover distortion or clipping. One must be very careful in electronics design to prevent these higher orders from occurring, especially at low levels. The problem is that this is never evaluated for electronics (well, certainly never shown), and I doubt that it is even tested very often. What they do show is THD as a function of level, but they fail to note whether the low-level result is from crossover distortion or noise. If it is crossover distortion (which is very high order at very low levels) then it is highly audible even at fractions of a percent. That’s the problem with THD: it just does not show what we need to know.
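To illustrate why a perceptually weighted, transfer-curve-based metric and THD can rank two devices in opposite order, here is a rough Python sketch in the spirit of the Geddes-Lee metric (the exact published definition may differ; the cosine weighting and second-derivative form here, and the two example curves, should be taken only as an approximation of the idea). A gentle third-order curve, of the kind a well-behaved woofer produces, is compared with a curve that has a tiny crossover-style kink at zero. The kinked curve measures far less THD with a full-scale sine, yet it scores much worse on the curvature-weighted metric, which matches the audibility ranking described above.

import numpy as np

def weighted_metric(T, n=4001):
    # Illustrative metric in the spirit of Gm: weight the curvature (second
    # derivative) of the normalized transfer curve T(x), x in [-1, 1],
    # toward small signal values, where masking offers the least protection.
    x = np.linspace(-1.0, 1.0, n)
    d2 = np.gradient(np.gradient(T(x), x), x)
    w = np.cos(np.pi * x / 2)
    return float(np.sqrt(np.sum(w * d2**2) * (x[1] - x[0])))

def thd(T, f0=1000, fs=48000):
    # classic THD of a full-scale sine passed through the curve
    t = np.arange(fs) / fs
    Y = np.abs(np.fft.rfft(T(np.sin(2 * np.pi * f0 * t))))
    harmonics = Y[2 * f0 : 20 * f0 : f0]          # 1 Hz bins: index equals frequency
    return float(np.sqrt(np.sum(harmonics**2)) / Y[f0])

def smooth(x):
    return x - 0.05 * x**3                        # gentle low-order curve

def kinked(x):
    return x - 0.002 * np.tanh(50 * x)            # small crossover-style kink at zero

for name, T in (("smooth low-order", smooth), ("crossover kink", kinked)):
    print(f"{name:>16}: THD = {100 * thd(T):.2f} %, weighted metric = {weighted_metric(T):.2f}")

The point is not the particular numbers but that a curve-based, perceptually weighted measure is sensitive to exactly the high-order, low-level misbehavior that THD glosses over.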
Several years back I developed a test for amplifiers that used a special signal and a form of synchronous averaging to measure the nonlinearity of the amp well below the amplifier’s noise floor. This test revealed significant differences between the amplifiers as the signal level was reduced into the noise floor (the THD as the signal level is raised is again not important because it is masked, and the “THD + noise” figure usually shown at low levels could be all noise). These amps all had excellent “normal” specs, but under my test they were vastly different.
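The special signal itself is not described here, so the following Python sketch only illustrates the generic synchronous-averaging idea; the test tone, the toy “amplifier,” and all of the numbers are invented for the example. A small periodic signal is repeated many times through a slightly nonlinear, noisy device, the recording is folded back onto one period and averaged, and the coherent distortion residual emerges while the uncorrelated noise falls by roughly the square root of the number of averages.

import numpy as np

rng = np.random.default_rng(0)
fs, f0, n_rep = 48000, 1000, 400            # sample rate, tone, repetitions
period = fs // f0                           # samples per period (exact here)
t = np.arange(period) / fs
x = 0.01 * np.sin(2 * np.pi * f0 * t)       # small periodic test signal

def device(sig):
    # toy amplifier: tiny crossover-style kink buried in broadband noise
    return sig - 2e-4 * np.tanh(200 * sig) + 1e-3 * rng.standard_normal(sig.shape)

recorded = device(np.tile(x, n_rep))        # play the signal n_rep times in a row

# Synchronous averaging: fold the recording back onto a single period.
# The distortion repeats identically every period and survives the average;
# the noise is uncorrelated from pass to pass and averages toward zero.
averaged = recorded.reshape(n_rep, period).mean(axis=0)

rms = lambda v: float(np.sqrt(np.mean(v**2)))
print("rms error, single pass  :", rms(recorded[:period] - x))
print("rms error, 400 averages :", rms(averaged - x))

The first number is essentially the noise floor; the second is dominated by the distortion residual, which is now measurable even though it sits well below the single-pass noise.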
Again the lesson here is that if your tests don’t work then fix them. Don’t blame the philosophy and hide behind the dogma.
LB: High-end audio is characterized by products with unsubstantiated claims; examples include a wide variety of (typically very expensive) cables, cryo-treatments, resonators that attach to walls and ceilings, anti-resonance devices upon which gear sits, amongst others. While it is admittedly unfair to lump all these products together, what are your thoughts about them? Is it possible that they have subtle sonic properties that we simply have not yet measured?
EG: I cannot be convinced that in this day-and-age there is anything that we cannot measure. The question is what to measure and are we asking the right questions. Are we able to accurately quantify the question and the answers? My position is that if some manufacturer claims an improvement in some sonic property, subtle or not, then it is their obligation to measure this (even if they have to figure out how to do that) and show in a statistically significant way that it makes an audible difference. Otherwise, I just don’t pay much attention to it because it’s just an unsubstantiated opinion. There are so many things that can be measured that have been shown to make significant sonic differences that paying attention to ones that don’t is simply a waste of time and money.
A comprehensive set of data for all my speakers is shown on my website – far more comprehensive than one normally finds. I can show how every aspect of what is measured and displayed has been found to be sonically important. It can also be shown how the tests I do not do regularly, like THD or waterfalls, are not that sonically significant. Close attention is paid to those things that matter and not those things that don’t. This is why Geddes speakers are in a class by themselves as far as value goes. They may not be the best, and they are certainly not the most expensive (nor the cheapest, I suppose), but I do claim that they are the best bang-for-the-buck in their price range.
This is what has gotten lost in the High-end audio world – value. Why would someone want to pay for something, often a lot, that is not supported by scientific evidence of any kind? This just doesn’t make sense to me. Of course there are those who completely discount science and as such their beliefs are all they have to go on. It appears to me that the High-end market is going after this rather small and select group – those who don’t require proof for what they believe or purchase – and as such a premium is charged to this unique market for being so unique. But I don’t think that there is much future in that. Sure it exists, but it seems to me that it is shrinking.
LB: Let’s switch gears to the topic of horns and directivity. You have published articles on the mathematical modeling of horns, and I believe you were the first to use the term “wave guide.” What is a waveguide? How does it differ from a horn, or is it a type of horn?
EG: The term “waveguide” is not new, it has been around for decades. However, the term “Acoustic Waveguide” is new. Unfortunately, most often the “Acoustic” gets dropped. I began studying horn theory back in the 80’s. I found that the assumptions used to derive Webster’s Horn Equation (the foundation of “horn theory”) were very restrictive and that virtually nothing actually being designed using this equation was within the limits of the assumptions that were made in its derivation. In other words there was no reason to believe that these horns should work as the equations predicted that they would, and, basically, they don’t. I set out to find an approach to the problem that was free from these limiting assumptions and when I found it, I figured that it would be necessary to change the description of what I was talking about from “horns” to something else in order to clearly differentiate what I was doing from what Webster had done. His were “horns” and so I decided to call my approach “Acoustic Waveguides.” That was the title of my first paper on the subject back in 1991.
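For reference, Webster’s equation for the pressure p(x) in a duct of cross-sectional area S(x), with wavenumber k, is the one-dimensional relation

\[
\frac{d^{2}p}{dx^{2}} + \frac{1}{S(x)}\,\frac{dS}{dx}\,\frac{dp}{dx} + k^{2}\,p = 0,
\]

and its derivation assumes that the pressure is uniform across each cross section – in effect, that flat, one-parameter wavefronts march straight down the axis. That assumption only strictly holds for gently flaring, nearly one-dimensional ducts, which is the restriction being referred to here.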
The term waveguide has since become very popular, I guess because it sounds “cool,” and so now everybody has adopted it for whatever situation they want. As a result its common usage has come to be somewhat meaningless. Outside of my definition, there is no clear definition. When I use it, I still mean that the device adheres to my new approach to designing these devices. When others use it, I have found that it can mean just about anything. Since one can apply Webster’s Equation to my contours, all “waveguides” are “horns,” but since there are only a very few contours which fit my requirements, very few “horns” are “waveguides.”
Webster was interested in the loading and impedance of horn-like devices, while I was more interested in directivity control. Webster’s approach can calculate the impedance of any device with reasonable accuracy, but it cannot calculate the directivity because it simply is not sophisticated enough. My approach can also calculate the impedance, but much more importantly it can calculate the directivity as well, and this is a significant difference. It’s often said (although it’s not exactly true) that horns are used for loading and waveguides are used for directivity control. The price one has to pay for this added level of analytical enlightenment is a serious escalation in the mathematical complexity of the solutions. Nothing comes for free.
LB: What are the HOMs that I keep hearing about?
EG: One of the most significant things to come out of my new approach to horns was totally unexpected. In horn theory only one wave can travel down the device – the plane wave that the theory assumes – but as it turns out, Acoustic Waveguide Theory predicts that there can be a very large number of different waves propagating down the device. These alternate waves are called Higher Order Modes, or HOMs. It also turns out that all horns actually have HOMs too, but since the horn equation did not predict them, no one had ever tried to investigate them or do anything about them.
HOMs are fundamentally different from the more common waves that travel in the device. The common wavefronts travel perpendicular to the walls (the wave velocity is everywhere parallel to the walls), expanding outward as the contour expands (at least to the extent that physics allows this). But an HOM travels in a direction that forces it to reflect off of the walls. Because of this, HOMs travel a longer distance in the device, and hence they arrive at the listener later in time, which has profound sonic effects. If the primary wave cannot follow the contour because it is receding too rapidly, then HOMs are generated as the primary wave leaves the boundary.
Lidia and I did a study of what might be the likely perceptual effect of these HOMs and we found that their effect could be quite pronounced. Horns are often described as being colored or harsh sounding and it appears that these negative attributes are the result of the presence of HOMs in these devices. A detailed discussion of HOMs is beyond the scope of this discussion, but suffice it to say that reducing them has become a major goal of our research and development.
There are two things that one can do to minimize the presence of the HOMs. First, don’t generate them in the first place. This requires careful attention to the contour of the waveguide, to find the contour that physics says will generate the fewest HOMs. Any abrupt area or slope changes along the device will also generate HOMs, and the phase plug design is an aspect of this as well. Second, one can attenuate the HOMs by using some absorptive foam in the waveguide itself. This patented technique achieves a sound quality for a waveguide which had been previously unknown. It places a waveguide-loaded compression driver into the same class of sound quality as the very best tweeters of any kind, but with many additional positive attributes that direct-radiating tweeters cannot achieve – like high power handling and efficiency, very low thermal modulation and, most importantly, well-controlled constant directivity. A foam-filled waveguide is simply the most effective way to achieve an extremely high quality, highly controlled high frequency response. I believe it is the future of high-end audio sound systems (where small size is not the overwhelming consideration).
LB: Why are directivity and power response important in a speaker?
EG: It is more or less important depending on the situation, but, as I can show, it is critical for the optimum setup of sound reproduction in a small room.
In an anechoic chamber directivity cannot possibly matter, nor can power response – the direct response is all there is. But anechoic chambers are very poor listening rooms. They have no feeling of spaciousness since there are no reflections. What is generally viewed as the ideal in a playback system is one that has good imaging – the locations of instruments – as well as good spaciousness – the feeling of being in an acoustic space. The trouble is that these two criteria are at odds in a small room unless they are dealt with in a very particular way.
Image perception is dominated by the very earliest sound from the speaker, i.e. the direct sound (first arrival), and the sound that arrives in the first 5-10 milliseconds. The ear simply integrates all of this into one lumped sum. This includes the speaker’s anechoic response along the listening axis, cabinet diffractions, and diffractions and reflections from nearby objects like equipment cabinets or televisions. Basically, what one wants for good imaging is a pseudo point-source response, i.e. a single direct sound free from any diffractions or reflections for at least 5, but hopefully a full 10, milliseconds. Let me call these Very Early Reflections, or VER (though they also include the early diffractions).
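To put that window into distance terms (simple arithmetic using the speed of sound, c ≈ 343 m/s):

\[
\Delta d = c\,\Delta t \approx 343\ \tfrac{\mathrm{m}}{\mathrm{s}} \times (0.005\ \text{to}\ 0.010)\ \mathrm{s} \approx 1.7\ \text{to}\ 3.4\ \mathrm{m},
\]

so any reflection or diffraction whose path is less than roughly 1.7 to 3.4 m longer than the direct sound’s path arrives inside that window and gets lumped into the image.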
It is extremely hard to get this kind of low VER in most listening rooms, and almost no setup in a typical home listening room can fulfill this. If high amounts of absorption are used then the VER are decreased, but then spaciousness is lost as well. To get spaciousness you need a lot of reflections (preferably lateral) from places other than the direct sound, at times greater than 10 ms. I think that you can see the problem. In most typical home listening room situations using traditional loudspeakers you can have imaging or you can have spaciousness, but you cannot have both. You have to give up on one to get the other. This compromise is where the concept of selecting a loudspeaker that suits your taste and your room comes from – the consumer trades off one of these against the other to suit their taste.
But there is another approach, one that does not require a tradeoff between the two highly desirable features of imaging and spaciousness. Directivity control is central to this alternate approach.
If the source is fairly directional (< 90 degrees), then it can be placed and aimed such that the VER are minimized simply because the speaker does not illuminate the nearby walls. (Of course a diffraction-free cabinet is still required and assumed and a diffraction-free area around the speakers is a good idea as well.) If this same speaker is designed to be flat on the listening axis — which would not be the central (axial) axis in this case – then we have the ideal situation of a flat direct field with a naturally occurring suppression of the VER. We are now free to design the room to be fairly live/reverberant because we have controlled the VER with directivity (hence we will have good imaging). This is not what is normally done because most loudspeakers don’t act this way. Since the room is now very reverberant the sound radiated off of the listening-axis of the speaker creates a significant later-time reverberation field which is then heard as spaciousness. The problem with this approach is that it requires a speaker which has a directivity that is independent of angle, i.e. Constant Directivity (CD), and a narrow coverage angle (a fairly rare combination of design attributes and quite contrary to what is usually done in High-End loudspeakers). Without a narrow coverage CD the result will not be satisfactory because 1) the direct field along the listening axis would not be flat and 2) the reverberation field, which is now quite significant because of the room’s high reverberation, will also not be flat. The result will be coloration and most likely a complete collapse of what we want for imaging and spaciousness.
It was this need for CD with a narrow coverage angle that led me to study “horns.” Only a horn (actually only a waveguide) can achieve these two requirements simultaneously, because no direct-radiating source can do this. Unfortunately, at that time, the only CD horns available used diffraction, and very large amounts of it, to achieve their directivity control. This was like throwing the baby out with the bathwater. Diffraction horns are quite colored and, well, not really very listenable in a small room because of the presence of very high amounts of HOMs. In order to achieve the advantages that a CD horn had to offer, but without the coloration that plagued the designs at that time, I needed an approach that could yield the directivity of the device (while not generating HOMs) and not just its impedance.
There is more to the optimal design of a small listening room to be sure, but this is the basic concept that I came up with and why directivity control is essential in a no-compromise playback situation in a small room.
LB: Is the audio community beginning to appreciate the importance of even power spectrum? How about the pro-audio world?
EG: You hear it being talked about more and more in consumer audio. In the pro world it’s more about coverage than power response. Pros want CD because it makes the coverage of a large space more uniform, while consumers want CD because it makes the power response more uniform in a small space. Both groups need the same thing, but for different reasons.
LB: Let’s turn now to your own speakers. When did you first begin designing and building speakers, and when did you decide to commercialize them?
EG: I had researched and/or designed almost everything to do with audio at one time or another, but I was particularly interested in loudspeakers because they were, it seemed to me, the weak point of every sound system I had ever experienced. I had left Ford/Visteon and written my book on transducers in the early 2000s, and I was doing some teaching in loudspeaker design when I was approached by a guy from Asia who wanted to start a loudspeaker company. He had read my book (or at least said he had) and was impressed by my scientific approach and my breadth of knowledge on the subject. Over the next couple of years I spent most of my time in Bangkok, Thailand doing basic R&D for a completely new and unique line of loudspeakers which were all based on my new discoveries and patents. I was simultaneously setting up a factory to mass-produce these designs. My partner at AI (Audio Intelligence) was the financial and business side and I was the technical and manufacturing side. Just as we were starting full-scale production we needed a loan to carry us “over the hump.” That was 2007, and you can easily guess what happened. There was no money to be found anywhere, so the business was forced to close despite having the foundations of a superior product line.
I knew that the designs that I had done in Thailand were exceptional (we had the objective data and customer comments to prove that) and once I was back-in-the-USA I wanted to see if there was some way to duplicate these designs here. The problem was that the manufacturing processes from Thailand were not conducive to manufacturing in the US (way too much labor, which was not a factor in Thailand). So I set about trying to find new and better ways to make the speakers while retaining the sound quality that I knew was in the basic concepts and designs. I literally started making molds and doing casting in my garage and cutting foam with home-made hot wire jigs.
I started by making some Nathan kits because basically I could not “finish” them at the time – I could barely make the parts. (This first attempt was so amateur that I just named the speaker after my son, almost as a joke.) The first kits were well received despite being a complete disaster as far as quality control was concerned. To the early adopters it was the sound that mattered and the sound was seriously better than anything else that you could buy at the time. Those early adopters felt that the extremely low price made up for the low quality, and none of them have been unhappy with their purchases. (To my knowledge none of them have replaced their Nathans, and even Abbeys are very rare on the used market.)
Then I tried to scale up my product line and designed and built the Abbey (after my daughter – she was jealous). The Abbey has been the sweetheart of my company because it gets such wide praise from my customers. It sells 2:1 over all other speakers combined. I went on to add a subwoofer and a surround speaker, the Harper (my niece). For a time I actually sold a speaker that I called the Summa, which was my very first design. Summas were the speakers that I had built for myself (I still have the original prototype set and they are all that I ever listen to even today). I could make them, but their process was the old-fashioned molded fiberglass, which was very expensive to do. I sold several pairs, but the costs were prohibitive and quite frankly the Abbey is a far better deal. I suspect that those few Summas will someday be collectors’ items. I plan to remake a Summa clone using my improved manufacturing processes someday, but that takes time, something that I don’t seem to have in excess.
LB: You have some unusual ideas about low frequency sound reproduction in listening rooms. Tell us more about those.
EG: As I said before, I did my Ph.D. thesis on the low frequency sound field in a small room, the frequency region that is dominated by discrete room modes. In that study I learned that all rooms are different in detail, but when one looks at them statistically they all act basically the same. This means that if one is trying to develop a uniform approach to good bass reproduction, then one should look at the problem statistically.
A source in a room at low frequencies – any room, it doesn’t matter – can be thought of as a random variable which is going to have peaks and dips in its response. There will always be a first-mode response peak, which, as it turns out, is basically independent of the room’s shape and depends only on the room’s volume. Then there will be a region with several discrete, widely spaced modes, and the response in this region will be quite ragged as a result of the very distinct nature of these individual modes. As the frequency goes up, the modal density gets greater and greater, and the variation of the peaks and dips about the mean response (the frequency variance – there is also a spatial or seat-to-seat variance as well) will decrease to a constant value that becomes essentially independent of frequency, room shape or size. (Although the transition point does depend on the room’s size.)
One “radical” point of view that I took away from this work was that it is not “modes” that are a problem; it’s the lack of modes that is the problem. It is simply not true that one does not want to excite the modes; one actually wants to excite as many modes as possible, which will result in a notably smoother response. This is quite different from (virtually the opposite of) the usual dogma in audio. Damping the low frequency modes causes them to spread and overlap more, which also reduces the variances in the response. But achieving high damping at these very low frequencies has to be done in the room’s structure itself, since no added damping material is going to have any significant effect. And it’s important to keep this extra damping just at the lower frequencies, because high room damping at the higher frequencies will destroy the spaciousness. High damping at low frequencies with low damping at high frequencies ends up being a fairly tough architectural problem.
Now, if we look at the room’s low frequency response problem as one in which we have a single sample – a single source – of a statistically stable population (the sound field in both frequency and space), then from classic statistics we know that as we add independent samples the variance of the result drops by approximately 1/n, where n is the total number of samples. Hence, if samples are sources, then we should expect that the variance of the sound field will also go as 1/n, where n is the number of sources (subwoofers to us). This is in fact what happens (within some limits). The more subs that we have in a small room at low frequencies, the smoother the response will become in both frequency and space. We can’t really increase the number of modes, but we can easily increase the number of sources.
There are some limiting assumptions in the above, and these are important. The most important limitation is that for the 1/n improvement to hold, the samples/sources must each be independent of all the others (uncorrelated, in statistical acoustics parlance). Two subs right next to each other are not independent; they are highly correlated. The farther the subs are from each other, the less correlated they become. This means that we should add subs that are as widely spaced about the room as possible. However, as you add more and more subs they have to get closer and closer together, so the 1/n improvement begins to vanish, and it can be seen that the variance of the sound field can never be brought down to zero no matter how many subs we have. In fact, in practice I have found a significant improvement with the addition of a second sub, a smaller improvement when a third sub is added, and the fourth sub usually shows only a small to negligible improvement. Beyond four subs is pretty much a waste of assets, since the performance gains tend to zero (except for more headroom, of course).
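The classical statistics result being leaned on here is easy to check numerically. In this Python sketch (a bare statistics demo, not a room simulation; the number of trials and sources are arbitrary), each “source” is an independent, unit-variance random sample and the quantity of interest is their average. Its variance falls almost exactly as 1/n, with the biggest single step coming from going from one sample to two, which mirrors the diminishing returns described above once real-room correlation between closely spaced subs is also taken into account.

import numpy as np

rng = np.random.default_rng(0)
trials, max_sources = 100_000, 8
# each row is one trial; each column is one independent, unit-variance "source"
samples = rng.normal(0.0, 1.0, size=(trials, max_sources))

for n in (1, 2, 3, 4, 8):
    averaged = samples[:, :n].mean(axis=1)      # combine the first n sources
    print(f"{n} source(s): variance of the combined result = {averaged.var():.3f}")

One, two, three and four sources come out with variances of roughly 1, 1/2, 1/3 and 1/4 of the single-source value.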
LB: Your speakers are currently sold direct. Do you intend to stick with that business model, or would you like to establish a dealer network?
EG: My current business model was not something that I set out to do; it became what it is because that was all that I could do. When I returned from the Thailand deal I had to boot up my new business from nothing, with no funds. The thing with business models is that one has to be flexible and go with the model that works, and this can be very dependent on how you start and how things change over time. Picking a business model and running with it despite the fact that it isn’t working is a sure way to fail.
When I look at the marketplace today I see business models in all aspects and markets changing dramatically. I never go out shopping anymore; I buy everything on Amazon. I recently bought a car without ever doing a test-drive or going to a dealership, except to pick it up and sign the papers. I see brick-and-mortar stores becoming a thing of the past except in some very specific areas. I don’t usually buy clothes online because I need to see if they fit (although this is changing as well!)
People seem to believe that they have to hear a pair of speakers before they can buy them, to see if they “fit,” but speakers aren’t like clothes and I don’t believe that this is true. Still, it is a widely held belief. The problem is that the luxury of being able to audition speakers before you buy them is a very expensive proposition, and I don’t think that most people understand that because it has been hidden from them. Basically, for me to start using a “dealer network” I would have to double my prices to accommodate the dealer’s markup and overhead. Double is a lot! Would it make sense for me to do that? It seems to me that I would lose far more sales from the dramatic cost increase than I would ever gain from a dealer network. This would certainly be true in the short term.
It seems to me that it is a far better idea to buy a set of speakers based on measurements and user recommendations than to have to pay double just to be able to audition them conveniently. The trend that I think is becoming popular is to send out the speakers for an in-home trial, and if the customer is not happy with them then they can send them back. This works fine for lower cost (smaller), higher volume speakers, but for my built-to-order process and my large and heavy speakers this is more of a problem. That said, I would certainly take back a pair of speakers if the customer were dissatisfied – to date that has not happened even once. I have not had a customer who was not elated with their purchase, so there would be very little risk for me.
So the answer to your question is that I intend to use whatever business model works best for me and my customers. Right now what I am doing seems to work, but I would agree that it doesn’t scale very well. That’s not to say that I intend to scale up my business. As you know, I am also a consultant in a number of audio-related areas and that business is also very good. Ramping up a manufacturing operation is just not that attractive at my age. In all honesty, I’d like to sell the business and just stay on to design new products.
What I think is certain is that I will continue to make my loudspeaker designs available one way or another, what is not certain is where the prices will go. They have only gone up thus far and they have gone up substantially since I introduced the line. My guess is that they will go up even more since no business model that I can see would allow for anything like a price reduction or discounts.
LB: What are your current thoughts on high-end audio, and where do you see it headed in the next 10 years?
EG: Elite high-end will always be there, but it will get increasingly smaller and as a result increasingly more expensive. It is inevitable that people will get educated in what matters in audio and see that there is no relationship in the high-end between what matters and the prices. As fewer and fewer people “buy-into” the high-end dogma the margins will have to go up to sustain the ever shrinking market. We see this now. The future, I believe, will belong to “value.” The number of people who appreciate good sound will remain relatively constant, but as fewer and fewer of them can afford the elite products, they will become more and more value-oriented. It is much more satisfying to listen to a really good system that didn’t cost more than your house than it is to brag about how much you spent on something that isn’t really any better than your neighbor’s. My systems are not all that expensive and yet I would put them up against anything out there today at any price level. I consider myself to be the value leader in loudspeaker and audio design today. This has been my goal – not gouging the consumer with snake-oil techniques.
LB: Earl, on behalf of Dagogo and our readership, I’d like to thank you for taking the time to speak with us. We wish you continued success.
EG: Thank you. It’s been fun.