Maybe I should call this article the Burn In Manifesto. It began as a few suggestions designed to caution against giving oneself over wholeheartedly to the idea that a component under consideration will change significantly with usage. However, it turned into an exercise complete with informal testing. Burn in, a process said to occur over dozens to hundreds of hours, is not readily subject to examination or demonstration. Yet it is held by many as a tenet of high-end audio. Is it a reality or a psychological phenomenon? I have had a burning desire to get to the bottom of that question.
When audiophiles reach a certain point of insanity, or immersion into the hobby, they begin to obsess over the condition called “burning in” components. Originally, burn in was part of the quality control process: testing which stressed a product for a predetermined period of time to see if it would fail during initial operation (cf. the Wikipedia entry “Burn-In”). Over time the practice has been extended into a principle regarding enhancement of operation. In audiophile parlance, “burning in” (otherwise known as “break in”; I will use the terms synonymously in this article) can refer to the process of extended initial operation of a component or cable to establish a theoretically higher level of performance. It can also refer to the end result, a component or cable which has been burned in or broken in. Due to the mechanical nature of speaker drivers, a physical break in phenomenon would be expected in them; I am leaving speakers out of this discussion. However, the day may come when I apply some of the principles I consider here to assess them as well.
The concept that components break in is widely held in audiophile circles. However, it is not without debate. From casual observation, a majority feel it is an actual physical change which occurs in the component/cable, resulting in an audible change. Fewer feel that it is largely a psychological effect, attributable to one’s getting used to the sound of the equipment, and that no significant change happens except in the perception of the listener. Another minority of audiophiles prefer a compromise, that both the component and the ears undergo a transition.
Testing that which is elusive
While it is relatively easy to find break in “tools” – products which claim to enhance performance when used in a system for a length of time – it is much more difficult to find tests, as in studies, determining whether an electro-mechanical change causes an audible change. Searching online for terms such as “burn in test” or “break in test” yields little in the way of formal tests to ascertain the reality of the phenomenon. What is discovered are scads of opinions. On audiophile sites, the conclusions reached vary widely, as do the methods of reaching those conclusions. Some suggest that once broken in, if a component is shipped, it somehow loses the benefits of break in and must undergo the process again. Similar is the contention that if a cable is removed it must be re-burned in, to have its dielectric charged, in order to return to its former sound.
Many manufacturers give burn in guidelines, sometimes with mild warnings about getting the job done right. One amp manufacturer matter-of-factly stated that the best method of burn in for their product is not to play it for more than 8 hours in the initial usage, but to play it regularly for shorter intervals. Others mildly suggest that such a blunder (limited initial break in) would render their unit less than ideally broken in. Certainly, there is no uniform belief or methodology when it comes to audiophile burning in of equipment.
Fraught with uncertainty
The entire topic of burn in is fraught with uncertainty. Typically, listening sessions are conducted over a span of time to judge whether a component has changed. One person burns in a component for dozens to hundreds of hours and hears a change, while another does the same and feels there is no change. If measurements were used, an argument would rage over whether those measurements indicated an audible change.
Add to this the reality of audiophile politics, and burn in becomes a tricky thing to tackle. I wonder how many pieces of gear have been auditioned under the assumption that they had been burned in when they had not been, or had been run for far less time than some would expect. At audio shows I have discussed with manufacturers their last minute assembly of speakers, amps and the like, operating them for the first time at the show! Some very exciting sounding rigs at shows hold components which would not qualify as properly burned in.
Cutting through the burn in smoke
Mangled metaphors aside, it seems the most practical method of ascertaining the efficacy of burn in is to compare two identical components or cables, one broken in and the other not. If they sounded evidently different, the phenomenon would be experientially confirmed. If they sounded identical, it would be experientially disconfirmed. Most audiophiles will not have multiples of components to do this, though some may have multiples of cables. I am blessed to have the means to afford three systems of varying quality and price, and I have bought more than one unit of components I enjoy, one set residing in my office – thereby setting myself up for a test.
Meet my friend.
Obviously, if I am conducting an informal test I will be using personal judgement as opposed to measurements. While this opens my observations up to criticism, empirical observation is the backbone of audiophilia, a widely used “tried and true” form of assessment. It is also the final arbiter of any individual’s selection of gear. In this particular instance, I invited an audiophile friend whose ears I trust, who has always been able to easily hear the nuances and variables in the music that I do, to join me. He does not always agree with me and is comfortable holding a contrary opinion. However, we have always been able to understand each other and clearly elucidate our perceptions. His presence in the comparison test was invaluable, as his experience and conclusion might counter mine; he might not hear the same things I do and thus could weaken my conclusion. If I am going to be as objective as I can and get the most out of the experience, I have to accept that possibility and report on it. It might be noted that he reads my writing and would most certainly call me out if I fudged his involvement in the matter! I would expect no less from an honest, committed audiophile.
I am going to go one step further and introduce him to you! In fact, I will let him comment so you can have his input. Certainly it makes things more interesting when, like Siskel and Ebert, a duo sounds off! His name is David, and he has been an avid audiophile for decades, especially enjoying vinyl. He has lovingly developed a beautiful system in its own right, which can be seen on Audiogon under the virtual system name “Storyhill”. He has visited my listening room no less than a dozen times in the past three years, such that my wife suggested I introduce him as “…the man who lives in our basement.” He has a better handle on the sound of my system than anyone else. He would confirm that the quality of the system is such that, with the components used in this test, it would be easy to hear the distinction made by switching one power cable or one set of interconnects, much less by an exchange of a component.
That is an important point, since critics of this experiment might suggest that the gear is not of sufficiently high quality to yield meaningful results. I strongly disagree, and I believe David would disagree as well based on past experience listening to these components. In fact, David was privy to hearing the results achieved with the Peachtree Audio Nova, which was written up in the March 2010 Audio Blast column. He openly applauded the quality of its sound, so I think he would reject any attempt to dismiss the test by discounting the gear used.
I have not shared the article with Dave, only this segment which introduces him, so as to ensure that he speaks without being influenced by my discussion. I am going to let him share a few thoughts about the test and its conclusions:
“I spent an evening at Doug Schroeder’s house where I participated in an audiophile test. I have become quite familiar with Doug’s room and equipment. Through the three years I have known him, his system has evolved from impressive to reference-quality sound. I have heard the likes of Legacy Audio, Jeff Rowland, Pathos, VAC, Kingsound, Ayon and many others [at his home].
A couple of weeks ago, Doug, who strives to take the untraveled path, had set up a relatively economical Peachtree integrated, fed by an Ayon CD-5, powering Kingsound speakers. An unlikely combination that yielded impressive results. So impressive as to provide a most enjoyable evening of music. Fine. There’s Doug again breaking another audiophile rule by pairing $20,000 worth of source and speakers to a $1,000 integrated. With unlikely success.
Doug was so impressed by the synergy of this setup, and still needing an integrated for his office, where the original Peachtree was to serve, he bought another one. Along with his two Cambridge Audio 840Cs, one for his office and a new one for his budget reference, his test could be conducted.
Does break-in make an audible difference compared to fresh out-of-the-box electronics?
Sequence: Cambridge Audio, to Peachtree, to Kings. Simple. Since I am pretty familiar with Doug’s system, there was a bit of a letdown as the Ayon CD5 was not feeding the Peachtree. However, after playing a series of tracks, we settled into the pleasant and yet still impressive sound of the Kings.
First up was the “broken-in” duo. Very nice. Sometimes when audiophiles compare equipment or upgrades we strain to hear and note the differences. But sometimes they are just not there. When we connected the “new” items I tried to perceive a measurable, qualitative shift. Difficult. When we went back to the “broken in” pieces, once again difficult.
Again, sometimes I’ll evaluate the benefit of one sound to another by my overall impression of the music. Am I moved by the performance? How? I did this most recently comparing phono cartridges. I had a distinct preference for one over the other, in part, as a result of my emotional and intellectual impression of the music.
In the case of Doug’s test, I couldn’t identify a sonic quality difference, and my enjoyment of the music remained the same.”
(I have only one slight correction to David’s recollection; the new Azur 840C went to the office along with the new Peachtree Nova Integrated. The test would pit the seldom used office gear against the much used listening room gear.)
A word about the level of quality of the components used in this test. Just as I am in the throes of writing this article, I see a discussion on one of the audiophile sites wading into the topic of the efficacy of warm-up periods for gear. A tangential tangle of topics includes the assertion that higher-end gear undergoes more of a break in than lower priced gear. Huh? When has that been demonstrated? Has the relative break in of gear of differing quality ever been tested? How would one reach such a sweeping conclusion regarding such an open ended inquiry? I suspect it rests more on hearsay and speculation than on testing, formal or informal. It may be nothing more than received wisdom resting on a hunch. If there is an organized answer to this particular question I would invite links and sources to be directed my way. However, I am not interested in argument for the sake of argument.
Besides, we are ultimately concerned with systems. How does one go about assessing the relative change of one system versus that of a completely different system? How does one measure the efficacy of burn in for two unique stacks of gear, and whether Stack A is more altered than Stack B, despite both being run for two hundred hours? Considering that all three of the components/cable used in this particular test have been demonstrated to alter the sound of a rig at the $25K point and up, I suggest that they are as capable of being burned in as any upper echelon piece of audio equipment.
The Law of Efficacy & the Test
As a hobbyist I went through a lot of equipment trying to upgrade my sound. I eventually developed what I call my Law of Efficacy to eliminate marginal improvements to the rig. My conclusion after years of swapping gear was that it is of no long term value to make perceptually minor changes to a rig; all that does is limit the amount of improvement! A change has to be big, huge, overwhelming – whatever you wish to call it. Dave commented on the progress my systems have made in a short number of years. This has largely resulted from my rejecting marginal improvements. I pursue for my own system only perceptually massive increases in performance quality. If a component or cable fails to bring a massive change, then it is out.
I have increasingly applied the principle behind the Law of Efficacy to many aspects of listening, and here apply it to the phenomenon of burn in. My question: Is burn in worth it? Whether or not it is real or perceptual, does it yield a significant enough change to warrant paying attention to it? I intend on commenting about both of these questions.
The test itself was quite simple: the establishment of two identical rigs, one with three elements that had been used for over 200 hours of total time and were subjected to an additional 50 hours of pre-test burn in, the other (my office rig) scarcely used and not given the 50-hour burn in period. In addition, the test began with the broken in gear remaining on and being used immediately at the conclusion of the 50-hour run, while the other, less used set was played from a cold start.
Addressing a potential objection to my methodology: someone might complain that I burned in the equipment for only 50 hours. The truth is no one knows how long burn in is ideally supposed to last. It is highly subjective in terms of the length of the burn; the period is determined by a sense that it is complete. How many manufacturers have tests which are able to show that the sound is “mature”? With a pizza you can see when it is done. With a car you can see the odometer indicating an oil change is needed. What if cars had no odometer, and one had to get an oil change by sensing when it was needed? What if you had to guess when the time was right? That is the scenario we are dealing with in the matter of burn in. A lot of people are guessing when their rigs are burned in. A man breaks in a component for 100 hours. Wouldn’t it have been better to run it 200 hours? The argument can be extended indefinitely; I have seen claims that certain equipment must be run 500 to 1,000 hours. There is no certainty in any of this.
In a related thought, has anyone spoken of quality hours of burn in, somewhat akin to quality watts? The “first watt” is heralded as the most critical. What about the first hour of break in? Is that the most critical time period? Maybe the first 50 hours are most important. Does one have to run a component or burn in a cable for 200 hours to get the best performance? It becomes readily apparent that subjectivity can run amok in this discussion. If someone wants to condemn my results because I did not break in the components for 100 or 200 hours, so be it. But recall that the already used gear had hundreds of hours on it prior to the 50-hour burn in. It was already used regularly, whereas the other set was quite close to being fresh out of the box, nowhere near the usage time of the first.
It was necessary to use solid-state equipment for this test, as tubes would be problematic: allowing a warm-up time of up to an hour would invite understandable questions regarding the results, and an hour is a long time to wait when comparing the sound of two rigs. Much better is the two minutes necessary to swap two power cords, one set of interconnects, and one set of speaker cables between component stacks sitting side by side, which was the case in this test. The conditions were kept as nearly identical as possible, including listening level, type of cabling used, speaker connections, seating, running the entire test without interruption, etc.
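Since our comparison was sighted and informal, a skeptic will fairly ask how many correct identifications it would take before chance could be ruled out. As a purely illustrative sketch (we did not run a blind protocol, and this calculation is my addition, not part of the test itself), here is how the chance probability of a blind A/B listening tally could be computed:

```python
from math import comb

def chance_probability(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` identifications right
    out of `trials` blind A/B comparisons by pure guessing (each trial
    is a coin flip, p = 0.5)."""
    hits = sum(comb(trials, k) for k in range(correct, trials + 1))
    return hits / 2 ** trials

# Example: correctly naming the burned-in stack 9 times out of 10
# would happen by luck only about 1% of the time.
print(chance_probability(9, 10))  # ~0.0107
```

The point of the sketch is simply that a handful of casual comparisons, like ours, can neither confirm nor rule out a subtle difference; it would take many consistent blind trials before chance guessing became an implausible explanation.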
I decided to use a more economical rig representative of something which more audiophiles might be able to afford. I recently was very favorably impressed by the Peachtree Audio Nova paired with the Ayon CD-5 player. I wrote about it in a separate Audio Blast. It was the acquisition of the second Nova which stirred my thinking about having enough gear to test a system for burn in effects.
The two systems were composed of the following; visualize the system starting from the wall outlet leading toward the speakers:
– WireWorld Electra power cord used on Azur 840C (Tested component)
– Cambridge Audio Azur 840C player (Tested component)
– WireWorld Gold Starlight Digital Cables (burned in; used in both systems)
– Clarity Cable Vortex power cord used on Nova (burned in; used in both systems)
– Peachtree Audio Nova Integrated/DAC (Tested component)
– WireWorld Silver Eclipse speaker cables (biwired; burned in; used in both systems)
– Kingsound King ESL speakers using VAC Royal Power Supply (burned in; used in both systems)
– Clarity Cable Vortex power cords feeding VAC Royal Power Supply (burned in; used in both systems)
The listening test was simple; we played four songs first on the burned in stack, then switched cabling and played them all, in the same order, on the fresh components. The songs we heard were Lisa Loeb’s “Underdog” and “Falling in Love”, and Leo Ciran’s “Hypnotized”. The results were interesting but not really surprising to me, in that we were unable to detect any meaningful differences between the two systems. When I say “meaningful” I am indicating that we were both listening critically, focused on the songs in a hyper-attentive way. We were ready to catch any differences, even subtle ones, between the two systems. To both of our minds there were none! During this trial the burned in equipment failed the Law of Efficacy spectacularly; there was no evident improvement in the sound from the components having been run 50 hours nonstop and warmed up. In terms of selecting which stack of gear we thought sounded better, though we agreed there was no evident difference, both Dave and I preferred the less burned in equipment. This was unexpected. I felt it was due to having heard the music more recently on the non-burned in system.
In order to test that perception I suggested we reverse the test and play two additional songs on the fresh system, then return to the one which had received burn in. We did so, listening to Sarah McLachlan’s “Angel” and Sade’s “Soldier of Love”. Again, we had no impression that the music had been altered between the two systems. This was significant. Let me point out that Dave and I can regularly discern changes as minor as one power cord or one set of interconnects being switched in my systems, even on ones with these very components. We can hear the difference in the sound when the optional 24-bit/192kHz upsampling is engaged in the Ayon CD-5 player. We easily hear the difference in sound between a CD which has been treated with a polish/cleaner and one which has not. In other words, we can quite easily hear what might be considered more subtle changes in the system or media. I would assert that if an audiophile has good gear, a good room, and good ears, they too should be able to hear such things easily.
A practical consequence of the Law of Efficacy is that if I cannot easily hear the distinction between two conditions, then the treatment, component, or cable fails the Law of Efficacy. In other words, if it is a treatment, it is not worth doing; if it is a component, it is not worth changing. A struggle to discern a difference means you cannot say you really heard a difference. It is one thing to have difficulty describing a difference in sound. It is another to be uncertain that you heard a difference at all. We did not have uncertainty in this regard; we were clearly unconvinced that a meaningful change had occurred. If there was a change, it was so elusive that it would not merit the time spent on burn in. It certainly would not be enough for me to consider holding on to a component whose sound I did not like in the hope that it would change for the better over time.
Not only was this a failure of one component to display a difference from burn in, but a global failure of a system of three newer components, including a cable, to show any difference due to burn in. That is significant, as there were potentially three chances for a perceptual change to take place. The fact that none of the three showed one strengthens the conclusion that, at a minimum, an initial burn in of 50 hours does little to alter the sound of cables or components.
My original recommendation was upheld by this test: do not rely upon anticipated changes from burn in to determine whether to buy a component. Dave joked with me, raising the rhetorical objection that I had tested only one system with this method and so could not apply my findings to all systems. I would counter that an experiment, even one conducted casually, carries more weight than mere conjecture. The results suggest that this is likely not the only rig where burn in has little audible effect. In fact, it raises my curiosity to test more gear similarly in the future.
In principle, it is possible that on a rig with exceptionally high-end gear, costing perhaps $150K and up, subtle distinctions from 50 hours of burn in might reveal themselves. It may also be that they are only detectable on such systems, so that the vast majority of audiophiles never experience them. For those audiophiles it would be a moot point; why subject gear to burn in when the effect can only be heard on state-of-the-art systems? I am not suggesting the equipment used was insufficient to reveal break in. I am wondering whether break in may be so inscrutable that only state-of-the-art systems have a chance of being audibly influenced by it. However, I have my doubts about that. I tend to think it more likely that even on very high-end rigs, break in compared between two identical components, or even systems, would not pass the Law of Efficacy. If the results of this test hold, the more you burn in a component, the more you are simply wearing it out prematurely.
What if it’s all in your mind?
Are we willing to suggest that we have such honed aural memory that we can be certain equipment has changed its sound over days to weeks? I find that claim tenuous. I have said publicly that I can hear the difference in my rig when a power cable is switched on the source or amp. But that is an immediate, physical change. One argument which may carry some weight in terms of physical causes of burn in is the annealing of metal. There is also the argument that the dielectric of cables becomes charged and is disrupted when they are moved. Both of these should be further testable by audiophiles in the manner in which I conducted this test.
I can effect an immediate change in sound by replacing a cable, which is more helpful in making a decision than trying to sense what is happening (or not happening) over weeks. If I compare two components or cables side by side, logic suggests I should choose the one which sounds best. Yet when speakers, amps, etc. are sold at stores, customers are told to “give them time” and wait for burn in. I do not advise this. If you are hesitant about the sound, is it really supposed to become that much different over time? No, not in my experience; the results of the burn in test give no indication that it will.
Do we not change?
I would assert that a significant change happens to the hearer during the honeymoon period as the ears acclimate to the product. Why is there so little discussion of “acclimatization”? Why do we not recognize it as an audiophile phenomenon, but argue over burn in instead? Would anyone argue that the listener does not acclimatize to the nature of the system over time; that one becomes able to “hear more deeply into” the speakers, preamp or cables, to become more intimately familiar with the sound? What precisely is there to prove that it is not hearing acclimatization – namely, growing more comfortable, more intimate with the component – which is happening and being given the name burn in?
In the end, the reader will determine whether four ears, Dave’s and mine, are to be trusted in this matter. That is the nature of the game; a person writes what they hear, or in this case, what they did not hear. You must determine whether their experience was legitimate, and if so, whether it is transferable to your situation. I am fairly certain that I will not let the matter rest with just this one test. I foresee several more comparisons in the future, in an attempt to either strengthen or reverse my conclusions as objectively as possible. This was just one test, but a test with a very specific outcome, and one which leads me toward specific conclusions regarding burn in. Could my thinking be reversed with continued testing? I would be hard headed and foolish to say that it could not. But for now (remember, excepting speakers), my conclusion is that burn in is more appropriately linked to a mood or feeling than to a physical change in components.