
Audiophile Law: Burn In Test Redux


It has been nearly a decade since I wrote the last, #6 Audiophile Law, “Thou Shalt Not Overemphasize Burn In.” At the time I thought it might create a firestorm of conversation and controversy. Instead, there was the distinct sound of the wind blowing; largely, no pushback, no big discussion. What do I make of that, and of the fact that when I link to it in threads there is still not much argument? Perhaps the article was so well argued that there is not much to debate. The other possibility is that audiophiles are so arrogant, so self-confident, that they cannot be bothered by such an inconvenience as a test showing that they imagine changes to their systems that do not exist. Never underestimate the hubris of an audiophile in such matters!

The comparisons chronicled in that article moved me to fundamentally change the way I operate as an audiophile, and the way I review. I summarily ceased paying attention to the nebulous phenomenon of “burn in” and no longer concern myself with the warm up of equipment. I concluded that attending to these notions actually hinders the audiophile from making progress with a system. Consequently, I now make changes to systems much faster, trusting my ears, and arrive at an optimized system more quickly, having stopped using such insipid methods.

If the circumstances allow, I circle back to things that I have experienced in the past to see if anything significant has changed. I did my own work with sets of cables prior to when I started reviewing, and that formed the basis for my methods with an emphasis on proper assessment requiring comparison of sets of cables. Since then, I have taken the time intermittently to conduct cable reviews, always working with a full set from the manufacturer. I dabble occasionally in dynamic, panel, and hybrid speakers, and float between the genres. A few times I drifted between solid state amps and tube amps, and lately have returned to what I would consider my third round of assessment of class D amplification. Each cycle of involvement has been instructive. For instance, the amp used in the comparisons here, the Legacy Audio i.V4 Ultra (four channel version as indicated by “4”), has forced an update of my opinion of the sound quality of class D amplification to such a degree that I am making some changes to how I review. You can read about this amp in the review to appear at Dagogo.

The comparison discussed below will follow the same methodology as in the first article. I do not have an additional set of ears for this set of tests, because most of my audiophile friends are isolating due to Covid-19. But I am not willing to push this article off indefinitely.

Those interested in my previous discussion of the topic should read my original article, where I use the terms “burn in” and “break in” synonymously. Technically, there are supposed to be distinctions between them, but not experientially. Does the capacitor “burn in”? Does the speaker driver “break in”? Does a cable burn in or break in? Debating such things, apart from appreciation of the manufacturing process, is of little value in terms of actually building a superior rig. The purported result is supposed to be the same, an easily distinguished audible change over time. My bias in the matter is that, especially after the first set of comparisons, no audiophile can hear purported changes over time, and no changes were revealed in direct comparison when put to the test. Having admitted my bias, I attempted to return to the comparisons with a mental tabula rasa.

It should be evident that I do not hold such conclusions absolutely, as though there could be no alternative outcomes or advancements to overturn them, for I am returning to the comparisons. Just as I am returning to working with class D amps, I am returning to comparisons of tweaks, and I will either find further support for my previous conclusions or contradictory evidence forcing me to rethink them. This is a real possibility, as returning to class D amps has led to a rethink of their quality and suitability for two channel reference listening. However, in the comparisons under study here, it must be admitted that the degree of change between an amp circuit and a weight or isolation puck is large, theoretically reducing the potential for a reversal of my previous conclusions.

 

The System

The system comprised a digital front end: Small Green Computer sonicTransporter AP i7 4T with SMG 5V Linear Power Supply; SONORE Signature Rendu SE with systemOptique optical network; Clarity Cable Supernatural USB (1m); Exogal Comet DAC with Plus Power Supply. Also: Iconoclast by Belden “Generation 2” XLR Interconnects; Legacy Audio i.V4 Ultra Amplifier; Iconoclast by Belden SPTPC (Silver Plated Tough Pitch Copper) Speaker Cables, spade terminations; Salk Sound SS 9.5 Speakers. All power cords Belden (BAV) Power Cables with 15A IEC, with the exception of a Clarity Cable Vortex PC on the Legacy amp. The music source was Tidal’s upper tier service, played through Roon’s media software; all settings of Roon’s Digital Engine were turned Off. The cables and amplifier in the “optimized” system were broken in for 200 hours.

This comparison places less emphasis on components and more on what would be considered tweaks or methods. The eight variables under test were: broken in power cord, interconnects, speaker cables, and amplifier; amp stand; isolation cones under the amp footers; weights atop the amp; and a warmed up amplifier vs. none of these tweaks or methods. Stop to consider for a moment the import of the stacked conditions under comparison. All have been claimed to be important in terms of purported sonic change that is supposedly not only significant, but often discussed with adjectives such as “huge,” and promoted as vital for system building to obtain synergy or optimal results. This article does not assess the impact of isolation on analogue sources, or the warm up of tube equipment, though I did treat that topic in my previous article. An air of superiority surrounds those who use such methods, and persons who neglect them might be ridiculed as not being shrewd in methods of system advancement.

 

Results

As a lead in to the results, I remind the reader that in order for a change in sound to pass my Law of Efficacy, it must be immediately noticeable, repeatable, and significant enough to be thought of as a large change.

The music selected included large and small venue recordings, chiefly vocals and acoustic instruments. Male and female voices were used, and some symphonic music.

The two systems, “optimized” vs. “non-optimized,” were set up so that only the interconnects and speaker cables needed to be changed; both sets were in position, ready to swap, to economize on time. Burned in cables were marked with masking tape to avoid potential confusion. I allowed the optimized system to warm up for one hour, while the new Legacy Audio i.V4 Ultra Amplifier had never been turned on previously; I had set it aside for months, waiting for this series of comparisons.

Starting with the warmed up, optimized system, I played “Straight On ‘Till Morning” from the Wendy original soundtrack, followed immediately by “Hey Eugene (Watch Your Back)” by Pink Martini. The speaker cables and interconnects were changed with the listening level on the Exogal Comet’s digital dial matched (all tracks were played back at a setting of “77”), then the tracks were played back in reverse order, ensuring the most recent acoustic memory was preserved for assessment. No difference in the presentation of the music was detected. The optimized system failed the Law of Efficacy.

For the second round I began with the system configured with the non-optimized set of equipment, then after hearing two tracks, switched to the optimized system. I chose two different tracks, “Holy Water (Church Sessions)” by We The Kingdom, and “Everybody’s Cryin’ Mercy” by Jamison Ross. The results were consistent with the first round; no change in the sound between the two sets of equipment. The optimized system failed the Law of Efficacy.

The final round began with the optimized system and, as with the first round, moved to the non-optimized set. The music used for this set of comparisons was Acoustic Alchemy’s “Templemeads” from Live in London, and Don Williams’s “I’m Just A Country Boy.” Once again, there was no discernible difference between the sound of the optimized and non-optimized equipment. The optimized system failed the Law of Efficacy.

The results between the systems were so consistent that there were moments I emotionally preferred the sound of the new equipment, even though I could not identify any difference and the two systems were, in fact, identical! I could not isolate any parameter of sound quality that had changed between the two systems, though I listened particularly for changes in tone, decay, definition, headroom, frequency extension, coherence of drivers, transients, and both macrodynamics and microdynamics. The results were so similar that, for illustration, I was able to hear breathing and lips parting in equal measure on both systems. I remind the reader that I spend a tremendous amount of time, hundreds of hours every year, assessing systems for sound quality; essentially all my listening time is spent analyzing it. I rather enjoy critiquing sound quality, and I build systems regularly to attain more of it. I constantly hear nuances and differences associated with substitution of cables and components, not to mention speakers. When two systems comprised of the same equipment sound the same to my ears, they are for practical purposes the same. This stands to reason, as nothing was demonstrably different in the treatment of the signal; precisely the same equipment was used, down to the power cords, so why would there be any expectation of significant change to the sound? The outcome was precisely what one would expect of two systems comprised of the same cables, components, and speakers!

Note that this was as close to real time comparison as circumstances allowed. If I had a switch box, I believe the outcome would have been the same. It is over time, days or weeks, that audiophiles think they hear changes due to the eight variables tested, but timely comparison shows it is not happening. It would be wonderful if simple use improved one’s system audibly, but this flies in the face of both sensibility and reality. This experience, in concordance with the outcome ten years ago, reinforces that it is an audiophile fantasy that changes to audio systems are heard over longer periods of time, and that equipment audibly changes its performance with use.

 

A conclusion not easily dismissed, except by the arrogant

Stacking these conditions or tweaks renders a judgment on the lot of them. These are some of the most revered tweaks used by thousands of audiophiles, and they are promoted continuously as efficacious. Obviously, very few people, including manufacturers and reviewers, are actually conducting comparisons, for they fall prey to the same misconceptions as the rest of the community. In fact, they promote those misconceptions while appealing to their own authority!

Probabilistic analysis shows that as more conditions/tweaks are added to the testing, my conclusion becomes stronger. Take the example of flipping a coin, where the probability of heads is ½, or 0.5: attempting to flip 8 heads in a row has a probability of 0.5 to the eighth power, which is 0.0039, or about 0.39%. I approached the task similarly, starting with the presumption that each element of the comparison had a 50/50 chance of conferring audible change to the sound (it either does or it doesn’t). The conclusion is clear: it is far more likely that none of the eight tweaks changed the sound than that all eight produced changes which happened to go undetected, an outcome with that same probability of 0.0039, roughly four tenths of one percent. It becomes obvious that the expectation of a change is specious, and the odds of it happening are vanishingly low. Ergo, it is a waste of time for the audiophile to pay attention to break in, tweaks involving vibration control, and warm up. These have been heralded as essential, even absolute, methods of premium system building, yet they do nothing to alter the sound.

One might argue that I weighted too favorably the odds of a change in sound quality attributable to an isolation device, a weight atop an amp, or perhaps warm up; someone might argue they deserve only a 25% chance of occurrence. That argument is a slippery slope, with no chance of verification, and it only makes the combined odds worse. My assumption of a 50/50 chance is predicated upon the notion that if these items do change the sound, the change will be audible. If not, then why bother to adhere to the method or tweak at all? They either do or do not change the sound of a system audibly. Probabilistically, the results are strongly against any of them having meaning for the audiophile, especially when the previous article’s trials are taken into consideration!
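The coin-flip arithmetic above is easy to verify. What follows is a minimal sketch, not part of the original testing, that computes the probability of all eight independent tweaks producing an audible change, both under the 50/50 assumption and under the objector's 25% weighting; the function name `all_change` is mine, for illustration only.

```python
# Probability that all n independent tweaks audibly change the sound,
# assuming each tweak has an independent probability p of doing so.
def all_change(p: float, n: int = 8) -> float:
    return p ** n

# The 50/50 assumption: each tweak either changes the sound or it doesn't.
p_even = all_change(0.5)    # 0.5**8 = 0.00390625, about 0.39%

# The objector's 25% weighting, raised above.
p_low = all_change(0.25)    # 0.25**8 = 1/65536, about 0.0015%

print(f"50/50 per tweak: {p_even:.8f} ({p_even:.2%})")
print(f"25% per tweak:   {p_low:.8f} ({p_low:.4%})")
```

Note that lowering the per-tweak odds, as the objection suggests, only shrinks the combined probability further, which strengthens rather than weakens the argument.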

 

Objections

What objections can be mustered in the face of such a pitiful outcome by eight purportedly important methods of system building? Dismissal of my hearing acuity has no particular force, given the body of work I have created in regard to dissecting sound quality, and the optimized environment, as quiet as a mastering studio, in which the testing proceeded. The procedure I followed more closely mirrors blind testing than the long term, infrequent comparisons that characterize subjective conclusions on the efficacy of methods and tweaks similar to the ones tested. Most hard-headed audiophiles, out of nothing more than hubris, an appeal to their hearing apart from comparisons, will dismiss the results out of hand. It is too difficult for some audiophiles to admit that much of what they want to believe is happening is not, and that their longer term acoustic memory is poor.

Likely the most popular objection to my testing will be an appeal to a favored designer or manufacturer, or the weak argument that 200 hours of break in was insufficient to reveal a change in sound. To the charge that I used the wrong products I reply: you prove otherwise! Please, have at it! Go obtain the Clarity Cable isolation devices and weight for the top of the amp, then compare them to like products from other manufacturers. The results also speak to the testimonials of persons who have compared competing isolation products and claimed to hear a difference. There is no resolution of this difference of opinion short of a demo with the ears of the conflicting parties in attendance. Graphs and charts of diminished vibration are insufficient to demonstrate that the change is audible, and attention should be paid to the scale used in such graphs, as the measurements may be insignificant. I encourage fanatics of cable break in to secure two identical new sets, spend the time to break in one set, then conduct a comparison. Perhaps you would like to put in the money, time, and effort to speak from experience.

Regarding the claim of insufficient break in, it begs the question, presuming that adequate break in time brings an audible change. It is well known that there is no demonstrated, accepted period for burn in/break in; recommendations range from none to more than 500 hours! I suggest that there is no audiophile alive who can separate the demonstrated degradation of electronics over time from the putative improvement of their sound with use.

 

Don’t believe me? You do the testing!!

This article will have many critics and naysayers. Why don’t you do the testing? If you want to be so serious about your break in, warm up, and tweaks, then you compare! Be prepared to see for yourself that you have been wrong, and try to come to grips with the failure of dearly loved practices that are supposed to define serious audiophilia. Many will counter with one of the most arrogant claims in audiophilia: “I don’t have to; I know I’m right!” I used to talk the same way about such matters, back when I was an arrogant audiophile who wouldn’t spend a dime, take a moment, or lift a finger to test it! What changed? I became a reviewer, so now I have the equipment on loan at no cost, and thus the motivation to conduct the testing is higher.

 

Who is “selling” now?

I see an irony in all of this: as a reviewer I am regularly accused, directly or indirectly, of “selling,” as though I have an agenda to promote questionable methods and products without regard to the audiophile’s fiscal welfare. Let it be noted that here I recommend avoiding fancy audiophile isolation tweaks, room tuning and system setup services or sales built around peripheral objects, and purchase decisions predicated upon acceptance of phenomena such as break in or warm up. I am a staunch proponent of building better systems by methods that pass my Law of Efficacy, among them:

-Comparison of full sets of cables

-Avoidance of esoteric products that will not “play nice” with most equipment

-Proper tuning of a room using panels, diffusors, and bass traps

-Ideal positioning of the speakers, but not as replacement for equipment upgrades

-Clean higher power amplification and (ideally) higher efficiency speakers

-Pursuit of significant technological changes to genres of equipment

 

What will I be doing going forward?

How will I apply what I have learned from these tests? They confirm my decision a decade ago to stop sitting around waiting for things to change. They confirm my decision to stop seeking significant system change through peripheral means and instead to focus on what influences the power and signal paths. They affirm my resolution to be proactive, not inactive, in resolving disappointment with a system’s sound.

I will be building systems, as I have for the past decade. I will change them aggressively until I make them sound pleasing to me. I will not wait for warm up, burn in, break in, or pixie dust to fall from the heavens. I will not walk away for a week and pretend I hear something different. I will trust that manufacturers can make gear that is more consistent in operation than my subjective hearing over a week or a month. In brief, I will continue to trim the nonsense from the hobby, and I invite others to follow my lead.

 

Copy editor: Dan Rubin

 


2 Responses to Audiophile Law: Burn In Test Redux


  1. “Building better systems by methods that pass my Law of Efficacy”. Have followed the majority of your methods and can heartily agree with them. Puts most of the opinions of reviewers of major and minor publications to shame. A great approach free of self serving BS.

  2. michele surdi says:

    We may also posit that changes happen between the hearer’s ears; it’s called aesthetics. Thank you for your lucid prose.
