
Audio by Van Alstine ABX Comparator Review, Part 2: Trials


What to listen for in blind testing

Having gotten my feet wet in ABX testing, I began to isolate which properties of the sound were giving me cues that a switch of components or systems had occurred. I was surprised to find that timbre was not the strongest indicator of any given system’s identity; rather, spatial cues about the soundstage and the bass presence were easier to ascertain. Center image size, focus, and depth were critical to quickly identifying a system, and once familiar with any given system’s imaging and soundstaging properties I went to that aspect of the sound first as my “touchstone” for identifying the system. The weighting, cleanness, and intensity of the bass were confirming aspects, which invariably supported the phantom-image results. The only time there was difficulty was in the aforementioned case of Orientation Error.

Tonality or timbre was far harder to distinguish, and eventually I came to nearly ignore it when discerning between systems! This might surprise some, given that the bulk of audiophiles see tonality as so important that premium sound cannot be obtained when it is not correct. That may be so, but it was not the easiest way to identify a system! In fact, tonality at times was so similar between systems that if I had been listening primarily for it, I believe I would have scored poorly on many trials.

At the conclusion of the review I touched base with Frank again, and he told me that a change had been made to the ABX Comparator in regard to the audible clicking of the switches when moving from system A to system B. It seems that in a quiet room there is a noticeable difference in the clicking of the relays. A user had clued Frank in to the fact that one could tell whether circuit A or B was in use by the sound of the switching!

It never occurred to me to listen to the internal switches clicking, as I was so obsessed with focusing on what happened after the clicking, namely the test sample, that I never noticed the slight difference. Perhaps I was desensitized to such clicking by the Gatling-gun clicking sound of the stepped attenuator inside the Cambridge Audio Azur 840E Preamp, which I reviewed and own. I’m sure that if I returned to using the ABX unit I would be able to hear the difference. I am thankful I paid no attention, because for me it would have ruined the experience. Having to actively ignore a “cheat” like that would have been awful, to the point that it would be nearly impossible to be completely objective. I do not believe it influenced my testing, as my scores showed variances tied to system structure. In fact, I would suspect someone’s results were fudged if they uniformly passed every ABX Comparator trial! It’s good for future users that the timing of the relay has been changed to make switching between systems A and B sound identical.

 

Grrrr… orientation error!

In my third trial the dreaded Orientation Error surfaced again, this time causing me to miss my first three tests before a flawless finish. This makes me wonder whether Orientation Error may account for the bulk of poor scores in ABX testing.

Moving on to a fourth trial I added another hurdle: judging between systems using 8 different pieces of music, a new one for each test. I returned on a different day and simply launched into the trial without familiarizing myself with the pieces of music, using lesser-known tracks. I expected a tough outcome, but was shocked that I scored a perfect 8 for 8! How does one explain that previewed tracks did not always result in a perfect score, yet unfamiliar pieces never previewed were no barrier to success?

Part of the answer may lie in the freshness of my ears, as I was well rested and focused on the trial. Also, obviously, Orientation Error did not enter into this particular trial, so as long as I chose accurately on the first test I had a strong possibility of doing well on all of them. Finally, by staying focused on the properties that marked the different systems, center image and bass response, I was guided toward the correct answers. It was becoming obvious that there are many delicate influences on how one does in ABX testing.

As long as I was fresh at the task of conducting the comparisons and focusing well on each test, and avoided the dreaded Orientation Error, my scoring was consistently superb. Seven of 8 or a perfect score was not uncommon. My confidence was renewed as I moved to the second set of systems.

 

A new system and some new music

The second system I built was more upscale, partially to determine whether better gear was a significant factor in passing ABX tests. In hindsight I do not think the better gear made it easier to choose accurately in blind testing. It did, however, lend much needed additional credibility when it came to assessment of amps, which I will discuss shortly.

The second set of systems consisted of the following:

System A: Mac Mini/Clarity Cable USB/ EE Minimax DAC Supreme with Silnote Poseidon GS Power Cord/ Clarity Organic IC to ABX Box/ ABX Comparator with Silnote Poseidon GL Power Cord/ Morrow Audio MA7 Grand Reference Interconnects/ VH Audio “house blend” speaker cable 3’ lengths for amps (Thank you, Chris V.!)/Wells Audio Innamorata/ Signature Sound Z.1 Speakers.

System B: Identical except for VAC Phi 200 tube amp/ Kirksaeter Silverline 220 speakers.

Once again, the selection of speakers was partially based on obtaining as closely matched towers as possible, with the presumption that if discernment was possible between these, it would be even easier with vastly different speakers, i.e. those of different technology such as electrostatic versus dynamic, or a full range floor standing speaker versus a bookshelf model.

I tried to keep things fresh in terms of the music selections for testing, so as to prevent errors due to familiarity or giving myself an advantage. Forcing myself to shuffle the music and intersperse well-worn and lesser-known pieces would keep me on my toes. Some of the songs used included “Until You Come Back to Me” by Hil St. Soul, “Reference Point” by Acoustic Alchemy, “100 Years” by Five For Fighting, “Here Comes the Rain Again” (live), by the Eurythmics, “Lonely River,” by Susan Ashton, and “Make You Feel My Love” (live), by Adele.

Results of sighted comparisons followed suit with the lesser systems. Listening sighted and switching between systems, I found preferences in terms of both individual components and systems. Blind testing forced me to listen more acutely, yet I was still able to distinguish the systems with consistency. On one blind trial I used “Soulfood to Go” by Manhattan Transfer, listening to 30-second segments, with my preferred pairings of the Innamorata with the Signature Sound speakers, and the Phi 200 with the Kirksaeter Silverline 220s. I blew the first test due to Orientation Error, and after adjusting on the second test scored 7 for 8.

I conducted a second trial, changing the music, with no preview, to “Sierra” by Boz Scaggs. On this occasion I felt extreme confidence, and blew through the 8 tests of the trial. The results were as I expected: a perfect 8 of 8. Such results continued to confirm that there are indeed discernible differences between components and systems, which can be heard in blind testing. Yet through it all, the thought that those differences were not nearly as large as often described was ever present.

By trial number three I was feeling cocky, and I queued up Santana’s “Maria Maria” featuring The Product G&B, ran through the tests – and blew it by scoring 5 of 8. Ouch! Overconfidence hurts ABX scoring! But slowing down once again and taking the entire 30-second interval with Holly Cole’s “River,” the outcome was a perfect 8 of 8.

The presence of perfect scores in trials is strong evidence that I was picking out the correct systems using a blind testing method. Had I no ability to do so, or had the systems presented no discernible distinctions, I would not have been able to consistently score beyond guesswork, roughly 50%.
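The guesswork argument can be made concrete with a quick binomial calculation. The sketch below (not part of the original review; the helper name is my own) computes how likely the reported scores would be if each of the 8 tests in a trial were a pure coin flip:

```python
from math import comb

def p_at_least(k: int, n: int = 8, p: float = 0.5) -> float:
    """Probability of getting at least k of n tests correct by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A perfect 8 of 8 by guessing: (1/2)^8 = 1/256
print(f"P(8 of 8 by chance)   = {p_at_least(8):.4f}")  # 0.0039
# At least 7 of 8 by guessing: 9/256
print(f"P(>= 7 of 8 by chance) = {p_at_least(7):.4f}")  # 0.0352
```

So a single 8-of-8 trial would occur by chance only about 0.4% of the time, and even 7 of 8 only about 3.5% of the time; repeated scores at that level across multiple trials are very unlikely to be guesswork.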

I believe most seasoned listeners who concentrate and react to a potential Orientation Error could pass such ABX testing. Frank Van Alstine seems to concur that one can pass ABX Comparator testing with regularity, but that it takes focused concentration. In a noisier environment, or with distractions, even those caused by other listeners, I think the results could suffer significantly.

9 Responses to Audio by Van Alstine ABX Comparator Review, Part 2: Trials


  1. Dan Kuechle says:

    Here is a list of the latest changes / additions to the ABX box:

    1) As mentioned in the article, the common ground requirement between speaker-level components has been removed, thus allowing bridged-amp testing and preventing damage from an inadvertent reversed speaker connection.
    2) At any time now, during blind test, system A or system B can be selected (as well as going back to system X). This allows the listener, at any time, to reference either system and then go back to the unknown system under evaluation.
    3) In blind test mode it has always been possible to run through the 8 tests as many times as you want, moving from test 8 to test 1 in the “up” direction and from test 1 to test 8 in the “down” direction.

    Dan Kuechle, designer – AVA ABX Box

  2. Anonymous says:

    I wouldn’t use an additional box and cables to compare audio equipment; it just adds more to the system that isn’t required or normal to have, and all it’s going to do is mitigate any differences there are between the equipment being compared.

  3. Werd says:

    Hello
    While in possession of the ABX comparator, did you leave the comparator in your main listening system?
    That is, not in use as a comparator, but running, inserted, and hooked up in your system?

  4. Dan,
    God’s Joy,
    Thank you for the updates to the unit!

    “Anonymous”
    Unless you have perfect hearing, which I highly doubt, and perfect recall for that matter, then the additional equipment IS necessary to ascertain whether one is able to pick out gear or systems in a blind test. Also, unless one has definitive proof that all such equipment actually does mitigate any differences, then your assertion is conjecture.

    Blessings,
    Douglas Schroeder

  5. Werd,
    The Joy of the Lord to you,

    No, I removed the ABX when building systems for other reviewing purposes.

    Blessings,
    Douglas Schroeder

  6. Bob Bashaw says:

    Was it possible to compare just two units, such as two amplifiers? I would love to know if you could hear the difference between two level matched amplifiers with everything else being identical.

    • Anthony says:

      Hah – of course that is what it was designed for and the way it should have been done and would have been the scientific way to test something – one change at a time and limiting all other variables, but that would be too sensible. A reviewer would never do that as it would show them up.

      The differences in speakers are many orders of magnitude bigger than the imperceptible differences in _properly_ designed and _compatible_ electronic components.

      Comparing two completely different systems with different speakers in different room positions, as the reviewer has done, and then trying to attribute differences to any one component within those systems is ridiculous. This is a botched attempt by someone trying to “play” science, rather than real science. Despite that, it is a step in the right direction and better than most audiophile reviewers manage – as long as he accepts that his testing was flawed and communicates this. If he thinks he did things correctly, and that the testing is therefore pointless, it is yet another step backwards.

      Suppose he had tested a CD player and a DAC with the exact same source data, e.g. a non-HDCD CD against a file ripped from that CD (rather than a download from a different remastering), or simply the digital output from the CD player fed into the DAC; played them level matched (properly level matched, with a multimeter rather than by ear); ensured that the CD player and DAC output impedances were suitable for the input impedance of the amplifier; verified that both devices have flat frequency response; and confirmed that there are no obvious issues/flaws with the system such as hum or weird tones due to ground loops, etc. If he did _all_ that, kept the rest of the system completely the same, and tested blind, he would not have a chance of doing better than random guessing.

      I suspect he could even have used this ABX tester to test just a change in interconnects with the rest of the system remaining the same. Perhaps even just a change in speaker cables.

      If you want to read a good write up of how this sort of comparison should be done, have a read of the “Audio Equipment Testing White Paper” by Roger Sanders over at Sanders Sound Systems website. It is quite accessible for the laymen to read and covers the essential concepts of what is required for valid scientific testing.

      What I am seeing from reviewers and audiophiles who are afraid of a system like this is that they are going into it with the wrong idea. Rather than treating it as a test of whether you personally can detect differences in components, you should use it as a test of the components themselves and whether they actually sound different from each other or exactly the same. It is not a test of your manhood or your golden ears; it is a test of the component and whether it actually sounds different or not.

      With science you start with a theory and try to produce _valid_ tests to prove or disprove it, and you are to accept the results whatever they may be, provided your tests were actually valid and can be shown to be so.

      With situations like this, you seem to have a presupposition that, for example, $30,000 CD player “A” _must_ sound different from $3,000 CD player “B” because that is just what everyone says, and they say they can hear it, so it _must_ be true. So your theory then becomes: since A is better than B, if I can’t hear the difference then I must be faulty. I don’t want to risk knowing that, so I won’t do the test, or at least I’ll set it up lopsided so I can’t be proven deficient. Unfortunately, if the original presupposition is faulty (which it could be), then that invalidates your conclusion that your hearing is deficient.

      What you should be doing is saying, ‘people say $30,000 CD player “A” sounds better than $3,000 CD player “B”, but there is no valid proof – just subjective opinions’. The theory is then that $30,000 CD player “A” sounds better than $3,000 CD player “B”, so you design proper valid tests of that theory, and whether it is proven true or false, that result is what will be accepted. If the tests are valid and CD players A and B are proven to sound exactly the same, then the theory is disproven, we must accept that they do in fact sound the same, and anyone hearing differences between them is therefore incorrect; there must be some other factor at play to account for why they are hearing differences, or why they say they are. For example: expectation bias, invalid testing (not level matched, say), changing too many variables at a time, taking too long between comparisons and relying on flawed memory of what something sounds like (even a few seconds can be too long), lying so they don’t lose resale value when they offload it, lying to promote a product, or something else.

      I really wish you audiophiles would wake up and accept this. You could save yourselves a lot of money and you could then spend that money where it actually makes an audible improvement and would probably cure your virulent upgraditus in no time. When you keep spreading misinformation, you not only do yourself a disservice, you also hurt other people that listen to your misinformation and act on it.

  7. Bob Bashaw says:

    Oops, my computer didn’t print out the entire article earlier! Sorry, and thanks for the comparison of the amps. I too had a similar experience. It was single blind, but I lost the ability to hear any differences. I never knew what to make of it. New heights of humility!

  8. Anthony,
    God’s peace to you,

    I did compare using single changes. I did more comparisons than I wrote about. It’s obviously not a lab test when conducted in a listening room. Imo, your objections are overdone. Without any animus, you sound like you have a bias and agenda to defend electronics and systems priced below a certain point. But, I’m not interested in extended argument about the article, nor your methods.

    Blessings,
    Douglas Schroeder
