Moving a file from one place to another is easy in a data network: packets can be sent, dropped, delayed, re-sent, and still re-assembled into the exact same file at the other end.
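This bit-perfect property is easy to verify for yourself. The sketch below (an illustration, not part of any Antipodes product) copies a file and confirms that the source and the copy have identical SHA-256 digests, which is exactly what a data network guarantees for a completed file transfer.

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a transfer: write random bytes to a "source" file, copy it,
# and confirm the digests match, i.e. the copy is bit-perfect.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(os.urandom(1 << 16))
src.close()
dst = src.name + ".copy"
shutil.copyfile(src.name, dst)
assert sha256_of(src.name) == sha256_of(dst)  # identical ones and zeros
os.unlink(src.name)
os.unlink(dst)
```

Identical digests confirm the data arrived intact – but, as the next point explains, they say nothing about the timing with which that data is later presented to a DAC.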
But media files add another dimension – time. We don’t just need the ones and zeros. We also need the clock data. And a digital receiver works better when the clock data is totally stable. This does not automatically happen when transmitting over a data network.
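To see why clock stability matters even when the data is bit-perfect, consider the standard textbook model of jitter on a sampled sine wave: a timing error Δt produces an amplitude error of roughly the signal's slew rate times Δt, i.e. 2·π·f·A·Δt. The sketch below (a simplified worst-case model, not a measurement of any particular device) puts numbers on this.

```python
import math

def jitter_error_db(freq_hz: float, jitter_s: float, amplitude: float = 1.0) -> float:
    """Worst-case amplitude error from clock jitter on a sine wave,
    in dB relative to full scale: err ~= 2*pi*f*A*dt."""
    err = 2 * math.pi * freq_hz * amplitude * jitter_s
    return 20 * math.log10(err)

# 1 ns of jitter on a full-scale 10 kHz sine produces errors around -84 dBFS,
# far above the quantisation floor of a 24-bit signal (about -144 dBFS).
print(round(jitter_error_db(10_000, 1e-9), 1))  # → -84.0
```

In other words, timing errors measured in nanoseconds are enough to raise the error floor well above what the format itself can resolve – which is why the clock, not just the data, determines the quality of the result.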
Streaming protocols, buffering and re-clocking are all important to the outcome, even necessary for reasonably high fidelity, but the suggestion that these methods completely fix the clock data is simply not true. Some streaming protocols are better than others, and some buffering and re-clocking stages are better than others. None of them are perfect, and with all of them, the quality of the output is affected by the quality of the input.
Some argue that since the re-generated signal is a different signal from the input signal, it is therefore independent of the quality of the input signal. This is wrong because it ignores the impact of noise interference and intermodulation between the signals due to their co-existence in the same system. The argument is also an assumption, unless the people making it have tested it by listening – which we recommend you do for yourself.
See our Architecture page for a description of the several stages required for a computer audio system to deliver true high-end audio quality. It also describes how asynchronous transfer is not a solution, but a trade-off that is only part of the end-to-end solution.
We cannot speak for every competitor, but what seems to set us apart from the bulk of the competition is that we design our music servers from the ground up to minimise noise interference with the digital audio signal, which in turn enables us to do a better job of maximising bandwidth.
Others focus predominantly on noise, and so use slow linear power supplies and filters that compromise transmission bandwidth. Typically, low noise combined with compromised bandwidth results in pleasant sounds, but boring music.
In the Oladra project, we found that making the best trade-off between reducing noise and increasing bandwidth yields more musical results. That is why we focus on designing noise out from the ground up: it dramatically reduces the need to make those trade-offs, enabling us to achieve the best of both worlds. See our Noise & Bandwidth page for an explanation of the issues.
There is a musically critical difference between just getting nice sounds, and the explosive urgency and excitement of being right there with the artists. This is what the Antipodes Oladra technology is all about, and what sets Antipodes music servers apart for lovers of music.
This question is more about the solution architecture than it is about differences in connection performance. Ethernet is used between a server stage and a player stage. USB is used between a player stage and a re-clocker stage. S/PDIF, AES3 and I2S are used between a re-clocker stage and a DAC stage.
Please read our Architecture page for a full explanation of this.