Technology FAQ

Binary data and packet-switched networks can move a file across chaotic networks and reassemble it perfectly at its destination. But a music signal is two-dimensional: amplitude and time. We hear sound when the amplitude of a signal changes, and the rate at which it changes determines the pitch. The data in a music file represents only the amplitude dimension of the analog music signal. The file includes an instruction about the rate at which the samples should be played, but the data network is not designed to deliver the packets with accurate timing. Getting the time dimension right is the job of the music server.
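To make the two dimensions concrete, here is a minimal sketch (with illustrative values, not taken from any real file format) of the point above: the file carries only amplitudes plus a nominal sample rate, and the time axis must be reconstructed locally rather than recovered from packet arrival times.

```python
# Hypothetical example: a music file stores the amplitude dimension plus a
# sample-rate instruction. The network delivers packets with arbitrary timing,
# so the playback schedule below has to be recreated at the receiving end.
SAMPLE_RATE_HZ = 44_100  # the "rate" instruction stored in the file header

amplitudes = [0.0, 0.5, 1.0, 0.5, 0.0]  # amplitude dimension (from the file)

# The ideal playback time of sample n is n / sample_rate -- the time dimension
# exists only as this locally reconstructed schedule.
ideal_times_s = [n / SAMPLE_RATE_HZ for n in range(len(amplitudes))]

print(ideal_times_s[1])  # spacing between samples: roughly 22.7 microseconds
```

Any jitter in realising that schedule is exactly the timing distortion the rest of this answer is about.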

The DAC chip needs to process a square wave that is accurate in both dimensions if it is to accurately replicate the original recording. Any distortion in the time dimension distorts the analog signal the DAC chip produces. The role of the music server is to receive the file and turn it into a digital audio signal that is accurate in both the amplitude and time dimensions. A basic computing device can do a reasonable job of getting the timing dimension right; a high-quality music server can do a much better job.
Streaming protocols, buffering/re-clocking, and galvanic isolation all matter to the outcome, and are even necessary for reasonably high fidelity, but to suggest that these methods completely fix the clock data is simply not realistic. Real-world issues intervene: noise interference, bandwidth constraints, imperfect electronic parts, and the imperfect performance of actual circuits, as with any other audio circuitry.

For example, a high quality music server solution will deal with timing in stages. In the early stages the file is written into memory at the same time that it is read out of memory to go to the next stage. This involves a computer that is also running server and/or player apps, plus potentially many other services at the same time. Achieving high-precision timing at this stage is not practical, and it takes more than just a high quality reference clock. The process has to achieve very high bandwidth and very low noise to avoid obscuring the timing data, and this is not practical in a high-power computing device. See our Noise & Bandwidth page for a detailed explanation of this.
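The write-while-read stage described above is essentially a buffer that decouples irregular network arrival from steady consumption. A minimal sketch, assuming a simple ring-buffer abstraction (the class and names here are illustrative, not Antipodes' implementation):

```python
from collections import deque

# Illustrative buffering stage: packets are written into memory with irregular
# network timing while samples are simultaneously read out at a steady,
# locally clocked rate for the next stage.
class RingBuffer:
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def write(self, samples):
        # Network side: bursty, unpredictable arrival times.
        self.buf.extend(samples)

    def read(self, n):
        # Playback side: consumed at a steady rate set by a local clock.
        return [self.buf.popleft() for _ in range(min(n, len(self.buf)))]

rb = RingBuffer(capacity=8)
rb.write([1, 2, 3, 4])   # a burst of packets arrives
print(rb.read(2))        # steady consumption -> [1, 2]
```

The buffer removes gross network timing chaos, but the precision of the output timing is now limited by the local clock and by noise in the machine doing the reading, which is the point made above.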

High-precision reclocking occurs in later stages, after the file has been turned into a synchronous digital audio signal, and the process typically uses a PLL (Phase Locked Loop). A PLL compares the timing of the incoming digital audio signal with a reference clock to measure the signal's error in the time dimension. The error is inverted and fed back into the signal to correct the timing error. As with any real electronic circuit, this can be done well, but it can never be done perfectly. Getting the timing closer to perfection involves better design, better parts and higher cost.
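The feedback principle can be sketched numerically. This is a simplified first-order loop, not a model of any real PLL circuit; the gain value and units are illustrative assumptions:

```python
# Illustrative feedback loop: the timing error between the incoming signal
# and the reference clock is inverted and fed back, so the signal's period
# converges toward the reference -- closer and closer, but never exactly.
REF_PERIOD = 1.0   # reference clock period (arbitrary units)
GAIN = 0.5         # loop gain; real designs trade correction speed for stability

period = 1.2       # incoming signal arrives with a timing error
for _ in range(20):
    error = period - REF_PERIOD  # measure the error in the time dimension
    period -= GAIN * error       # inverted error fed back as the correction
```

After each pass the residual error shrinks by the loop gain, which mirrors the point above: better design and parts push the error down further, but a real circuit never reaches zero.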

While it is true that a DAC can receive an asynchronous signal off the network and play it, getting better performance from digital audio means the job of achieving high-precision timing needs to be done more comprehensively.

See our Architecture page for a description of how, for a computer audio system to deliver true high-end audio quality, several stages are required. It also describes how asynchronous transfer is not a solution, but a trade-off that is only part of the end-to-end solution.
We cannot speak for every competitor, but what seems to set us apart from the bulk of the competition is that we design our music servers from the ground up to minimize noise interference with the digital audio signal, which in turn enables us to do a better job of maximising bandwidth.

Others focus predominantly on noise, and so use slow linear power supplies and filters that compromise transmission bandwidth. Typically, low noise combined with compromised bandwidth results in pleasant sounds, but boring music.

In the Oladra project, we found that making the best trade-off between reducing noise and increasing bandwidth yields more musical results. This is why we focus on designing noise out from the ground up: it dramatically reduces the need to make those trade-offs and enables us to achieve the best of both worlds. See our Noise & Bandwidth page for an explanation of the issues.

There is a musically critical difference between just getting nice sounds, and the explosive urgency and excitement of being right there with the artists. This is what the Antipodes Oladra technology is all about, and what sets Antipodes music servers apart for lovers of music.
This question is more about the solution architecture than about differences in connection performance. Ethernet is used between a server stage and a player stage. USB is used between a player stage and a re-clocker stage. S/PDIF, AES3 and I2S are used between a re-clocker stage and a DAC stage.

Please read our Architecture page for a full explanation of this.