TECHNOLOGY

OVERVIEW

This page explains how music servers and music streamers work, and how we approach design, using the terminology shown in the following diagram.

Simplified Process Diagram
  1. Server Applications. Server applications manage Internet Streaming Services and music files on Local Storage, and when a file is to be played, the server application streams the music file to the Player Application.
  2. Player Applications. A Player application converts a streamed music file into a digital audio signal and transmits it to your DAC.

A lot of insights are provided and organised under the Topics Menu. It is a long read, but some of you might find it useful to understand more about music server/streamers, the way we approach things, and the design choices we have made.

THE CHALLENGE

A music signal describes how the amplitude of the signal changes over time, and it can fully replicate a sound. If either the amplitude dimension or the time dimension is distorted, then the sound we hear is distorted.

A digital audio file represents the amplitude information using a binary numbering system. Ignoring errors in the recording, this makes it easy for almost any computer system to transmit the amplitude information to your DAC without information losses. But the timing accuracy has to be generated by the playback system, and the information must enter the DAC chip with perfect timing, or the time dimension is distorted and the audio we hear will be distorted. This is much harder than it sounds when you are starting with a stored music file or an internet stream, and it is why you need a high-quality digital audio source to get high-end sound quality from streaming services and music files.

Jitter Example

The purpose of a high-end music server is to generate, regenerate and reclock the signal, sometimes more than once, to provide a better-timed signal to the DAC. But none of the regeneration or reclocking steps will ever be perfect, and so digital audio will always have some level of distortion. The same can be said of analog audio, but the nature of the distortion is fundamentally different.
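To make the timing point concrete, here is a minimal Python sketch (purely illustrative, not Antipodes code) that compares a perfectly clocked 1 kHz tone with the same samples converted on a clock that carries a small random timing error:

```python
import numpy as np

fs = 96_000          # sample rate (Hz)
f = 1_000            # test tone (Hz)
n = np.arange(4096)

# Ideal sample instants vs. instants disturbed by random clock jitter
t_ideal = n / fs
jitter_rms = 1e-9    # 1 ns RMS timing error -- an illustrative value only
t_jittered = t_ideal + np.random.normal(0.0, jitter_rms, size=n.shape)

# The amplitude values (the bits) are identical; only the moment each one
# is converted differs, and that alone distorts the reconstructed waveform.
ideal = np.sin(2 * np.pi * f * t_ideal)
jittered = np.sin(2 * np.pi * f * t_jittered)

error = jittered - ideal
snr_db = 10 * np.log10(np.mean(ideal ** 2) / np.mean(error ** 2))
print(f"Distortion floor caused purely by timing error: {snr_db:.1f} dB below the signal")
```

The error floor rises as either the timing error or the signal frequency increases, which is why the quality of the clocking in the playback chain matters so much.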

ELECTRONIC DESIGN

Dancing is a surrender to euphoria, and it is quite amazing that music can do that to us. The urge to dance does not come from our cognitive evaluation of sound quality. It comes from the visceral emotions we feel when we are captured by the music. But this concept is so mercurial that many audiophiles do not trust themselves to make decisions on equipment this way. Even reviewers spend more of their words on describing the sound than on how the music makes them feel. Some even use a machine to measure the sound for them.

Our design approach is focused on experimentation and trusting our emotional reactions to each experiment. From this we develop the insights and theories that drive the next round of experiments. You will notice that this is the scientific method, except that we do not insist on objective, repeatable measurement, because to do that would lead us to miss the point.

Our theories remain just our current theories and we do not allow them to become beliefs, because technology keeps evolving. At each major step forward in technology, the best hardware design may be different from the last one. For that reason we don’t draw any lines in the sand on music server/streamer design, but we can tell you about what we do now, given the choices available with current technology.

Our free-thinking approach often causes us to be outliers, or mavericks, using technologies that others don’t, and rejecting mantras held by many in the industry. We prefer to live or die by how well we serve the music rather than by serving up a technology story.

There are three key ways we do things a bit differently.

1. MACRO-DESIGN

We use a cascade approach, as illustrated by the image below, where the files and streams come in at the top, are improved progressively through each stage, and exit at the bottom. Optimising each step requires a different hardware design for that step.

Detailed Process Diagram

Step 1 runs the Server app and does the best job when it is a computer with relatively high power (but not too high) and with a lot of RAM. It is the heavy-lifting stage and achieves a lot, but the power needed makes it relatively noisy (from an electronic perspective, not an acoustic one). In a car cleaning analogy, this stage is where you water-blast the big bits of dirt off, but the result is not yet acceptable.

Step 2 runs the Player app and this is where the essential neutrality of the sound is determined. To achieve this we use only a moderately powerful computer, because we need to get the electronic noise interference down to much lower levels than what comes out of Step 1. In the car cleaning analogy, this is where you attend to the detail with a sponge and soapy water, and at this stage the output is pretty good. Note that in the entry models K21 and K22 we run the Server app and Player app together in a moderately powered device. This allows us to get a very musical result, but not with the same layers of insight as the K50 and Oladra, which use a high-power computer for Step 1 and a separate moderate-power computer for Step 2.

Step 3 provides isolation and clocking of the signal using relatively powerful micro-processor resources. In the car cleaning analogy, this is where you wax and polish to get a perfect finish, and the output quality is now exceptional. In the Oladra, K50 and K22, we complete this step separately for the asynchronous USB output and for the synchronous outputs (S/PDIF, AES3 & I2S).

Step 4 is the final clocking stage and needs to be completed in the DAC, as close to the DAC chip as possible. In the car cleaning analogy, you drive it into the showroom and give it a quick final polish to get rid of anything picked up along the way. DACs do not typically use high-power re-clocking inside the DAC, so they benefit greatly from Step 3 being completed in the music server/streamer. Some DAC manufacturers do provide high-power re-clocking in a separate case, and arguably this performs Step 3 of the process. However, the best interface into such a unit is usually USB, so you end up performing Step 3 in the music server/streamer and then again in the separate re-clocker, which also converts from USB to a synchronous signal going to the DAC.
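As a purely illustrative summary of the cascade (the stage names and the Signal object below are ours, not Antipodes firmware), the four steps can be read as a simple pipeline in which each stage hands a progressively better-conditioned signal to the next:

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    source: str
    stages_applied: list = field(default_factory=list)

def server_stage(sig):       # Step 1: high-power heavy lifting
    sig.stages_applied.append("server: fetch and manage the stream or file")
    return sig

def player_stage(sig):       # Step 2: moderate power, much lower noise
    sig.stages_applied.append("player: render the file into a digital audio signal")
    return sig

def reclock_stage(sig):      # Step 3: isolation and re-clocking of the output
    sig.stages_applied.append("re-clocker: isolate and re-time the output signal")
    return sig

def dac_input_stage(sig):    # Step 4: final re-clock, as close to the DAC chip as possible
    sig.stages_applied.append("dac: final re-clock at the DAC chip")
    return sig

signal = Signal("internet stream or local file")
for stage in (server_stage, player_stage, reclock_stage, dac_input_stage):
    signal = stage(signal)

print("\n".join(signal.stages_applied))
```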

This macro-design is not typically used by other manufacturers. The most common approach is to use a single powerful computer for both Server apps and Player apps. The performance of the Player stage is then compromised because a high-power computer is used. Detail retrieval is not so obviously affected, but the naturalness of timbre, musical expression, and the natural ease and flow of the music all suffer.

We also differ in the signal interface used between Step 2 and Step 3. Most use a PCIe interface, whereas we find that older interfaces are more musically engaging. Similarly, we far prefer the musical experience of using enterprise-grade SATA SSDs to using PCIe/NVMe. In our experience, newer technology in some areas seems to always improve the music, and we grab it with both hands. But in certain other areas newer technology seems to always rob the sound of its authenticity, so we stick with the technology that continues to serve the music better. We get that some customers do not feel comfortable unless they have the latest technology, but we favour a better music experience over chasing that objective.

These two departures from the conventional wisdom are big reasons why many audiophiles and reviewers find that our products dig further into the emotional message in the music.

One way to generalise it is that using too much power in certain places and newer signal interfaces in certain places tends to emphasise the outlines or edges of the sounds, at the expense of the body of the sound. We found that this is because fast transients at any frequency, including bass notes, are being smeared and so the transients grab the attention of the ear/brain. The effect is to accentuate transient details and even to make the sound more exciting in the short term. But the unnaturalness of timbre gets in the way of the ear/brain engaging fully with the musical content, and we find the perceived sound becomes mechanical and sterile. Because the transients are smeared they also become relentless and fatiguing over time.

We believe our approach achieves a better balance between the outlines and the body of sounds, and because you are not being distracted by accentuated transients, you will perceive greater saturation of tonal colour with Antipodes products. Not every audiophile is going to agree with our preferences, and they may get a lot of enjoyment from the type of sound delivered by other products. We are never going to argue with what people like. The purpose of our statements here is to describe where our objectives lie, and why that leads us to do some things that don't necessarily conform to the conventional wisdom.

2. MOTHERBOARD TUNING

We don't subscribe to the myth that you can do digital badly and just fix it later in a single step. So, instead of relying on add-on cards, we put a lot of emphasis on tuning the active elements on our premium-brand enterprise/medical-grade motherboards, as if they were members of an orchestra, to reduce the total noise floor to very low levels without constraining bandwidth. We use firmware-level tweaks to reduce noise and shift the frequency peaks of the noise. Just shifting the frequency peaks can achieve a significant reduction in the noise floor by eliminating the peaks created where multiple noise sources coincide.
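The benefit of shifting noise peaks apart can be shown numerically. In the sketch below (illustrative values only, not a model of our boards), two noise sources sharing the same frequency stack into one higher peak, while moving one of them to a nearby frequency leaves two lower peaks:

```python
import numpy as np

fs = 1_000_000                       # analysis sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)       # 10 ms window -> 100 Hz bin spacing

def tone(freq_hz, amplitude=0.001):
    """A small sinusoidal 'noise' component from one on-board source."""
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

def worst_peak_db(signal):
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    return 20 * np.log10(spectrum.max())

same_freq = tone(150_000) + tone(150_000)   # two sources sharing one frequency
shifted   = tone(150_000) + tone(163_000)   # one source moved to a nearby frequency

print(f"Sources at the same frequency: worst peak {worst_peak_db(same_freq):6.1f} dB")
print(f"One source shifted:            worst peak {worst_peak_db(shifted):6.1f} dB")
```

The total noise energy is unchanged, but the worst-case peak drops by about 6 dB, which is the kind of effect that shifting noise frequencies aims to achieve.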

While reducing noise is widely accepted as important, wide bandwidth matters just as much. If transmission bandwidth is constrained, then the square wave cannot be kept square. A square wave needs to switch between a 0 and a 1 instantaneously to get the time dimension right, which is what makes speed, rise-time and bandwidth so important. Our motherboard tuning deals with reducing noise, shifting noise to frequencies where it is more benign, and maximising bandwidth, all simultaneously.
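The link between bandwidth and edge speed can also be illustrated with a short sketch (illustrative numbers only): a square wave rebuilt from a limited set of its Fourier harmonics shows a progressively slower rising edge as the available bandwidth shrinks.

```python
import numpy as np

fs = 100_000_000                  # analysis sample rate (Hz)
f0 = 10_000                       # fundamental of the square wave (Hz)
t = np.arange(0, 2 / f0, 1 / fs)  # two full periods

def bandlimited_square(bandwidth_hz):
    """Sum the odd Fourier harmonics of a unit square wave up to bandwidth_hz."""
    wave = np.zeros_like(t)
    k = 1
    while k * f0 <= bandwidth_hz:
        wave += (4 / np.pi) * np.sin(2 * np.pi * k * f0 * t) / k
        k += 2
    return wave

def rise_time(wave):
    """10%-90% rise time of the first rising edge."""
    edge = wave[: len(wave) // 4]            # the first half period holds the rising edge
    lo, hi = 0.1 * wave.max(), 0.9 * wave.max()
    return (np.argmax(edge > hi) - np.argmax(edge > lo)) / fs

for bw in (50_000, 500_000, 5_000_000):
    print(f"bandwidth {bw / 1e6:5.2f} MHz -> 10-90% rise time {rise_time(bandlimited_square(bw)) * 1e9:7.1f} ns")
```

A narrower bandwidth means slower, smeared edges, and since the edge is what carries the timing information, constrained bandwidth distorts the time dimension even when every bit arrives intact.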

This approach is another key reason why Antipodes music servers sound natural and excel at revealing the emotional content in music.

3. POWER SUPPLY DESIGN

We believe we were the first commercial supplier of music server/streamers to use linear power supplies instead of switch-mode power supplies. It is an audiophile mantra that linear power supplies are better than switch-mode power supplies. But we find ourselves being the mavericks again. Advances in switch-mode technology have been much greater than in linear power supplies, and in the digital world the tables have turned. Provided the design is excellent and the parameters of the active filter are well judged, everything about digital works better with the superior speed of a switch-mode power supply. The issue of switching noise is readily dealt with by placing it at a very high frequency, and by using high-quality active filtering and careful physical layout of the internal electronics.

Our power supply design cascades three different switch-mode topologies that are designed to work together to produce their combined magic. The combination of topologies used is a key to the way Antipodes music server/streamers achieve natural timbres. The result is superior timbral accuracy, cleaner transients, and greater ease and flow than with a linear power supply.

The armchair critics didn't like the fact that we adopted switch mode technology, but that does not bother us, as any audition of the music-making ability of our music server/streamers quickly makes their concerns irrelevant. Some quite vocal critics of our approach now own Oladras and we are all happy about that.

VIBRATION DESIGN

Vibration treatment is important in any high-end audio system, and any serious system is placed in a well-engineered rack. The same should apply to the equipment cases. They should be just as well engineered to deal with vibration, and we take that very seriously in every Antipodes model.

Case-Work

Our cases are CNC-machined exclusively from thick alloy plate. This contrasts with most cases, which involve folding/bending of wafer-thin metal and require clinch nuts and additional internal corner brackets to allow assembly. Thick plate is more inert, but there are other benefits. Every panel begins as a thick alloy plate, enabling us to do things like machining an 'L'-shaped end on a side plate, allowing us to screw the side plate directly to the front plate from the inside of the case. Similarly, the front, side and back plates are thick enough for the top and bottom plates to be screwed directly into the side and back plates, and into a rebated section of the front plate. This completely eliminates the need for flimsy corner brackets or clinch nuts that affect the fit and can come loose over time to become a major source of vibration. We simply screw the six plates together to create the outer casing, and all internal dividers are also CNC-machined from thick alloy plate. Every case part is CNC-machined to a tolerance of a small fraction of a millimetre, so the cases assemble and fit together like a Swiss watch. All of these features combine to minimise component vibration, and allow us to strike the optimum balance between damping and the speed of release of vibration energy.

For the Oladra we avoid the need for separate front, top and side panels by machining those four sides from 100mm-thick alloy plate, and this also increases the thickness of the surfaces the bottom and back plates are screwed to. The result is a much more solid three-panel case instead of the KALA Series' six-panel case, and the hefty solid top damps the bottom and back plates.

We also put a lot of design effort into the footers we use. The footers include a special polymer that is specifically designed for vibration control, and the very detailed specifications provided by the manufacturer of this material allow us to calculate precisely the thickness to be used in each instance, so that the pressure applied to the polymer is always exactly what is needed for optimum vibration control performance.
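As a hedged illustration of the kind of arithmetic involved (the mass, footer count and polymer rating below are invented for the example, not Antipodes figures), the load each footer carries and the polymer contact area together set the pressure on the polymer; the polymer's compression data then guides the thickness chosen:

```python
# Illustrative only: invented numbers, not Antipodes specifications.
mass_kg = 16.0             # mass of the unit resting on its footers
footers = 4                # number of footers sharing the load
target_pressure_kpa = 50   # pressure at which the (hypothetical) polymer damps best
g = 9.81                   # gravitational acceleration (m/s^2)

force_per_footer_n = mass_kg * g / footers

# pressure = force / area  ->  area = force / pressure
required_area_m2 = force_per_footer_n / (target_pressure_kpa * 1_000)

print(f"Load per footer: {force_per_footer_n:.1f} N")
print(f"Polymer contact area needed: {required_area_m2 * 10_000:.1f} cm^2 per footer")
```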

THE MODELS

The process maps below summarise the inputs and outputs for all of the models, including the input-to-output flows. The maps illustrate that a decision about how to connect to your DAC is not about which connection is best; it is about which functions you want performed in the Antipodes as opposed to in the DAC. Some DAC manufacturers will insist their Ethernet input is best, but what they are really saying is that they want the player role performed in their DAC rather than in your desktop/laptop. We agree, but when using an Antipodes music server/streamer you are likely to find the opposite to be true, and the Ethernet connection will not be as good as the re-clocked outputs.

Direct Streaming and Network Streaming outputs use only the Server functionality, and bypass any separate Player and Re-clocker in your Antipodes.

The USB outputs use the Server and Player functionality, as well as the USB isolator/re-generator/re-clocker in your Antipodes.

The S/PDIF, AES3 and I2S outputs use the Server and Player functionality, as well as the synchronous-outputs isolator/re-clocker in your Antipodes.

When you use less functionality in the music server/streamer through your choice of connection, you transfer electronically noisy functions to the DAC environment.
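The connection choices above can be summarised as a small lookup (the labels are ours, for illustration only); whatever is not performed in the Antipodes is left to the DAC side of the link:

```python
# Hypothetical labels summarising which Antipodes stages each connection uses.
STAGES_USED = {
    "Direct/Network Streaming (Ethernet)": ("Server",),
    "USB": ("Server", "Player", "USB isolator/re-clocker"),
    "S/PDIF / AES3 / I2S": ("Server", "Player", "Synchronous isolator/re-clocker"),
}

for connection, stages in STAGES_USED.items():
    print(f"{connection:38} -> {' + '.join(stages)}")
```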

In our experience, performing the processor-intensive functions in the music server/streamer delivers superior sound quality. The playing field is more even when the DAC manufacturer provides a Player/Re-clocker unit separate from the DAC unit, and in that case you should experiment to find the best combination for your equipment rather than make any assumptions.

OLADRA

Oladra & K50 Process Map

K50

Oladra & K50 Process Map

K41

K41 Process Map

K22

K22 Process Map

K21

K21 Process Map