Category Archives: Tech

Harmonic Content, Bass and Energy


Most of the sounds we hear are made up of many different frequencies all vibrating together at the same time. The energy in a wave depends on its amplitude and frequency: the higher the amplitude, the more energy, and the higher the frequency, the more energy. With sound, amplitude is related to loudness. The amplitude part of this makes intuitive sense; the frequency part does too, but it is less obvious.

Consider a musical instrument playing a sound. Since energy depends on amplitude and frequency, if an instrument puts equal energy into all the frequencies it emits, then the higher frequencies must have smaller amplitudes. Real instruments don't put exactly equal energy into every frequency they emit, but this holds roughly. If you do a spectrum analysis, they are loudest at or near the fundamental (lowest) frequency, and their amplitude drops with frequency, typically around 6 dB per octave. That is, every doubling of the frequency roughly halves the amplitude.
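To make the 6 dB per octave figure concrete, here's a tiny Python sketch (my own illustration, not a measurement) that computes the amplitude each frequency component needs in order to carry equal energy, assuming energy is proportional to (amplitude × frequency)²:

```python
import math

def amplitude_for_equal_energy(f, ref_freq=100.0, ref_amp=1.0):
    """Amplitude a component at frequency f needs in order to carry
    the same energy as the reference, assuming energy ~ (amplitude * frequency)^2."""
    return ref_amp * ref_freq / f

for f in [100, 200, 400, 800]:
    a = amplitude_for_equal_energy(f)
    print(f"{f:4d} Hz  amplitude {a:.3f}  ({20 * math.log10(a):+.1f} dB)")
```

Each doubling of frequency halves the amplitude: 0 dB, -6 dB, -12 dB, -18 dB.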

For example, here is amplitude vs. frequency for a high quality orchestral recording:

This graph shows amplitude dropping as frequency increases. Since energy is based on amplitude and frequency, this means roughly constant energy across the spectrum (all frequencies).

This implies that low frequencies are responsible for most of the amplitude in a musical waveform. So, if you look at a typical musical waveform, it looks like a big slow bass wave with ripples on it. Those ripples are the higher frequencies which have lower amplitudes. Further below I have an example picture.

Audio Linearity

Audio devices are not perfectly linear. They are usually designed to have the best linearity for low level signals, and as the signal amplitude approaches the maximum extremes they can become less linear. This is generally true with analog devices like speakers and amplifiers, and to a lesser extent with digital devices like DACs.

For example, consider a test signal like 19 and 20 kHz played simultaneously. If you encode this signal at a high level just below clipping, it’s not uncommon for DACs to produce more distortion than they do for the same signal encoded at a lower level like -12 dB. I’ve seen much smaller level changes, like a 1 dB reduction in level giving a 30 dB reduction in distortion! The same can be true for amplifiers.

Furthermore, the lower the level of a sound, the fewer bits remain to encode it. 16-bit audio refers to a full scale signal; a signal at -36 dB has only 10 bits to encode it, because the 6 most significant bits are all zero. Because the high frequencies are usually at lower levels, they are encoded with fewer bits, which means lower resolution. The Redbook CD standard has a provision for this called pre-emphasis: boost the high frequencies before digital encoding, then cut them after decoding. It is no longer used because it reduces high frequency headroom, and because most recordings are now made at 24 bits and dithered when converted to 16-bit.
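As a rough sketch of the arithmetic (each bit of a linear PCM word is worth about 6.02 dB of dynamic range):

```python
def effective_bits(db_below_full_scale, total_bits=16):
    """Approximate bits available to encode a signal recorded this many
    dB below full scale, assuming ~6.02 dB of dynamic range per bit."""
    return total_bits - round(db_below_full_scale / 6.02)

print(effective_bits(0))    # full scale: all 16 bits
print(effective_bits(36))   # -36 dB: only 10 bits remain
```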

The Importance of Bass Response

One insight from the above facts is that bass response is more important than we might realize. At low frequencies (say 40 Hz), the lowest level of distortion that trained listeners can detect is around 5%. But at high frequencies (say, 2 kHz), that threshold can be as low as 0.5%.

So one could ask: who cares if an audio device isn't perfectly linear? Because of the energy spectrum of music, the highest amplitudes, the ones that approach the non-linear region, are usually in the bass, and we're roughly 10 times less sensitive to distortion in the bass, so we won't hear it.

But this view is incorrect. It is based on a faulty intuition. The musical signal is not a bunch of frequencies propagating independently; it is a single wave with all those frequencies superimposed. The high frequencies ride as a ripple on the bass wave. If the bass wave has high amplitude, approaching the non-linear region of a device, it carries that low-amplitude ripple along with it, forcing even those low-amplitude signals into the non-linear region.

A picture’s worth 1,000 words, so here's what I'm talking about: a snippet from a musical waveform. The areas marked in red are the midrange & treble, which are lower in amplitude and would normally be centered around zero, but riding on top of the bass wave has forced them toward the extreme positive and negative ranges:
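Here's a small simulation of the same effect (a synthetic signal and a generic tanh soft-clipper standing in for a real device's non-linearity; the levels and frequencies are made up for illustration). A low-level 2 kHz "ripple" passes the clipper almost untouched on its own, but riding on a large 50 Hz bass wave it lands in the curved part of the transfer function:

```python
import math

def soft_clip(x):
    # Generic non-linearity: nearly linear for small |x|, compresses near +/-1
    return math.tanh(x)

sr = 44100
bass   = [0.90 * math.sin(2 * math.pi * 50 * n / sr) for n in range(1000)]
treble = [0.05 * math.sin(2 * math.pi * 2000 * n / sr) for n in range(1000)]

# Treble alone stays in the clipper's linear region: output ~ input
err_alone = max(abs(soft_clip(t) - t) for t in treble)

# Riding on the bass wave, the same treble is pushed into the curved region
err_riding = max(abs(soft_clip(b + t) - (soft_clip(b) + t)) for b, t in zip(bass, treble))

print(f"error alone:  {err_alone:.6f}")
print(f"error riding: {err_riding:.6f}")  # hundreds of times larger
```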

Speaker Example

This reminds me of a practical example. Decades ago, I owned a pair of Polk Audio 10B speakers. They had two 6.5″ midrange drivers, a 1″ dome tweeter, and a 10″ tuned passive radiator. The midrange drivers produced the bass and midrange. As you turned up the volume playing music with significant bass, at some point you started hearing distortion in the midrange. This is the point where the bass energy drives the 6.5″ drivers' excursion near its limits, where their response goes non-linear. All the frequencies they produce are more or less equally affected by this distortion, but our hearing is more sensitive in the higher frequencies, so that's where we hear it first.

Obviously, if you turn down the volume, the distortion goes away. However, if you use EQ or a tone control to turn down the bass, the same thing happens – the distortion goes away. Here the midrange frequencies are just as loud as before, but they’re perfectly clear because the distortion was caused by the larger amplitude bass wave forcing the driver to non-linear excursion.

Other Applications: Headphones

The best quality dynamic headphones have < 1% distortion through the midrange and treble, but distortion increases at low frequencies, typically reaching 5% or more by 20 Hz. The best planar magnetic headphones have < 1% distortion through the entire audible range, even down to 20 Hz and lower.

Most people think it doesn’t matter that dynamic headphones have higher bass distortion, because we can’t easily hear distortion in the bass. But remember that the mids and treble are just a ripple riding on the bass wave, and most headphones have a single full-range driver. If you listen at low levels, it doesn’t matter. But as you turn up the volume, their bass distortion will leak into the mids and treble and become audible.

Thus, low bass distortion is more important in a speaker or headphone than it might at first seem.

Other Applications: amplifiers and DACs

Amplifiers and DACs have a similar issue, though to a lesser extent. This concept could apply here as well – especially when considering the dynamic range compression that is so often applied to music these days.

Consider a digital recording that is made with dynamic range compression and leveled too hot, so it has inter-sample overs or clipping. Sadly, this describes most modern rock/pop recordings, though it's less common in jazz and classical.

Most of the energy in the musical waveform is in the bass, so if you attenuate the bass you reduce the overall level by almost the same amount. This will entirely fix inter-sample overs, though it can't fix clipping. Remember the 19+20 kHz example above, showing that distortion increases as amplitude approaches full scale? With most music, attenuating the bass will fix that too, since the higher frequencies are usually riding on that bass wave. For example, this explains how a subsonic filter may improve the midrange and treble response of LP playback.
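A quick synthetic illustration (the signal levels are made up, not from a real recording): a hot mix whose digital peak exceeds full scale comes back under full scale when only the bass is attenuated, leaving the treble untouched:

```python
import math

sr = 44100
n_samples = sr // 10
bass   = [0.95 * math.sin(2 * math.pi * 60 * n / sr) for n in range(n_samples)]
treble = [0.15 * math.sin(2 * math.pi * 3000 * n / sr) for n in range(n_samples)]

peak = max(abs(b + t) for b, t in zip(bass, treble))
print(f"original peak: {peak:.2f}")        # above 1.0: clipping / overs

g = 10 ** (-3 / 20)                        # attenuate only the bass by 3 dB
peak_cut = max(abs(g * b + t) for b, t in zip(bass, treble))
print(f"bass -3 dB peak: {peak_cut:.2f}")  # back under full scale
```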

VueScan Multi-Crop – How To

Continued from a few years ago … VueScan is a great scanning app, but it has a UI that only an engineer could love. Multi-Crop is a feature where you scan several things at once on the scanner deck and have them saved as separate files. While this feature is very useful, it's hard to figure out how it works. Here, I describe how I use it to scan 35mm film with my Epson V600, whose tray loads 12 frames at a time.

First, load your media in the scanner. For this, I use the 35mm film negative tray and load up 2 parallel strips each having 6 photos.

Next, turn on the scanner, start VueScan, and make the right settings:

  • input mode: Transparency
  • input media: Color Negative
  • input bits per pixel: 24 bit RGB
  • input batch scan: List
    • This will make it scan each cropped sub-image and save as a separate file
  • input scan resolution: 3200 dpi (anything higher is overkill for most film negatives)
  • JPEG, quality 95 (anything higher is overkill for most film negatives)
  • output default folder: make sure it exists
    • else VueScan won’t save the pictures and it won’t give you any error message

Now hit the Preview button. It will scan then show the slides all together. With this displayed you can set up the Color tab. Read the fine print along the edge of the film to get the vendor, brand and type.

  • negative vendor: Kodak (or whatever)
  • negative brand: GOLD (or whatever)
  • negative type: 400 Gen 5 (or whatever)

You will set the rest of the image settings later, for each individual picture. Now go to the Crop tab. For 35mm film in my scanner’s tray, VueScan’s 35mm film setting works and is simpler.

  • crop size: 35mm Film
  • multi crop: 35mm Film

A grid will appear over the scan preview. It shows a dotted line rectangle over each of the slides. It won’t be perfectly lined up, but as long as it’s reasonably close it’s OK because you’ll fix that next.

Note: if it’s not even close, or if you haven’t filled the entire tray and you don’t want to waste time scanning blanks, you can select CUSTOM and do your own layout (rows, columns, sizes).

  • Note the blue <- and -> arrows at the bottom right of the screen, to the right of the image rotate buttons.
    • These move the focus forward and back across the sub-images of the multi-crop.
  • Click the left <- button until it disappears (so you can only shift right -> not left), to shift focus to the 1st image.
  • When focused on each image, VueScan remembers the settings you make for that image.
  • Click the + button (lower right of UI) to zoom so this single image fills the screen.
  • In the image preview, click inside the image near a corner, then drag a rectangle to mark the crop area to contain the image.
  • If needed, click the rotate buttons (lower right of UI) to ensure the image is right-side-up.
  • Go to the Color tab and ensure the exposure is right. The key controls for this are:
    • Black point: what % of the pixels are mapped to black (lowest intensity).
      • I find that 0.1% usually works well.
    • White point: what % of the pixels are mapped to white (highest intensity).
      • I find that 0.5% usually works well.
    • Curve low & high: set the shape of the contrast curve
      • Low and high are the 25th and 75th percentiles
      • If you set them to .25 and .75, you get a 1:1 linear mapping
      • To increase mid-intensity contrast, at the expense of losing detail in the darkest and lightest parts of the image: increase the low, decrease the high
  • Images on old film often look washed out. To fix this, go to the Filter tab and check one of these options:
    • Restore colors
    • Restore fading
  • After you’re done with this image, click the blue next -> arrow to focus on the next image.
  • Repeat the above steps to set the crop box for each image.
  • When you’re done, click the previous <- arrow to review each of the images, ensuring your settings are correct.
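VueScan doesn't publish its internals, but the black point and white point settings behave like a standard histogram stretch. Here's a hypothetical sketch of what those percentages mean (the function and its details are my own illustration, not VueScan's code):

```python
def stretch(pixels, black_pct=0.1, white_pct=0.5):
    """Map the darkest black_pct% of pixels to 0 and the brightest
    white_pct% to 255, linearly stretching everything in between."""
    s = sorted(pixels)
    n = len(s)
    lo = s[int(n * black_pct / 100)]            # black point level
    hi = s[int(n * (100 - white_pct) / 100) - 1]  # white point level
    span = max(hi - lo, 1)
    return [min(255, max(0, round(255 * (p - lo) / span))) for p in pixels]

print(stretch([10, 20, 30, 40, 200]))  # → [0, 85, 170, 255, 255]
```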

Now click the “Scan” button at the bottom left of the screen. VueScan will scan each image, which can take about 3 minutes per image (over 30 minutes for a deck of 12). This is fully automatic so you can walk away and come back later to check the results.

Corda Soul & WM8741 DAC Filters

The Corda Soul uses the WM8741 DAC chip. Actually, it uses 2 of them, each in mono mode, which gives slightly better performance. This chip has 5 different reconstruction filters. The Corda Soul has a switch to select one of 2 of them. Here I describe these filters, show some measurements I made, and from this make an educated guess as to which 2 of these filters the Corda Soul uses at various sampling rates. At higher sampling frequencies the digital filter should make less difference; more on that here. My measurements and observations below are consistent with that.

Note: this DAC chip has a mode called OSR for oversampling. The Soul uses this chip in OSR high, which means it always oversamples the digital signal at the highest rate possible, to 192 or 176.4 kHz, whichever is an integer multiple of the source. For example, 44.1k is oversampled 4x to 176.4k and 96k is oversampled 2x to 192k. The function of the digital filters depends on this OSR mode.
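The rule for picking the oversampled rate is simple: choose whichever of 176.4 kHz or 192 kHz is an integer multiple of the source rate. A sketch:

```python
def osr_high_rate(source_rate):
    """Target rate in OSR high mode: 176,400 or 192,000 Hz,
    whichever is an integer multiple of the source rate."""
    for target in (176400, 192000):
        if target % source_rate == 0:
            return target
    raise ValueError(f"unsupported source rate: {source_rate}")

for r in (44100, 48000, 88200, 96000):
    print(r, "->", osr_high_rate(r))  # 44.1k family -> 176.4k, 48k family -> 192k
```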

Summary: the filters have 3 key attributes:

  • Frequency Response: how fast (sharp) or slow they attenuate high frequencies.
  • Frequency Response: the filter stop-band – is it above, at, or below Nyquist.
  • Phase: whether the filter is linear (constant group delay, FIR) or minimum phase (variable group delay, IIR).

This table summarizes key filter attributes – taken from the WM8741 data sheet linked above, for 44.1k / 48k sampling in OSR high mode.

Name | Rate | Phase | Passband (Hz) | Stopband (Hz) | Nyquist (dB) | Group Delay
1 | sharp | lin [min?] | 20,021 / 21,792 | 24,079 / 26,208 | -6.0 | 243
2 | slow | min [lin?] | 17,993 / 19,584 | 23,020 / 25,056 | -28.0 | 78
3 | sharp | lin | 20,021 / 21,792 | 24,079 / 26,208 | -6.4 | 37
4 | slow | min | 18,390 / 20,016 | 22,050 / 24,000 | -116.1 | 947
5 | slow | lin | 18,390 / 20,016 | 22,050 / 24,000 | -122.6 | 8

Note: at 44.1 kHz sampling, filters 1 and 3 are almost identical. The first is called “soft knee” while the third is called “brickwall”. Yet strangely, their frequency response is the same (despite names that suggest otherwise) and the only difference is that 1 has more group delay. This suggests that the phase labels for filters 1 and 2 might have been mistakenly reversed in the WM8741 data sheet. Brickwall is usually the standard sharp filter closest to the ideal mathematical response. But not here: being only -6 dB at Nyquist, it can allow ultrasonic noise to leak into the passband.

Filters 4 and 5 are labeled as apodizing. From what I read, this means their stop-band is a little below Nyquist. Why set the stop-band below Nyquist? Theoretically it is unnecessary. The reason given is that rejecting the band just below Nyquist is supposed to be an extra-safe way of avoiding any distortion introduced by the AD conversion during recording. In this chip, the stop-band of the apodizing filters is at Nyquist rather than below it, but that is still lower than the other filters, whose stop-bands are above Nyquist (which is an improper implementation).

Based on the above chart, filter 5 is the most correct implementation because it is the only filter that is fully attenuated by Nyquist, with flat phase response (minimal group delay). However, filter 5 rolls off a little early to achieve this. If you want flat response to 20 kHz, filter 3 is the best choice, though it does so at the price of allowing some noise above Nyquist. If one wanted a minimum phase alternative, the best choice would be filter 4. Both 1 and 4 are minimum phase, but 1 is not fully attenuated at Nyquist. Filter 4 is. However, to achieve this, filter 4 sacrifices FR with an earlier roll off.

For comparison, here’s how these filters behave at 96k / 88.2k sampling (also in OSR high mode).

Name | Rate | Phase | Passband (Hz) | Stopband (Hz) | Nyquist (dB) | Group Delay
1 | sharp | lin [min] | 19,968 / 18,346 | 48,000 / 44,100 | -120.4 | 117
2 | slow | min [lin] | 19,968 / 18,346 | 48,000 / 44,100 | -120.8 | 9

At these higher sampling rates, all the filters are fully attenuated by Nyquist (or lower). That’s a good thing and Wolfson should have done this at the lower rates too. Also, filters 1, 2, 4 and 5 (all but 3) take advantage of the higher sampling frequency to have a wide transition band with gentler slope. This sacrifices response above 20k (which we don’t need) to minimize passband distortion, particularly phase shift. The numbers reflect this, as they all have flatter (better) phase response than filter 3.

As with the first table, filters 1 and 2 look like a misprint; both have the same transition and stop bands. But all else equal, linear phase should have less phase shift, not more. This is probably a typo because, as you’ll see below, the impulse response for filter 1 is asymmetric and for filter 2 is symmetric, and a symmetric impulse response usually implies linear phase.

Based on this data, filters 2, 3 or 5 are the most correct implementations. Filter 3 has flat FR up to 40 kHz, but this extra octave comes at the price of a narrower transition band having more phase shift and group delay. Filters 2 and 5 have flatter phase response but start rolling off around 20 kHz to get a wider transition band. If one wanted a minimum phase alternative, filters 1 or 4 are the only choices and either would be fine.

I measured the Soul’s output with the digital filter switch in each mode, sharp and slow, using 2 test signals: a frequency sweep and a square wave. From this, I measured frequency and phase response, group delay and impulse response. Charts/graphs are below, in the appendix.

Here’s the square wave: first sharp, then slow:

Overall, at 44.1 kHz I observed 3 key differences:

  1. In sharp mode, frequency response and group delay are both flat to 20 kHz.
  2. In slow mode, frequency response starts to roll off and group delay starts to rise between 18 and 19 kHz.
  3. In slow mode, the square wave shows no ripple before a transition, and ripples with greater amplitude and longer duration after a transition.

The curves are similar when comparing the sharp & slow filters at 48k sampling.

From these observations I conclude that for 44.1k and 48k signals, the Soul uses filters 3 and 4 in sharp and slow modes, respectively. Here’s why:

  • Because FR is flat to 20 kHz in sharp mode, it must be using filter 1 or 3.
  • Because GD is flat in sharp mode, it must be using filter 3.
  • Because FR rolls off just above 18k in slow mode, it must be using filter 2, 4 or 5.
  • Because GD rises in slow mode, it must be using filter 4.


I recorded these graphs using my sound card, an ESI Juli@. This is not a great setup, but it’s the best I can do without dedicated equipment.

PC USB Audio output –> Corda Soul USB input –> Corda Soul analog output –> sound card analog input


  • Configured the sound card for analog balanced input & output (flip its daughter board from unbalanced to balanced).
  • Cabled from Soul to Juli@, using 3-pin XLR to 1/4″ TRS.
  • On PC:
    • Disable pulseaudio
    • Use Room EQ Wizard (REQW) on PC, in ALSA mode
    • Configure REQW
      • set desired sampling rate (44.1, 48, 88.2, 96)
      • set audio output to USB
      • set audio input to Juli@ analog
    • Configure Corda Soul
      • Select USB audio input
      • Ensure all DSP disabled (knobs at 12:00)
      • Set volume as desired
        • measured at max: 0 dB
        • measured at 12:00: -16 dB (34 clicks down)
    • Use REQW “Measure” function
    • Confirm proper sampling rate light on Corda Soul

Important Note: My measurements depend as much on the Corda Soul as they do on the Juli@ sound card. For example, if the Juli@ rolls off the frequency response faster than the Soul, then I will measure the same FR in both modes of the Soul. And if the Juli@ applies a minimum phase filter that adds phase distortion, then I will measure that phase distortion in both modes of the Soul. This probably explains why the digital filter responses were so similar at 88 and 96 kHz.

Here are FR, phase, GD and impulse plots for all tested sampling rates. Each is sharp top, slow bottom. Observe that at multiples of 44.1k (44.1k and 88.2k), the sharp filter has flat phase response while the slow filter does not. But at multiples of 48k (48k and 96k), both filters have similar non-flat phase response. This is probably due to the Juli@ card. However, the comments below assume the Juli@ card is transparent and all differences are due to the Soul.

In all cases, both filters at all sampling rates:

  • Frequency response: starts to taper at 20 kHz for the widest possible transition band.
  • Impulse response: sharp is symmetric, slow is asymmetric.
  • Group delay: sharp is flatter than slow.
  • At high sampling rates, the difference between the filters becomes immaterial. This is consistent with theory.

44.1 kHz: sharp is filter 3 and slow is filter 4.

  • Sharp FR doesn’t taper until past 20k, so it must be filter 1 or 3.
  • Sharp has flat GD, so it must be filter 3.
  • Slow FR tapers past 19k, so it must be filter 4 or 5.
  • Slow has more GD than sharp, so it must be filter 4.

48 kHz: sharp is filter 3 and slow is filter 4, for the same reasons as above.

88.2 kHz: Sharp is filter 2 and slow is filter 1.

  • Both FR start to taper at 20 kHz, so neither can be filter 3.
  • Both have a stopband at 44,100 Hz (beyond 40k), so neither can be filter 4 or 5.
  • Sharp has flatter phase / less group delay, which is filter 2.

96 kHz: Sharp is filter 2 and slow is filter 1.

  • Both FR start to taper at 20 kHz, so neither can be filter 3.
  • Both have a stopband at 48 kHz (beyond 44k), so neither can be filter 4 or 5.
  • Sharp has flatter phase / less group delay, which is filter 2.

Corda Jazz: Measurements

I own this headphone amp and use it every day at work. It has great sound quality with some unique features. I previously reviewed it and compared with other amps here.

Earlier this year I loaned this amp to Amir to measure for Audio Science Review, here. Amir does a great service to the audiophile community; I’ve met him in person and he’s a good guy with industry experience and a knowledgeable audiophile. However, we are all human with different opinions, and even objective measurements can be misleading.

Take SNR (signal-to-noise ratio) and SINAD (signal to noise and distortion) for example. These are typically measured at a device’s full scale output, as this usually gives the highest number. But with headphone amps, we don’t listen at full volume. Their max output level is around 2-4 Vrms, sometimes more. This is far too loud for average listening; it would be painful or cause hearing damage. We typically listen at average levels around 70 or 80 dB SPL, which, perceptually, most people would describe as medium-loud. Most headphones reach this level with a voltage around 50 mV.

For example, consider the Matrix Audio Element, which Amir recently reviewed. It is one of the best DACs he’s ever measured, with a SINAD of 120 dB. However, its 50 mV SINAD is only 81 dB.

For comparison, the Corda Jazz measured about 87 dB SINAD at full output, and 90 dB at 50 mV output.

This illustrates an important point. We start with 2 devices. One has a SINAD of only 87 dB, which seems low. The other has a SINAD of 120 dB, which is the best he’s ever measured. Objective measurements tell us one is better! However, that is highly misleading because when you measure the output at levels we actually use, the exact opposite happens. The Jazz is actually 9 dB better than the Matrix. That’s a 65% drop in noise & distortion, which is a significant, audible improvement.
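The 65% figure is just the dB difference converted to a linear amplitude ratio:

```python
def db_to_percent_drop(db):
    """Percent reduction in noise+distortion amplitude for a given dB improvement."""
    return (1 - 10 ** (-db / 20)) * 100

# Jazz 90 dB vs Matrix 81 dB at 50 mV: a 9 dB improvement
print(f"{db_to_percent_drop(9):.0f}%")  # → 65%
```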

In short, the max SINAD measurement is correct, but misleading because it describes conditions that nobody actually uses when listening. The 50 mV SINAD is a better measurement because it represents actual listening conditions. But virtually nobody measures this; Amir (much to his credit) is the only person I know of who does. Furthermore, the two numbers don’t track each other: as in the above example, the devices measuring the highest full-scale SINAD often do not measure the highest 50 mV SINAD, which shows how important it is to scrutinize the measurements we make and their relevance to what we hear.

Enough said about this. Next I’ll talk about how the way an amp is designed affects this. If you don’t care about engineering details just skip to the conclusion.

Lesson learned: an amplifier’s SNR or SINAD can be quite different at 50 mV than it is at full output. How does this happen? The conventional amplifier has its internal gain-feedback loop set to whatever fixed gain ratio produces the desired maximum output, and the volume control is a potentiometer (variable resistor) that attenuates this. This “fixed gain with attenuation” means the noise level is relatively constant (based on the gain ratio, which is fixed), so as you turn the volume down, you reduce the SNR and SINAD at the same time.

This is easily seen with the Matrix. Full output is 3.9 V, so 50 mV is 38 dB quieter. And its 81 dB 50 mV SINAD is 39 dB less than 120 dB. What a coincidence: turn the volume down by 38 dB and SINAD drops by 39 dB! They have a virtually perfect 1:1 relationship. Not a coincidence; that’s by design.
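You can check the arithmetic directly:

```python
import math

full_scale_v = 3.9   # Matrix full output
test_v = 0.050       # 50 mV listening level

level_drop = 20 * math.log10(full_scale_v / test_v)
sinad_drop = 120 - 81

print(f"level drop: {level_drop:.1f} dB")  # ~37.8 dB
print(f"SINAD drop: {sinad_drop} dB")      # 39 dB: essentially 1:1
```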

So what’s happening with the Jazz? Its SINAD actually gets better at lower volumes. The Jazz is designed differently from typical amps. It does not use fixed gain with separate attenuation; instead, it uses variable gain to produce the output level you need, obviating any separate attenuator.

The Jazz volume control changes the resistors in its internal gain-feedback loop. At low volumes, it has less gain and more negative feedback (wider bandwidth, lower noise and distortion). As you turn up the volume, you are increasing the gain (reducing negative feedback). [Incidentally, this means it must be inverting, for its gain-feedback loop to have less than unity gain. But its final fixed-gain stage is also inverting, so overall it does not invert.] Finally, this volume control is not a potentiometer; there is no potentiometer in the signal path.

This means the Jazz produces its best sound quality at the low to medium levels we actually use for listening. It also means the Jazz has perfect channel balance at every volume setting. Another observation from Amir’s measurements is that the Jazz is not current limited: it puts out 10x more power into 30 ohms than into 300 ohms.
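That 10x follows directly from Ohm's law for a voltage source: P = V²/R, so a tenth of the load resistance gives ten times the power (illustrative numbers):

```python
v = 3.7                 # volts into either load (amp not current limited)
p_30  = v ** 2 / 30     # watts into 30 ohms
p_300 = v ** 2 / 300    # watts into 300 ohms
print(p_30 / p_300)     # 10x more power into the lower impedance
```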


Amir didn’t like the Jazz in his review, mainly because of its limited output power. One of the limitations of the Jazz’s unique volume control is that the resistors in the gain-feedback loop can only handle limited voltages. If you turn up the volume too high, it produces huge amounts of audible distortion due to input stage voltage clipping. The Jazz maximum output level before the onset of this clipping & distortion is about 3.7 V. That equates to 116 dB SPL with Sennheiser HD-580 and 120 dB on Audeze LCD-2. This is more than loud enough for me. Anyone listening this loud risks damaging his hearing.

In summary, the Jazz is an amp that Amir’s measurements show has perfectly flat frequency response, perfect channel balance at all volume settings, less than 1 ohm output impedance (not current limited), and SINAD among the best he’s ever measured at actual listening levels (50 mV). Yet he doesn’t recommend this amp because of its limited output voltage. At the same time, he does recommend amps like the Matrix, which have higher output power but inferior measurements at the levels we actually listen at. Amir is correct that exceeding an amp’s power limits creates audible distortion, and thus is the most likely way listeners will hear distortion from an amp. However, if the limits are high enough (as with the Jazz), we won’t exceed them.

Put differently: it makes no sense to sacrifice sound quality at the moderate volume levels we actually use, in order to gain more power that we can’t use without damaging our hearing.

Classical Music Streaming: Primephonic & Idagio

The Problem

Streaming classical music has 2 basic problems.

Note: I use the term “classical” in the most general sense, from ancient (pre-renaissance) to modern, including early music, baroque, classical, romantic, etc.


Metadata

ID3 has become the standard metadata format for music, defining fields like title, artist, album, etc. This has an impedance mismatch with classical music. For example, if the Chicago Symphony is playing the Brahms violin concerto with conductor Reiner and soloist Heifetz, who is the artist? Brahms, Chicago Symphony, Reiner or Heifetz? What is the title? Violin Concerto in D Major, Opus 77, Chicago Symphony Live, or some nickname? If you search for this piece on streaming services like Spotify, Tidal, or Amazon, you will find all of the above, each individual recording having different metadata. Exacerbating this problem is the fact that every piece from every composer typically has tens if not hundreds of different recorded performances by different artists. This inconsistency makes it frustrating to find classical music.

Sound Quality

The sonic quality of the recording presents another problem. Most popular music is recorded with terrible sound quality: massive dynamic compression with clipping, and extreme amounts of EQ and other processing. It’s engineered to sound as loud as possible for radio, streaming and listening in noisy environments with crappy earbuds. This makes streaming easier: since the recording was already squashed to death by the studio during production, there’s nothing left for lossy compression to destroy. Sound quality does matter with classical music, however. These recordings are made to a higher standard, with minimal studio processing, preserving dynamics and detail that lossy compression would destroy. This is important to reveal subtle variations in artistry, from how a pianist voices chords, to a cello player’s bowing technique, to a flute player’s tone colors. This makes classical music harder to stream.

So while there is plenty of classical music on standard streaming services, finding the piece you want, and the available recordings, is frustrating if not impossible. And when you finally do find it, listening to it through the streaming service’s lossy compression can be more disappointing than satisfying.

Thus it comes as no surprise that streaming accounts for only about 25% of classical music consumption, compared to 64% for the rest of the market.

The Solution

Even though classical makes up only about 3% of music sales, companies have formed to solve these problems. The 2 most popular are Idagio and Primephonic, and they address both of the above problems. I did not explore Naxos, because my experience owning about 100 of their CD recordings is that their sound quality (with a few notable exceptions) is second rate, and they only stream their own content, making great performances of the past inaccessible.

These classical music streaming services define and populate their own metadata customized for classical music, and they stream at lossless CD quality. This transforms the classical music streaming experience and has the potential to fundamentally change how music lovers experience classical music.

If that last statement sounds over the top, let me explain. With hundreds of composers, each writing hundreds of works, each having hundreds of recordings by different artists, each bringing something new to the artistic expression of the work, there is more classical music than any normal person can listen to in one lifetime. Of course, not all performances, nor all recordings, are equal. So music lovers have relied on reviewers to help sort through all of this. But reviewers and listeners are all people with different opinions. The work or recording a listener is interested in might not have been reviewed. When it has, a listener might find to his consternation that he disagrees with the reviewer. And many other works that a listener doesn’t even know about might be worth consideration. For decades, classical music listeners have relied on reviewers as gatekeepers and guides.

Streaming upends all of this by reducing to zero the marginal cost of the next recording you listen to. Browse the full catalog, using the classical music customized metadata to find works and performances in your area of interest. Take a chance on new works, recordings or artists, that the cost of individual CDs or downloads might have prevented you from listening to. Listen to everything and decide for yourself; the only constraint is your time. And, listen anywhere you are: home, work, in the car or wherever.

Furthermore, these streaming services cost less than a subscription to a classical music magazine like Gramophone or Fanfare. More on costs below.


Idagio is a German company that’s about 4 years old. They are based in Berlin and their service became available in the USA about a year ago (September 2018).

Primephonic is a newcomer; their service started about a year ago (August 2018).

Both companies are staffed by a mix of musicians, musical scholars, agents and software engineers. They believe in what they’re doing and have the domain expertise to do it right.

I found many reviews of Idagio and Primephonic, but most were pretty shallow, as if the reviewers hadn't actually used the services in depth on different devices and in different situations to discover their strengths & weaknesses. Since both services provide a 2-week free trial, I did this myself, during a period that included some business travel, so I got the full experience at home, at work, and on the road. Here is what I learned.

Getting Started

Both services offer a 14-day free trial. Primephonic is the quickest and easiest, since they don’t require a credit card. Just sign up with your email and it’s ready to go. Idagio requires a credit card to sign up for the trial, but they don’t bill anything to it until the 14th day.

Both services also let you sign up with a Facebook or Google account instead of using your email. I don’t do social media and prefer not to link online accounts, so I did not use this option.


Catalog

Their catalogs are roughly the same total size, and similar in coverage: both services had about 75% of the pieces I searched for, from early (pre-Renaissance) music to modern. Where they differ, Primephonic has better coverage of early music and of less well known works and artists, while Idagio has better coverage of baroque through modern classical music. For example, Idagio was missing some Piffaro early music albums (it had only 3 versus Primephonic's 6) and some Joel Frederiksen albums that Primephonic had. Primephonic didn't have Levin's Mozart Requiem performance with Les Violons du Roy, but Idagio did.

Some notable recordings were missing from both catalogs. Neither had anything from Jacqueline du Pré, nor did either have the Hillier Ensemble's Age of Cathedrals (just one of several albums I own that was in neither service's catalog).

Metadata and Search

They both have metadata customized for classical music. You can search by any keyword, from composer to work to group, to album. And the search results are cross-referenced, so if you find a work, for example, you can click on it to see all other works from that composer, or all albums having that work.

I found their metadata doesn't have much information about the album itself. For example, if I search for “Liszt Transcendental Etudes”, they both show a list of albums. If I click on one, say Berezovsky (available in both), it shows me a picture of the album cover and says, “1996 Teldec Classics”. But there is no catalog number or other recording info, not to mention liner notes.

Both Idagio & Primephonic have the album booklets in PDF format for many albums (but not all). Primephonic has them more often than Idagio, and Primephonic makes them available in the mobile app as well as the browser, in contrast to Idagio which makes them available only in the browser. Coverage is gradually increasing with both services.

Primephonic's search may not be quite as robust as Idagio's. I searched for the Brahms Piano Quintet Op. 34 in both. Idagio showed several recordings of it. It did not appear in Primephonic at all, as if they didn't have this popular work in their catalog. When I mentioned this to Primephonic support, they sent me a link to the piece and said they would update their search. So they do indeed have it; it just wasn't coming back in search results. It did come back the next day, so they are listening to customers and actively improving their platform.

Music Discoverability

Despite this Primephonic glitch, their search in the Android app is better than Idagio's. This is best explained by example. Suppose you want to find recordings of Liszt's Transcendental Etudes.

In Primephonic: search for Liszt, tap him in results, and it shows a list of popular works. Tap Show All, but this list is too long to bother scrolling through, and you’re not sure whether it will appear under E for Etudes or T for Transcendental. The app has a Sort By box, enabling you to sort by Opus number, then you scroll to 139. Tap this, and it shows you 83 recordings which you can sort by popularity, A-Z, Z-A, newest, oldest, longest or shortest.

In Idagio: search for Liszt, tap him in results, and it shows 3 tabs: Works, Recordings, Albums. The Works tab has no way to sort or sub-search, it’s unclear how it’s sorted, and the list is too long to scroll, so that’s not helpful. The Recordings tab can sort by Date, Most Popular, or Recently Added, none of which help you find the Transcendental Etudes, so that’s not helpful. The Albums tab can sort by year or alphabetically, so this is not helpful either.

In short: Idagio's Android app lacks sub-search and sort, making it more difficult to find the pieces you're looking for. It's easier to find things in the Primephonic app.

However, Idagio's browser interface does better than their app. Here, when you tap Liszt, Works can be grouped by Keyboard, Secular, Chamber, etc. This makes it easier to find things, but sorting is still only by popularity or alphabet, so it's still not as good as Primephonic.

Applications / Players

Both services are fully functional in a web browser, and in Android and iOS apps that are free to install (not including the subscription price) from the standard app stores. By fully functional I mean you can search the catalog and play music. I ran both services in my browser (Chrome & Firefox on Ubuntu 16 and 18), on my phone (Galaxy Note 4 SM-N910T running LineageOS 16 / Android 9) and on my tablet (Galaxy Tab S SM-T700 running LineageOS 14 / Android 7).

Primephonic audio had brief gaps or glitches every 10 seconds or so when playing from Firefox on my laptop (which makes listening impossible), but this didn’t happen from Chrome on the same laptop, nor did it happen in Firefox on my desktop. So this problem was probably Firefox, not Primephonic. Audio from both apps was seamless on my phone & tablet.

UPDATE: these audio glitches turned out to be caused by Pulseaudio. Idagio streams at lossless CD quality which Pulseaudio handled just fine. Primephonic streams at higher than CD quality which was causing buffer under-runs in Pulseaudio. I reconfigured Pulseaudio to increase audio buffering and this made Primephonic glitch-free at all audio rates up to 192-24.
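For reference, here's the kind of change that fixed it. The option names are standard Pulseaudio daemon.conf settings, but the values shown are just examples of "more buffering", not a prescription:

```shell
# /etc/pulse/daemon.conf -- uncomment and raise these two settings
# (values here are examples; tune to your system):
#   default-fragments = 8
#   default-fragment-size-msec = 25
# Then kill the daemon; it restarts automatically with the new settings:
pulseaudio -k
```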

Idagio is more reliable, with faster, smoother performance in both the browser and the Android app. Primephonic occasionally hung (both the app and the web page) and had to be restarted or reloaded, which Idagio never did. Also, Primephonic had a bug in which the app's streaming quality settings weren't saved: they reverted to the defaults every time I changed them.

UPDATE: as of June 2020, Primephonic has fixed these bugs in their app.

The Primephonic app supports both portrait & landscape mode, which makes it easier to use on my tablet. This is a nice little touch compared to Idagio’s app, which is always in portrait mode, even on the tablet.

Both apps enable you to download tracks or entire albums to your device so you can play them back anytime, even when disconnected. This was great on a cross-country flight. However, neither app supports external SD cards, so whatever you download consumes internal storage. When downloading, Idagio’s app creates an Android notification with a progress bar, and it also indicates in your music library the pending download status. Primephonic’s download is more of a black box – it doesn’t have a notification and you’re never sure exactly when it’s downloading, or when it might finish. But it does mark which tracks or albums in your library are downloaded, when complete.

UPDATE: as of June 2020, Primephonic app downloads give status notifications like Idagio.

Both apps stream smoothly and seamlessly, whether live streaming or playing pre-downloaded content, listening on headphones plugged into the device, or over bluetooth in my car. And my car’s audio next/previous track controls also worked when playing music from the apps on my phone.

Sound Quality

Both support CD quality streaming as FLAC, which uses lossless compression. Listening on my audio system, the sound quality of both services was as good (or bad) as the recordings themselves on CD. To test this, I configured each service to stream at CD quality, found CDs from my collection in each service, played the stream alongside the CD, and quick-switched back and forth: I found them indistinguishable. My audio system is quite transparent, and I can distinguish 320 kbps MP3 from CD in blind listening tests, so this test suggests that each service streams the audio as-is, without processing it.

Primephonic streams at higher than CD quality for titles that support it. Primephonic’s highest audio quality setting uses MPEG4-SLS which streams the lossless raw recording when network bandwidth supports it, and falls back to AAC lossy compression when it doesn’t. As of June 2020, roughly half the content I listen to on Primephonic streams at higher than CD quality. I’ve seen sample rates of 44.1k, 48k, 88.2k, 96k, 176.4k and 192k, so it appears that Primephonic is streaming whatever raw bits the record companies provide, without resampling or converting them.

Both services also support lower quality (lossy compression) streaming to reduce data usage, which is useful for phones. These still offer good sound quality (192-320 kbps) that exceeds most other music streaming services.

Primephonic has settings for different rates on mobile versus Wifi data, which is useful and distinguishes it from Idagio, which just has a single quality setting.

Primephonic has gapless playback, but Idagio does not. Frequently, classical tracks or movements blend right into each other without any break in the music. Without gapless playback, the audio system inserts a break. This could be an important consideration for some listeners.

Data Consumption

I mentioned that both apps can stream audio at true CD quality, yet they also provide lossy compression to save mobile data. This is especially useful because when listening on your phone, you're often in a situation where reference quality audio isn't needed: in the car or another noisy environment, using Bluetooth audio or earbuds plugged into your phone. Even some of the best IEMs and earbuds don't have the reference audio quality of full size headphones or listening rooms. In those situations, CD quality streaming just wastes mobile data, since you can't hear the difference.

I measured the actual data usage by each app when streaming audio over my mobile connection.

Before getting into the differences, here is approximate expected data usage per hour at a few standard music streaming rates:

  • 128 kbps = 1 MB / minute, 60 MB / hour
  • 320 kbps = 2.4 MB / minute, 144 MB / hour
  • CD (44.1 kHz / 16-bit uncompressed) = 1,411 kbps = 10.5 MB / minute, 640 MB / hour
  • CD FLAC (lossless compression) = 6 MB / minute, 400 MB / hour
  • 192-24 (the highest audio rate you’ll likely use)= 9,216 kbps = 69 MB / minute, 4.14 GB / hour
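These figures are straightforward arithmetic: kbps × 1000 / 8 gives bytes per second, and times 3600 gives bytes per hour. A throwaway shell function (a sketch, using awk for the math) reproduces the table:

```shell
# Convert a streaming bit rate in kbps to approximate data use in MB/hour.
kbps_to_mb_per_hour() {
  awk -v k="$1" 'BEGIN { printf "%.0f\n", k * 1000 / 8 * 3600 / 1000000 }'
}

kbps_to_mb_per_hour 320     # 144 MB/hour
kbps_to_mb_per_hour 1411    # 635 MB/hour (uncompressed CD)
kbps_to_mb_per_hour 9216    # 4147 MB/hour, about 4.15 GB (192-24)
```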


Primephonic

Primephonic offers 4 quality settings: Normal (128 kbps), High (256 kbps), Superior (320 kbps), and Full (lossless up to 192-24). It also allows different settings on WiFi versus mobile, which is quite useful.

However, when streaming music in the mobile app, Primephonic consumed about 200 MB per hour regardless of the setting. That is higher than 320 kbps. This is a bug in their Android app that makes it essentially unusable for streaming over mobile.

Update: As of June 2020, Primephonic has fixed this bug.


Idagio

Idagio offers 3 quality settings: Normal (AAC 192 kbps), High (MP3 320 kbps), and Lossless (FLAC, up to 1,411 kbps). This is a single global setting, whether on WiFi or mobile. It also offers a quality setting for downloads: Normal (750 KB per minute, about 128 kbps), High (2.5 MB per minute, about 320 kbps), or Lossless (up to 10 MB per minute, but about 2/3 of that due to lossless compression).

When streaming music, Idagio consumed about 80 MB per hour at Normal and 200 MB per hour at High.

Customer Support

I emailed support for both services with various bug reports & suggestions. Both responded to all my emails, and not robotically but from an actual human who understood my message and gave a courteous, intelligent response. Primephonic was a bit faster, responding in less than 24 hours even on weekends. Idagio took a couple of days to respond, which is still quite good.


Cost

Their costs are similar but not the same. Idagio is simple, with a single service tier: $10 / month. There's no discount for buying a year up front, so it's $120 / year.

Primephonic has tiered service depending on the streaming audio quality. It costs $10 / month for up to 320kbps lossy, and $15 / month for CD quality or higher. Primephonic has discounts for buying a year up front, which costs $100 and $150 respectively.

So, Primephonic can be the same price or more expensive than Idagio, depending on whether you want full CD quality streaming.

Artist Reimbursement

Both services reimburse performers differently from other streaming services, in a way that is better suited to classical music, where track lengths vary tremendously. Reimbursing by track play starts just doesn’t make sense. Instead, they reimburse performers based on the time individual subscribers spend listening to specific tracks. In short, reimbursement is based on time spent, not starts.
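A toy calculation shows why this matters. The rates below are made up purely for illustration: under per-play reimbursement, a 3-minute pop single and a 25-minute symphony movement earn the same; under time-based reimbursement, the symphony earns in proportion to its length:

```shell
# Hypothetical payout rates, chosen so a 3-minute track earns the same either way.
awk 'BEGIN {
  per_play = 0.004          # $ per track start (made-up rate)
  per_min  = per_play / 3   # $ per minute listened (made-up rate)
  printf "per-play: pop $%.4f  symphony $%.4f\n", per_play, per_play
  printf "per-time: pop $%.4f  symphony $%.4f\n", 3 * per_min, 25 * per_min
}'
```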


Conclusions

To say that I've enjoyed these trials would be an understatement. It's wonderful to have such a huge library of classical music at my disposal to listen to wherever I want: at home, at work, in my car, or while traveling. Also, each service has curated lists of music in different areas of interest, which can be a useful exploration guide.

I like early music, so I lean toward Primephonic due to their slightly better coverage, gapless playback, and slightly better music search and discoverability. However, the fact that their Android app always consumed 200 MB / hour when streaming was a show-stopper. And they're more expensive, at least for full CD quality, and their app is a little buggier.

I’m definitely going to subscribe to one of these services, but I still haven’t decided which one. They’re quite similar, each has its minor differences, pro & con, and neither is clearly better. I hope this detailed review has helped you decide whether you want a service like this and which might be best for you.

Getting the Mouse Right on Ubuntu

I recently posted about my Elecom thumb trackball and how to set it up in Ubuntu to auto-configure whenever it is plugged in. Since then I’ve learned a few new things.

The mouse would occasionally stutter, by which I mean briefly stop moving (while I was moving the ball), jump a few pixels, or jump all the way across the screen. Not often enough to make it completely unusable, but enough to be frustrating.

Since the Logitech mouse never did this, I thought it was a problem with the Elecom mouse itself. But it might also be a software issue. So I explored that and learned a couple of new things.

First: both libinput and synaptics were installed and were conflicting with each other. This computer is a desktop with no touchpad, so I removed synaptics and installed libinput:

sudo apt install xserver-xorg-input-libinput
sudo apt purge xserver-xorg-input-synaptics

NOTE: on my laptop, synaptics was called xserver-xorg-input-synaptics-hwe-16.04.

NOTE: I was not able to install xserver-xorg-input-libinput on my Ubuntu 16 laptop. Apparently it has a version conflict:

The following packages have unmet dependencies:
xserver-xorg-input-libinput : Depends: xorg-input-abi-22
Depends: xserver-xorg-core (>= 2:

If I check the versions, this message seems to be true:

ii xserver-xorg-core-hwe-16.04 2:1.19.6-1ubuntu4.1~16.04.2 amd64 Xorg X server - core server

This helped: the mouse still occasionally stuttered, but less frequently. So I looked for other possible software conflicts.

I noticed that running xset with different values, like this command, which should really slow down the mouse:

xset m 1/64 100

sometimes had an effect, sometimes did not. This suggested that evdev and libinput were both trying to control the mouse.

Since I was setting my mouse with xinput, I disabled xset. I considered the opposite, but the Elecom trackball has a very high DPI (750 or 1500) and I couldn't get xset to turn the sensitivity down enough, while xinput does this nicely. Disabling xset is a simple command:

xset m 0 0

You can check what xset is doing with the mouse using the command:

xset q

Look for the section called “Pointer Control”. If acceleration and threshold are both 0, it’s disabled.

Now things got a bit more complex, because I discovered that every time the mouse is unplugged and replugged, the xset pointer settings come back. So I added this xset command to the script that udev fires when the mouse is plugged in.
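For the curious, here's a sketch of what that udev-fired script ends up looking like. The device name, paths, and Accel Speed value are from my setup and will differ on other hardware; treat this as an outline, not a drop-in:

```shell
#!/bin/sh
# Re-apply pointer settings after the trackball is replugged.
# udev runs this outside the X session, so point it at the display first.
export DISPLAY=:0
export XAUTHORITY=/home/me/.Xauthority   # adjust to your user

# Disable xset's pointer acceleration so it can't fight with libinput.
xset m 0 0

# Slow down the high-DPI trackball via libinput's Accel Speed property
# (device name and value are specific to my Elecom trackball).
xinput set-prop "ELECOM TrackBall Mouse" "libinput Accel Speed" -0.5
```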

Next, I found that I needed to ensure libinput controlled the mouse instead of evdev. In directory /usr/share/X11/xorg.conf.d I saw config files for both evdev and libinput. One thing that helped was to rename 90-libinput.conf to 05-libinput.conf so it is read first, before anything else.

Finally, to slow the mouse down I used different libinput settings. I had been using the Coordinate Transform Matrix property with values less than 1.0. This works OK, but causes a problem in some Steam games where the mouse is completely unusable. Setting this property to the identity matrix (its default value) eliminates that problem. This suggested to me that this feature might have a bug. So I use the Accel Speed property instead, setting it to negative values to slow down the mouse. This seems to be smoother and more reliable.

Another point: on my laptop, I disabled horizontal 2-finger scrolling. To my surprise, this was causing the mouse to occasionally jump across the screen when I clicked a button. I have no idea why these settings were related, but they were.

Problem solved: no more mouse stutter!

Bits and Dynamic Range

When digital audio came out, I wondered how the number of bits per sample correlated to the amplitude of waves. I imagined that the total expressible range and the size of the smallest discernible gradation were independent quantities. Since this appeared to be a trade-off, I wondered how anyone decided what was a good balance.

Later I realized this is a false distinction. First: the number of bits per sample determines the size of the smallest gradation. Second: total expressible range is not a “thing” in the digital domain. Third: if the total range is a pie of arbitrary size, dynamic range is the number of slices. The smaller the slices, the bigger the dynamic range.

Regarding the first: to be more precise, bits per sample determines the size of the smallest amplitude gradation, as a fraction of full scale. Put differently: what % of full scale is the smallest amplitude gradation. But full scale is the amplitude of the analog wave, which is determined after D/A conversion, so it’s simply not part of the digital specification.

Amplitude swings back and forth. Half the bits are used for negative, the other for positive, values. Thus 16 bit audio gives 2^16 = 65,536 amplitudes, which is 32,768 for positive and negative each (actually one of the 65,536 values is zero, which leaves an odd number of values to cover the + and – amplitude swings, making them asymmetric by 1 value, which is a negligible difference). Measuring symmetrically from zero, we have 32,768 amplitudes in either direction. So the finest amplitude gradation is 1/32,768 of full scale in either direction, or 1/65,536 of peak-to-peak. 16-bit slices the amplitude pie into 65,536 equal pieces.

Here’s another way to think about this: the first bit gives you 2 values and each additional bit doubles the number of values. Amplitude is measured as voltage, and doubling the voltage is 6 dB. So each bit gives 6 dB of range, and 16 bits gives 96 dB of range. But this emphasizes the total range of amplitude, which can be misleading because what we’re really talking about is the size of the finest gradation.

So let’s follow this line of reasoning but think of it as halving, rather than doubling. We start with some arbitrary amplitude range (defined in the analog domain after the D/A conversion). It can be anything; you can suppose it’s 1 Volt but it doesn’t matter. The first digital bit halves it into 2 bins, and each additional bit doubles the number of bins, slicing each bin to half its size. Each of these halving operations shrinks the size of the bins by 6 dB. So 16 bits gives us a bin size 96 dB smaller than full scale. Put differently, twiddling the least significant bit creates noise 96 dB quieter than full scale.

To check our math, let’s work it backward. For any 2 voltages V1 and V2, the definition of voltage dB is:

20 * log(V1/V2) = dB

So 96 dB means for some ratio R,

20 * log R = 96

where R is the ratio of full scale to the smallest bin. This implies that

R = 10 ^ (96/20) = 63,096

That's almost the 65,536 we expected. The reason it's slightly off is that doubling the voltage is not exactly 6 dB; that's just a convenient approximation. To be more precise:

20 * log 2 = 6.0206

So doubling (or halving) the voltage changes the level by 6.0206 dB. If we use this more precise figure, then 16 bits gives us 96.3296 dB of dynamic range. If we compute:

20 * log R = 96.3296

We get

R = 10 ^ (96.3296 / 20) = 65,536

When the math works, it’s always a nice sanity check.
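The whole derivation fits in a few lines of awk, if you want to check it yourself (note that awk's log() is the natural log, hence the division by log(10)):

```shell
# Dynamic range in dB for n-bit audio: 20 * log10(2^n) = n * 6.0206 dB.
awk 'BEGIN {
  db_per_bit = 20 * log(2) / log(10)            # 6.0206 dB per bit
  printf "16-bit: %.4f dB\n", 16 * db_per_bit   # 96.3296
  printf "24-bit: %.4f dB\n", 24 * db_per_bit   # 144.4944
}'
```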


The term dynamic range implies how “big” the signal can be. But it is both more precise and more intuitive to think of dynamic range as the opposite: the size of the smallest amplitude gradation or “bin” relative to full scale. Put differently: dynamic range is the ratio of full scale to the smallest amplitude bin.

With 16 bits, that smallest bin is 1/65,536 of full scale, which is 96 dB quieter. With 16-bit amplitudes, if you randomly wiggle the least significant bit, you create noise that is 96 dB below full scale.

With 24 bits, that smallest bin is 1/16,777,216 of full scale, which is 144 dB quieter. With 24-bit amplitudes, if you randomly wiggle the least significant bit, you create noise that is 144 dB below full scale.

Typically, the least significant bit is randomized with dither, which costs about half a bit (roughly 3 dB) of dynamic range: 16-bit gives 93 dB and 24-bit gives 141 dB.

Practical Dynamic Range

Virtually nothing we record, from music to explosions, requires more than 93 dB of dynamic range, so why does anyone use 24-bit recording? With more bits, you slice the amplitude pie into a larger number of smaller pieces, which gives more fine-grained amplitude resolution–and, consequently, a larger range of amplitudes to play with. This can be useful during live recording, when you aren’t sure exactly how high peak levels will be. More bits gives you the freedom to set levels conservatively low, so peaks won’t overload, but without losing resolution.

Another reason that 24 bits can be useful is related to the frequency spectrum of musical energy. Most music has its maximum energy at or near its lowest frequencies, and from the lower midrange upward, energy usually drops by around 6 dB / octave. By the time you get to the top octave, the level is down 30 dB or more, so you've lost at least 5 bits of resolution, sometimes more. You might think that 16 - 5 = 11 bits is enough. But since the overall recording level was set conservatively below full scale to begin with, you didn't have 16 bits to start from; you might have only 8 bits of resolution for these high frequencies, which is only 48 dB. Recording in 24-bit gives you 8 more bits, which solves the problem, restoring 16 bits of resolution in the top octave.
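The bits-to-dB bookkeeping above is easy to reproduce: a level drop of N dB costs N / 6.0206 bits of resolution. A quick sketch, again using awk:

```shell
# How many bits of resolution does a level drop of N dB cost?
db_to_bits() {
  awk -v db="$1" 'BEGIN { printf "%.1f\n", db / (20 * log(2) / log(10)) }'
}

db_to_bits 30   # 5.0 bits lost to a ~30 dB spectral rolloff
db_to_bits 48   # 8.0 bits corresponds to only 48 dB of range
```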

Back in the late 80s there was another solution to this, part of the CD Red Book standard, called “emphasis”. An EQ boosted the top octave by about 10 dB before digital encoding, giving nearly 2 bits more resolution; after decoding, the player cut it by the same amount. In principle, it's Dolby B for digital audio. However, it's never used anymore, because modern ADCs and DACs are so much better than they used to be.

However, once the recording is completed, you know what the peak recorded level was. You can shift the amplitude of the entire recording up so the peak sits at 0 dB (or something close, like -0.1 dB). As long as the recording has less than 93 dB of dynamic range, this transforms it to 16-bit without any loss of information and without resorting to dynamic range compression.

In the extremely rare case that the recording had more than 93 dB of dynamic range, you can keep it in 24-bit, or you can apply a slight amount of dynamic range compression while in the 24-bit domain, to shrink it to 93 dB before transforming it. There are sound engineering reasons to use compression in this situation, even for purist audiophiles!

To put this into perspective: 93 dB of dynamic range is beyond what most people can pragmatically enjoy. Consider: a really quiet listening room has an ambient noise level around 30 dB SPL. If you listened to a recording with 93 dB of dynamic range, and you wanted to hear the quietest parts, the loud parts would peak at 93 + 30 = 123 dB SPL. That is so loud as to be painful; the maximum safe exposure is only a couple of seconds. And whether your speakers or amplifier can do this at all, let alone without distortion, is a whole ‘nuther question. You’d have to apply some amount of dynamic range compression simply to make such a recording listenable.

Mechanical Keyboards: Why the Buckling Spring Rules!

I have quite a bit of experience with high quality keyboards. I learned to type on a manual typewriter and later graduated to electrics. My first computer was a TRS-80 CoCo, which had a cheap chiclet keyboard, up there with Apple's laptops as one of the worst keyboards I've ever used. We soon upgraded it to one that had buckling spring mechanical key switches. It typed like an electric typewriter, which was awesome. Largely due to hours spent on that computer, I was typing 90+ wpm as a teenager (less common in the 1980s than it is now). In college I had a PC-XT clone, a Leading Edge Model D, which had another excellent buckling spring keyboard. All through the 1980s, just about every PC had this kind of keyboard. It defined “the sound of work” in offices across the USA.

Over the 90s, cheap bubble dome keyboards became more common, until by the turn of the century they were ubiquitous and it became nigh impossible to find a mechanical buckling spring keyboard. In 1999 I special-ordered one from Unicomp, and I still have it today; it works perfectly, though it has an outdated PS/2 connector.

Later I discovered the ergonomic joy of ten-key-less 87-key keyboards. Chop off the numpad you hardly ever used, and the mouse (trackball in my case) gets closer. The rest of the keyboard has the exact same layout (including Home, End, and arrows) as the classic IBM 101, so your hands and fingers know where to go without thinking. But it's more comfortable because you don't have to stretch your right arm as far to reach the mouse.

Problem: nobody makes an 87-key buckling spring keyboard. I own and have extensively used a few 87-key keyboards with other mechanical switches: Cherry, Outemu, Zorro. They're way better than bubble dome switches (no double or missed strikes), but not as nice as buckling springs. Why? Two key differences:

  • Crisp: a buckling spring has a crisp snap when the key actuates. You can type with confidence, with nary a doubt whether a light keystroke registered.
  • Force: a buckling spring requires a bit more force to actuate than other mechanical switches, which can trigger accidentally when you are just resting your fingers on them.

It sounds small, but these two points make all the difference in the world. There is nothing like a buckling spring keyboard. Cherry and other mechanical switches are better than bubble domes, but pale in comparison.

Virtualbox and /usr/lib: FIXED

I ran Virtualbox today and nothing happened, as if I had not clicked its icon. There was no error message but I figured it was failing so I ran it from the command line and got the following error:

VirtualBox: Error -610 in supR3HardenedMainInitRuntime!
VirtualBox: dlopen("/usr/lib/virtualbox/",) failed: <NULL>
VirtualBox: Tip! It may help to reinstall VirtualBox.

When I Googled this, I found all kinds of ideas, none of which worked. I checked the package versions; there were no mismatches. Finally I found an obscure thing that fixed it:

My /usr/lib directory was set to root:staff, mode 775. I had been working on Nixnote, an open source Evernote client, and needed to install some stuff there, so I changed the ownership so I wouldn’t have to sudo to copy new files there.

Well, it turns out VirtualBox won't run unless /usr/lib is owned root:root. Don't ask me why; I don't know, and it doesn't make sense. VirtualBox doesn't tell you this, nor is it mentioned in their docs. But setting it back fixed the problem.
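If you've hit the same thing, restoring the default ownership (and the usual non-group-writable mode) is two commands:

```shell
# Restore the ownership and permissions VirtualBox expects on /usr/lib.
sudo chown root:root /usr/lib
sudo chmod 755 /usr/lib
```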

LineageOS: Home Screen Setup

LineageOS is the open source version of Android. Combined with OpenGApps, it has all the functionality of Android, without any of the bloat-ware, crapplets or customized skins that manufacturers and carriers add to Android. And, it’s supported by a community of developers so you can get the latest version of Android on your device long after the manufacturer abandons it to planned obsolescence.

The LineageOS home screen launcher is called Trebuchet. It is more configurable than most stock ROMs. There’s a particular setup I like that combines a dark background (which saves battery on OLED screens) with big icons. Setting this up is simple:

First, long-touch a blank spot on the home screen. The home setup screen appears. Your icons will be different from shown, but the bottom area will be the same with the WALLPAPERS, triple-dot button, and WIDGETS:


Touch WALLPAPERS in the lower left. A screen like below will appear asking you how you want to pick wallpaper. Pick the one circled in RED:


A screen like below appears asking which wallpaper you want. Pick the one shown below:


A screen appears asking where you want to use this wallpaper. I usually pick both home screen and lock screen:


Next, we’ll tell the home screen to use large icons. So go back to the home screen and long-touch a blank spot again to get the home setup screen. Then touch the triple-dot button:


The following menu will appear. You can set a bunch of stuff here; icon size is at the bottom: