Monthly Archives: November 2018

Getting the Mouse Right on Ubuntu

I recently posted about my Elecom thumb trackball and how to set it up in Ubuntu to auto-configure whenever it is plugged in. Since then I’ve learned a few new things.

The mouse would occasionally stutter: while I was moving the ball, the pointer would briefly stop moving, jump a few pixels, or jump all the way across the screen. Not often enough to make it completely unusable, but enough to be frustrating.

Since the Logitech mouse never did this, I thought it was a problem with the Elecom mouse itself. But it might also be a software issue. So I explored that and learned a couple of new things.

First: both libinput and synaptics were installed and conflicting with each other. This computer is a desktop without a touchpad, so I removed synaptics and installed libinput:

sudo apt install xserver-xorg-input-libinput
sudo apt purge xserver-xorg-input-synaptics

NOTE: on my laptop, synaptics was called xserver-xorg-input-synaptics-hwe-16.04.

NOTE: I was not able to install xserver-xorg-input-libinput on my Ubuntu 16 laptop. Apparently it has a version conflict:

The following packages have unmet dependencies:
xserver-xorg-input-libinput : Depends: xorg-input-abi-22
Depends: xserver-xorg-core (>= 2:

If I check the versions, this message seems to be true:

ii xserver-xorg-core-hwe-16.04 2:1.19.6-1ubuntu4.1~16.04.2 amd64 Xorg X server - core server

This helped: the mouse still occasionally stuttered, but less frequently. So I looked for other possible software conflicts.

I noticed that running xset with different values, like this command (which should drastically slow down the mouse):

xset m 1/64 100

sometimes had an effect, sometimes did not. This suggested that evdev and libinput were both trying to control the mouse.

Since I was setting my mouse with xinput, I disabled xset. I considered the opposite, but the Elecom trackball has very high DPI (750 or 1500) and I couldn’t get xset to turn down the mouse sensitivity. Yet xinput does this nicely. Disabling xset is a simple command:

xset m 0 0

You can check what xset is doing with the mouse using the command:

xset q

Look for the section called “Pointer Control”. If acceleration and threshold are both 0, it’s disabled.

Now things got a bit more complex, because I discovered that every time the mouse is unplugged and replugged, the xset settings come back. So I added this xset command to the script that udev fires when the mouse is plugged in.

Next, I found that I needed to ensure libinput controlled the mouse instead of evdev. In directory /usr/share/X11/xorg.conf.d I saw config files for both evdev and libinput. One thing that helped was to rename 90-libinput.conf to 05-libinput.conf so it is read first, before anything else.

Finally, to slow the mouse down I used different libinput settings. I had been using the Coordinate Transform Matrix property with values less than 1.0. This works OK, but causes a problem in some Steam games where the mouse is completely unusable. Setting this property to the identity matrix (its default value) eliminates that problem. This suggested to me that this feature might have a bug. So I use the Accel Speed property instead, setting it to negative values to slow down the mouse. This seems to be smoother and more reliable.

Another point: on my laptop, I disabled horizontal 2-finger scrolling. To my surprise, this was causing the mouse to occasionally jump across the screen when I clicked a button. I have no idea why these settings were related, but they were.

Problem solved: no more mouse stutter!

How Loud Does it Get?

Magnepan 3.6/R specs don’t give efficiency, but they give voltage sensitivity. That’s 86 dB @ 500 Hz @ 2.83 V. From this we can determine efficiency. 500 Hz is carried by the midrange panel which has 4.2 Ohm impedance, so 2.83 V drives 2.83/4.2 = 0.674 A of current, which makes 2.83 * 0.674 = 1.907 Watts.

So, 1.9 W of power makes 86 dB SPL at 1 meter. That’s lowish efficiency for a speaker.

The Adcom 5800 is rated at 400 W continuous in each channel with 2.1 dB of headroom. 400 W is 10 * log (400 / 1.9) = 23 dB louder than 1.9 W, which makes 86+23 = 109 dB SPL in each speaker. 2 speakers is twice the power which is +3 dB making 112 dB SPL from both speakers. Plus 2.1 for headroom makes 114 dB SPL peak.
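This chain of arithmetic is easy to fumble, so here’s a quick Python sanity check using only the figures quoted above (speaker sensitivity and amp ratings):

```python
import math

# Speaker: Magnepan 3.6/R midrange panel, figures from the spec sheet above
sensitivity_db = 86.0                    # dB SPL @ 1 m @ 2.83 V
impedance_ohm = 4.2
ref_power_w = 2.83 ** 2 / impedance_ohm  # ~1.907 W delivered at 2.83 V

# Amp: Adcom 5800, 400 W continuous per channel with 2.1 dB headroom
amp_power_w = 400.0
headroom_db = 2.1

gain_db = 10 * math.log10(amp_power_w / ref_power_w)  # ~23 dB louder than 1.907 W
per_speaker = sensitivity_db + gain_db                # ~109 dB SPL from one speaker
both_speakers = per_speaker + 3                       # ~112 dB SPL, double the power
peak = both_speakers + headroom_db                    # ~114 dB SPL peak
```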

I’m ignoring distance mainly because (A) the dispersion is line source, not spherical, so it decays less with distance, (B) it’s in a room, so some energy is not lost but reflected back, and (C) the listening position is close, only about 2 meters from the speakers.

Subjectively, I can say this is VERY loud. Over the 26 years I’ve owned this amp I can count on the fingers of 1 hand the number of times I’ve seen its yellow 1% distortion warning lights briefly flicker during a transient peak.

NOTE: I tested this last night by holding an SPL meter while listening to a test CD. A full scale (0 dB) digital signal, passing through my preamp (Oppo HA-1) with volume at 0 dB, measures 104 dB SPL at the listening position. The power amp (Adcom 5800) warning lights do not even flicker. The preamp goes up to +6 dB output, which would be 110 dB SPL. That’s pretty close to the theoretical figure, within 2 dB.

That 2.1 dB of headroom means peak power is 10^(2.1/10) = 1.62 times higher than continuous, making 400 * 1.62 = 648 Watts.

Also we can sanity check the amp’s overall efficiency. The 5800’s max continuous power draw is rated at 1800 VA (Watts). While delivering 800 W to a pair of speakers, that’s 800/1800 = 44% efficient. It’s actually less efficient at lower volumes because it’s biased to run in symmetric class A up to about 10 Watts output. The max theoretical efficiency of class A is 25%. It’s rated to draw about 250 W when idle.
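The headroom and efficiency figures work out like this in Python (ratings as quoted above):

```python
continuous_w = 400.0             # rated continuous power per channel
headroom_db = 2.1

# 2.1 dB of headroom above 400 W continuous
peak_w = continuous_w * 10 ** (headroom_db / 10)  # ~648 W per channel

draw_va = 1800.0                 # rated max continuous power draw
output_w = 2 * continuous_w      # both channels driven at rated power
efficiency = output_w / draw_va  # ~0.44, i.e. 44% efficient at full output
```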

Next question: if the Adcom 5800 operates in symmetric class A up to 10 W, how loud can it play these speakers while in class A, before transitioning to class AB?

From above, the speakers play at 86 dB SPL when consuming 1.907 watts. 10 watts is 7.2 dB louder, plus 86 = 93.2 dB SPL. That’s per side, so +3 dB makes 96.2 dB SPL. That’s pretty loud. But most likely, the transition from A to AB depends on voltage, not current, so the power level will vary depending on the speaker impedance.

Double-check the answer: 400 watts is 16 dB louder than 10, so add 16 dB to 96.2 and you get 112.2 dB. The math checks: same answer as above.
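Here’s the class A limit and the cross-check as a short Python sketch:

```python
import math

ref_power_w = 2.83 ** 2 / 4.2  # ~1.907 W gives 86 dB SPL per speaker
class_a_limit_w = 10.0         # the amp runs class A up to about 10 W

spl_one = 86 + 10 * math.log10(class_a_limit_w / ref_power_w)  # ~93.2 dB SPL
spl_both = spl_one + 3                                         # ~96.2 dB SPL

# Cross-check: 400 W is 16 dB above 10 W, so max should land near 112 dB
spl_max = spl_both + 10 * math.log10(400 / class_a_limit_w)    # ~112.2 dB SPL
```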

Bits and Dynamic Range

When digital audio came out I wondered how the number of bits per sample correlated to the amplitude of waves. I imagined that the total expressible range was independent of the size of the smallest discernible gradation. Since this appeared to be a trade-off, I wondered how anyone decided what was a good balance.

Later I realized this is a false distinction. First: the number of bits per sample determines the size of the smallest gradation. Second: total expressible range is not a “thing” in the digital domain. Third: if the total range is a pie of arbitrary size, dynamic range is the number of slices. The smaller the slices, the bigger the dynamic range.

Regarding the first: to be more precise, bits per sample determines the size of the smallest amplitude gradation, as a fraction of full scale. Put differently: what % of full scale is the smallest amplitude gradation. But full scale is the amplitude of the analog wave, which is determined after D/A conversion, so it’s simply not part of the digital specification.

Amplitude swings back and forth. Half the bits are used for negative, the other for positive, values. Thus 16 bit audio gives 2^16 = 65,536 amplitudes, which is 32,768 for positive and negative each (actually one of the 65,536 values is zero, which leaves an odd number of values to cover the + and – amplitude swings, making them asymmetric by 1 value, which is a negligible difference). Measuring symmetrically from zero, we have 32,768 amplitudes in either direction. So the finest amplitude gradation is 1/32,768 of full scale in either direction, or 1/65,536 of peak-to-peak. 16-bit slices the amplitude pie into 65,536 equal pieces.

Here’s another way to think about this: the first bit gives you 2 values and each additional bit doubles the number of values. Amplitude is measured as voltage, and doubling the voltage is 6 dB. So each bit gives 6 dB of range, and 16 bits gives 96 dB of range. But this emphasizes the total range of amplitude, which can be misleading because what we’re really talking about is the size of the finest gradation.

So let’s follow this line of reasoning but think of it as halving, rather than doubling. We start with some arbitrary amplitude range (defined in the analog domain after the D/A conversion). It can be anything; you can suppose it’s 1 Volt but it doesn’t matter. The first digital bit halves it into 2 bins, and each additional bit doubles the number of bins, slicing each bin to half its size. Each of these halving operations shrinks the size of the bins by 6 dB. So 16 bits gives us a bin size 96 dB smaller than full scale. Put differently, twiddling the least significant bit creates noise 96 dB quieter than full scale.
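The bin-counting above is trivial to express in Python:

```python
import math

bits = 16
levels = 2 ** bits               # 65,536 amplitude steps, peak-to-peak
per_side = levels // 2           # 32,768 steps in each direction from zero

db_per_bit = 20 * math.log10(2)  # ~6.0206 dB: each bit doubles the voltage range
dynamic_range = bits * db_per_bit  # ~96.33 dB for 16 bits
```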

To check our math, let’s work it backward. For any 2 voltages V1 and V2, the definition of voltage dB is:

20 * log(V1/V2) = dB

So 96 dB means for some ratio R,

20 * log R = 96

where R is the ratio of full scale to the smallest bin. This implies that

R = 10 ^ (96/20) = 63,096

That’s almost the 65,536 we expected. The reason it’s slightly off is that doubling the voltage is not exactly 6 dB. That’s just a convenient approximation. To be more precise:

20 * log 2 = 6.0206

So doubling (or halving) the voltage changes the level by 6.0206 dB. If we use this more precise figure, then 16 bits gives us 96.3296 dB of dynamic range. If we compute:

20 * log R = 96.3296

We get

R = 10 ^ (96.3296 / 20) = 65,536

When the math works, it’s always a nice sanity check.
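Here is that backward check in Python, showing the approximate and exact figures side by side:

```python
import math

R_approx = 10 ** (96 / 20)          # ~63,096: the "6 dB per bit" shortcut
exact_db = 16 * 20 * math.log10(2)  # 96.3296 dB with the precise figure
R_exact = 10 ** (exact_db / 20)     # 65,536 = 2**16, exactly
```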


The term dynamic range implies how “big” the signal can be. But it is both more precise and more intuitive to imagine the concept of dynamic range as the opposite: the size of the smallest amplitude gradation or “bin”, relative to full scale. Put differently: dynamic range is defined as the ratio of full scale, to the smallest amplitude bin.

With 16 bits, that smallest bin is 1/65,536 of full scale, which is 96 dB quieter. With 16-bit amplitudes, if you randomly wiggle the least significant bit, you create noise that is 96 dB below full scale.

With 24 bits, that smallest bin is 1/16,777,216 of full scale, which is 144 dB quieter. With 24-bit amplitudes, if you randomly wiggle the least significant bit, you create noise that is 144 dB below full scale.

Typically, the least significant bit is randomized with dither, so we get half a bit less dynamic range, so for 16-bit we get 93 dB and 24-bit we get 141 dB.

Practical Dynamic Range

Virtually nothing we record, from music to explosions, requires more than 93 dB of dynamic range, so why does anyone use 24-bit recording? With more bits, you slice the amplitude pie into a larger number of smaller pieces, which gives more fine-grained amplitude resolution–and, consequently, a larger range of amplitudes to play with. This can be useful during live recording, when you aren’t sure exactly how high peak levels will be. More bits gives you the freedom to set levels conservatively low, so peaks won’t overload, but without losing resolution.

Another reason that 24 bits can be useful is related to the frequency spectrum of musical energy. Most music has its maximum energy at or near its lowest frequencies, and from the lower midrange upward, energy usually drops by around 6 dB / octave. By the time you get to the top octave, the level is down 30 dB or more, so you’ve lost at least 5 bits of resolution — sometimes more. You might think that 16 – 5 = 11 bits is enough. But since the overall level was below full scale to begin with, you don’t have 16 bits. You typically have only 8 bits for these high frequencies, which is only 48 dB. Recording in 24-bit gives you 8 more bits which solves the problem, giving you 16 bits in this top octave.
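The bit budget in that paragraph can be sketched numerically. Note the 18 dB recording-level margin below is my own illustrative assumption, chosen to make the “8 bits” figure concrete; real recordings vary:

```python
import math

db_per_bit = 20 * math.log10(2)  # ~6.02 dB of range per bit

rolloff_db = 30   # top-octave level drop from the spectral tilt (from the text)
margin_db = 18    # assumed overall recording level below full scale (hypothetical)

bits_lost = (rolloff_db + margin_db) / db_per_bit  # ~8 bits unused in the top octave
bits_left_16 = 16 - bits_lost                      # ~8 bits left with 16-bit recording
bits_left_24 = 24 - bits_lost                      # ~16 bits left with 24-bit recording
```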

Back in the late 80s there was another solution to this, part of the CD Red Book standard, called “emphasis”. An EQ boosted the top octave by about 10 dB before digital encoding, giving about 2 bits more resolution; after decoding, it was cut by the same amount. In principle, it’s Dolby B for digital audio. However, it is never used anymore because modern ADCs and DACs are so much better than they used to be.

However, once the recording is completed, you know what the peak level was. You can shift the amplitude of the entire recording up to set the peak level to 0 dB (or something close, like -0.1 dB). So long as the recording had less than 93 dB of dynamic range, this transforms the recording to 16-bit without any loss of information and without needing dynamic range compression.

In the extremely rare case that the recording had more than 93 dB of dynamic range, you can keep it in 24-bit, or you can apply a slight amount of dynamic range compression while in the 24-bit domain, to shrink it to 93 dB before transforming it. There are sound engineering reasons to use compression in this situation, even for purist audiophiles!

To put this into perspective: 93 dB of dynamic range is beyond what most people can pragmatically enjoy. Consider: a really quiet listening room has an ambient noise level around 30 dB SPL. If you listened to a recording with 93 dB of dynamic range, and you wanted to hear the quietest parts, the loud parts would peak at 93 + 30 = 123 dB SPL. That is so loud as to be painful; the maximum safe exposure is only a couple of seconds. And whether your speakers or amplifier can do this at all, let alone without distortion, is a whole ‘nuther question. You’d have to apply some amount of dynamic range compression simply to make such a recording listenable.

Mechanical Keyboards: Why the Buckling Spring Rules!

I have quite a bit of experience with high quality keyboards. I learned to type on a manual typewriter, later graduated to electrics. My first computer was a TRS-80 CoCo, which had a cheap chiclet-type keyboard, up there with Apple’s laptops as one of the worst keyboards I’ve ever used. We soon upgraded it to one that had buckling spring mechanical key switches. It typed like an electric typewriter, which was awesome. Largely due to hours spent on that computer, I was typing 90+ wpm as a teenager (less common in the 1980s than it is now). In college I had a PC-XT clone, a Leading Edge Model D, which had another excellent buckling spring keyboard. All through the 1980s, just about every PC had this kind of keyboard. This keyboard defined “the sound of work” in offices across the USA.

Over the 90s, cheap bubble dome keyboards became more common, until by the turn of the century they were ubiquitous and it had become nigh impossible to find a mechanical buckling spring keyboard. In 1999 I special ordered one from Unicomp and I still have it today; it works perfectly, though it has an outdated PS/2 connector.

Later I discovered the ergonomic joy of ten-key-less 87-key keyboards. Chop off the numpad that you hardly ever used, and the mouse (trackball in my case) gets closer. The rest of the keyboard has the exact same layout (including Home, End, and arrow keys) as the classic IBM 101, so your hands and fingers know where to go without thinking. But it’s more comfortable because you don’t have to stretch your right arm as far to reach the mouse.

Problem: nobody makes an 87-key buckling spring keyboard. I own and have extensively used a few 87-key keyboards with other mechanical switches: Cherry, Outemu, Zorro. They’re way better than bubble dome switches (no double or missed strikes), but not as nice as buckling springs. Why? Two key differences:

  • Crisp: a buckling spring has a crisp snap when the key strikes. You can type with confidence, with nary a doubt whether a light keystroke registered.
  • Force: a buckling spring requires a bit more force to actuate than other mechanical switches, which can accidentally strike when you are just resting your fingers on them.

It sounds small, but these two points make all the difference in the world. There is nothing like a buckling spring keyboard. Cherry and other mechanical switches are better than bubble domes, but pale in comparison.