Monthly Archives: May 2016

The Power of the Dark Side

First, let’s cut to the chase: the in-room far-field frequency response at the listening position, measured using 1/3-octave warble tones with a Rode NT1-A mic and corrected for mic response:


  • The red line is what you hear – near perfection!
  • The solid blue line is with room treatments, but without EQ
  • The dotted blue line is without room treatment

In short, you can see that room treatment (huge tube traps and copious use of thick RPG acoustic foam) made a huge difference. Then EQ finessed that to something near perfection.

Aside: this FR curve makes me wonder why people often say Magnepans don’t have good bass. Mine are near-flat to 32 Hz (and you can hear 25 Hz) with a level of tautness, speed and clarity that few conventional speakers can match. A subwoofer can go lower, which is great for movies and explosions, but most lack the accuracy and refinement needed for serious music listening.

Now, for the details:

I’ve been an audiophile since my late teen years, long before my income could support the habit. As an engineer and amateur musician I always approached this hobby from a unique perspective. The musician knows what the absolute reference really sounds like – live musicians playing acoustic instruments in the room. The engineer believes objectivity – measurements, blind listening tests, etc. – is the best way to get as close as possible to that sound.

Part of this perspective is being a purist, and one aspect of being a purist is hating equalizers. In most cases, EQ falls into one of two categories:

  1. There are flaws in the sound caused by the speakers or room interactions, and instead of fixing them you use EQ as a band-aid. This flattens the response but leaves you with distortions in the phase or time domain, like ringing.
  2. You don’t want to hear what live acoustic music really sounds like, you prefer a euphonically distorted sound and use an EQ to get it.

Equalizers are the dark side of audio. Powerful and seductive, yet in the end they take you away from your goal: experiencing music as close as possible to the real thing. Recently I traveled to the dark side and found it’s not such a bad place. Share my journey, if you dare.

I had my audio room here in Seattle dialed in nicely after building big tube traps, adding thick acoustic foam, and carefully arranging the room based on repeated measurements. However, it still had two minor issues:

  1. A slight edge to the midrange. From personal experience I describe it as the sound I hear rehearsing on stage with the musicians, rather than being in the 2nd row of the audience.
  2. The deepest bass was a bit thin, with 30 Hz down about 6 dB. I have a harp recording where Heidi Krutzen plays the longest strings, which have a fundamental around 25 Hz. I could hear this in my room, but it was a subtle whisper. It would be nice to hear that closer to a natural level.

My room treatments made a huge improvement in sound (and I have the measurements to prove it). But I don’t know of any room treatment that can fix either of these issues. The sound was very good both objectively (+/- 4 dB from 35 Hz to 20 kHz at listener position) and subjectively, and I enjoyed it for years. Then I got the LCD-2 headphones and Oppo HA-1 DAC. As I listened to my music collection over the next year (a couple thousand discs, takes a while), I discovered a subtle new dimension of natural realism in the music and wanted to experience that in the room.

Since my upstream system was entirely digital, equalization might not be as terrible as any right-thinking purist audiophile would fear. I could equalize entirely in the digital domain, no DA or AD conversion, before the signal reaches the DAC. And since the anomalies I wanted to correct were small, I could use parametric EQ with gradual slope, virtually eliminating any audible side effects.

That was the idea … now I had to come up with an action plan.

After a bit of Googling I found a candidate device: the Behringer DEQ2496. The price was the same at B&H, Adorama and Amazon, and all have a 30-day trial, so I bought one. The DEQ2496 does a lot of things, is complex to use, and is easy to accidentally “break”. For example, when I first ran the RTA function, it didn’t work. First, the pink noise it generates never played on my speakers. After I fixed that, the microphone I plugged in didn’t work. After I fixed that, the GEQ (graphic equalizer) settings it made were all maxed out (+/- 15 dB). Finally I fixed that and it worked. All of these problems were caused by config settings in other menu areas. There are many config settings, and they affect the various functions in ways that make sense once you understand them, but are not obvious.

NOTE: one easy way around this is, before using any function for the first time, to restore the system default settings, saved as the first preset. This won’t fix all of the config settings; you’ll still have to tweak them to get functions to work. But it will reduce the number of settings you’ll have to chase down.

In RTA (real-time analyzer) mode, the DEQ2496 is fully automatic. It generates a pink noise signal, listens to it through a microphone you set up in the room, analyzes the response, and creates an EQ curve to make the measured response “flat”. You can then save this GEQ curve in memory. You have two options for flat: truly flat measured in absolute terms, or the 1 dB / octave reduction from bass to treble that Toole & Olive recommend (-9 dB overall across the band). This feature is really cool but has two key limitations:

  1. It has no built-in way to compensate for mic response. You can do this manually by entering the mic’s response curve as your custom target response curve, but that is tedious.
  2. It provides only 15 V phantom power to your mic. Most studio condenser mics (including my Rode NT1-A) want 48 V, but aren’t that sensitive to how much voltage they get and work OK with only 15 V. But you always wonder how much of the mic’s frequency response and sensitivity you lose when you give it only 15 V. Perhaps not much, but who knows?
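Limitation 1 can be worked around with a little arithmetic: compute the custom target curve as the desired downslope plus the mic’s published deviation from flat, so the mic error cancels out of the final in-room response. Here’s a sketch of that idea; the mic numbers below are purely hypothetical (substitute your mic’s actual calibration data):

```python
import math

# Hypothetical mic deviation from flat, in dB, at a few 1/3-octave bands.
# These numbers are illustrative only -- use your mic's published data.
mic_response = {100: 0.0, 1000: 0.0, 5000: +1.5, 10000: +2.0, 16000: +1.0}

def target_curve(tilt_db_per_octave=-1.0, ref_hz=1000):
    """Custom target to enter into the auto-EQ: the desired downslope
    PLUS the mic's own deviation. The device then flattens the measured
    curve to (tilt + mic error), leaving the true in-room response on
    the tilt alone."""
    out = {}
    for f, mic_db in sorted(mic_response.items()):
        tilt = tilt_db_per_octave * math.log2(f / ref_hz)
        out[f] = round(tilt + mic_db, 1)
    return out

for f, db in target_curve().items():
    print(f"{f:>6} Hz: {db:+.1f} dB")
```

With a perfectly flat mic this collapses to the plain Toole & Olive tilt, which is why entering the mic’s response as the target works as the text describes.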

The GEQ settings the DEQ2496 auto-generated were too sharp for my taste, so I looked at the FR curve it measured from the pink noise signal. This roughly matched the FR curve I created by recording 1/3 octave warble tones from Stereophile Test Disc #2. Since both gave similar measurements, I preferred doing it manually, because that lets me correct for the mic’s response, and my digital recorder (Zoom H4) gives the mic full 48 V phantom power.

So the curves match: that’s a nice sanity check – now we’re rolling.

Using the DEQ2496, I created parametric EQ settings to offset the peaks and dips. This let me use gentle corrections – both in magnitude and in slope. I then replayed the Stereophile warble tones and re-measured the room’s FR curve. The first pass was two filters that got me 90% of the way there:

  • +4 dB @ 31 Hz, 1.5 octaves wide (slope 5.3 dB / octave)
  • -3 dB @ 1000 Hz, 2 octaves wide (slope 3 dB / octave)

These changes affected other areas of the sound, so I ran a couple more iterations to fine tune things. During this process I resisted the urge to hit perfection. Doing so would require many more filters, each steeper than I would like. It’s a simple engineering tradeoff: allowing small imperfections in the response curve allows fewer filters with gentler slope. Ultimately I ended up with near-perfect frequency response measured in-room at the listening position:

  • Absolute linearity: from 30 Hz to 20 kHz, within 4 dB of flat
  • Relative linearity: curve never steeper than 4 dB / octave
  • Psychoacoustic linearity: about -0.8 dB / octave downslope (+3.9 dB @ 100 Hz, -3 dB @ 20 kHz)
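The DEQ2496’s internal filter design isn’t documented here, but parameters like the ones above map onto a standard parametric (peaking) EQ. As a sketch, here is the textbook RBJ “Audio EQ Cookbook” peaking biquad, evaluating the combined response of the two first-pass filters (the 48 kHz sample rate is my assumption):

```python
import math, cmath

def peaking_biquad(f0, gain_db, bw_octaves, fs=48000.0):
    """Coefficients for a peaking EQ in RBJ Audio EQ Cookbook form.
    f0: center frequency (Hz), gain_db: boost/cut, bw_octaves: bandwidth."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) * math.sinh(math.log(2) / 2 * bw_octaves * w0 / math.sin(w0))
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def gain_at(f, b, a, fs=48000.0):
    """Magnitude response in dB at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# The two first-pass filters from the text:
b1, a1 = peaking_biquad(31, +4.0, 1.5)    # +4 dB @ 31 Hz, 1.5 octaves wide
b2, a2 = peaking_biquad(1000, -3.0, 2.0)  # -3 dB @ 1 kHz, 2 octaves wide
for f in (31, 62, 500, 1000, 2000):
    total = gain_at(f, b1, a1) + gain_at(f, b2, a2)
    print(f"{f:>5} Hz: {total:+.1f} dB")
```

Because both filters are wide and gentle, their combined curve changes by only a few dB per octave anywhere in the band – exactly the low-ringing behavior the text is after.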

The in-room treble response was excellent to begin with, thanks to the Magnepan 3.6/R ribbon tweeters. Some of the first EQ filters affected that slightly, reducing the response from 2 kHz to 6 kHz, so I put in a mild corrective boost.

Subjectively, the overall before-after differences are (most evident first):

  • Midrange edge eliminated; mids are completely smooth and natural, yet all the detail is still there.
  • Transition from midrange to treble is now seamless, where before there was a subtle change in voicing.
  • Smoother, more natural bass: ultra-low bass around 30 Hz is part of the music rather than a hint.
  • Transition from bass to lower midrange is smoother and more natural.

In other words, audiophile heaven. This is the sound I’ve dreamed of having for decades, since I was a pimpled teenager with sharper ears but less money and experience than I have now. It’s been a long road taken one step at a time over decades to get here and it’s still not perfect. Yet this is another step toward the ideal and now about as close as human engineering can devise. The sound is now so smooth and natural that the stereo stops reminding me it’s there and enables me to get closer to the music, which now has greater emotional impact. And it’s more forgiving of imperfect recordings, so I can get more out of some old classics, like Jacqueline du Pré playing Beethoven Trios with Benjamin Britten and Arthur Rubinstein playing the Brahms F minor quintet with the Guarneri.

Throughout this process, I could detect no veil or distortion from the DEQ2496. The music comes through completely transparently. I measured test tones through the DEQ2496 in both pass-through and with EQ enabled; it introduced no harmonic or intermodulation distortion at all. That is, anything it might have introduced was below -100 dB and didn’t appear on my test. This is as expected, given that I’m using it entirely in the digital domain – no DA or AD conversions – and my EQ filters are parametric, small with shallow slope.

While I was at it, I created a small tweak for my LCD-2 headphones. Their otherwise near-perfect response has a small dip from 2 to 8 kHz. A little +3 dB boost centered at 4.5 kHz, 2 octaves wide (3 dB / octave, Q = 0.67), made them as close to perfect as possible.
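The bandwidth-in-octaves and Q figures quoted in these filters are related by a standard conversion. A quick check of my own arithmetic (not from the DEQ2496 manual):

```python
import math

def octaves_to_q(n_octaves):
    """Convert a filter bandwidth in octaves to the equivalent Q factor,
    using the standard relation Q = sqrt(2^N) / (2^N - 1)."""
    w = 2 ** n_octaves
    return math.sqrt(w) / (w - 1)

print(octaves_to_q(2.0))   # ~0.67 -- matches the headphone filter above
print(octaves_to_q(1.5))   # the width of the 31 Hz room filter
```

For 2 octaves this gives exactly 2/3, which is where the Q = 0.67 in the headphone tweak comes from.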

Overall, I can recommend the DEQ2496. Most importantly, it enabled me to get as close as humanly possible to perfect sound. That in itself deserves a glowing recommendation. But it’s not a magic box. I put a lot of old fashioned work into getting my audio system in great shape and used the DEQ2496 only to bridge that last few percent. Like any powerful tool, the DEQ2496 can be used for evil or for good. So to be fair and complete I’ll list my reservations:

  • The DEQ2496 is not a magic band-aid. You still need to acoustically treat and arrange your room first to fix the biggest problems. After you do that, you might be satisfied and not need the DEQ2496.
  • The DEQ2496 is complex to use, creating the risk that you won’t get it to work right or you’ll get poor results.
  • To use the RTA feature you’ll need an XLR mic with wide, flat frequency response.
  • I cannot assess its long-term durability, having had it in my system for only a few days. Many of the reviews say it dies after a year or two, but they also say it runs hot. Mine does not run hot, so maybe Behringer changed something? Or perhaps mine runs cooler because I’m not using the D-A or A-D converters. It does have a 3-year manufacturer warranty, longer than most electronics.

Housing in San Francisco

Kudos to Eric Fischer for a detailed analysis of Housing in San Francisco.

He did a regression and found three key factors that correlate with housing prices:

  • Housing Supply: how much housing is available on the market
  • Salaries: how much are people in the area earning?
  • Employment: how many people in the area are employed?

Interestingly and surprisingly, the trend of rents over time was quite steady, unaffected by the introduction of policies like rent control. The data and regression suggest that housing follows the basic laws of supply and demand, just like other commodities.
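Fischer’s actual dataset isn’t reproduced here, but a multiple regression of the shape he describes is easy to sketch. The numbers below are entirely made up, purely to illustrate fitting rent against those three factors with ordinary least squares:

```python
import numpy as np

# Hypothetical yearly observations (illustrative only, NOT Fischer's data):
# columns: housing units on market, median salary ($k), jobs (thousands)
X = np.array([
    [380, 70, 520],
    [360, 75, 540],
    [340, 82, 565],
    [330, 90, 590],
    [320, 99, 610],
], dtype=float)
rent = np.array([1800, 2050, 2350, 2700, 3050], dtype=float)

# Ordinary least squares: rent ~ b0 + b1*supply + b2*salary + b3*employment
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, rent, rcond=None)
pred = A @ coef
print("coefficients:", np.round(coef, 2))
print("max residual:", round(float(np.max(np.abs(rent - pred))), 1))
```

The sign and size of each coefficient is what such an analysis reads off: with real data, supply should push rents down while salaries and employment push them up.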

Android 6 / Cyanogenmod 13 – Not Yet Ready for Use

Update: not so fast. Today I picked up my phone and certain icons were missing from the home screen. Apps stored on the SD card randomly and intermittently disappear from the home screen (even without rebooting, just waking from sleep). And on booting, I get a system modal error dialog and their icons disappear. The apps are still installed, and I can run them if I wait a few moments after booting. It turns out this is the same problem my tablet had before: adoptable storage is still not working as designed.

On top of that, WiFi mysteriously stopped working and could not be enabled, even after rebooting. And the MAC address reverted to 02:00:00:00:00:00. Looks like CM 13 is not yet ready to use on the Galaxy Note 2. I reverted to my CM 12.1 backups and may try CM 13 again in a few months.

A while ago, I had this to say about Android 6 adoptable storage. Android 6 is great, but this feature just didn’t work. No big deal, I simply kept using the SD card the same old way I’ve been using it for years.

That was a custom port of CM 13 to my old 8″ Galaxy Tab 3. Recently, Cyanogenmod released nightly builds of version 13 for my Galaxy Note 2. Yeah, it’s an old phone. I’ve been using the same phone for almost 4 years. But why not? It’s a fantastic phone, still works like new with great performance and amazing battery life.

I decided to give adoptable storage another try, this time with an official build (albeit a nightly build). Long story short: it works perfectly, with none of the problems I encountered before.

  • The SD card mounts to /storage/emulated/0. This is the same location where internal storage used to be mounted.
  • Because of this, apps like the AOSP camera store their data on the SD card, even when set to “internal”. They’re storing data to the same location, not realizing the SD card is mounted there.
  • Same with the system Downloads directory – nice!
  • Apps set to prefer external storage install to the SD card, as well as additional data they download. All automatically.
  • From the system Apps menu, you can change some apps between external and internal storage.
  • For generic shareable folders like music and videos, I’ve had no problems with permissions.
  • I’ve had no problems with Folder Sync and other apps that read & write the same files across several apps.
  • When you plug the phone into a computer via USB, only one directory shows up: the SD card. Previously, two directories showed up: internal and external.
  • File Managers like Solid Explorer still detect the SD card and use it correctly.

Simple advice:

  • Use a high quality class 10 or faster card. Cards are so cheap now, the speed and reliability are definitely worth it.
  • Use external storage for apps that accumulate lots of data: mapping apps like Sygic and Droid EFB that download entire states or areas, Evernote which can store GB of data locally, Folder Sync which syncs Dropbox and Box to your local device, Titanium Backup, etc.

Overall, it works automatically and seamlessly, simplifying storage space on the phone. I have not had to do any tweaks.

Ubuntu 16 Has Arrived – But Wait!

Ubuntu 16 was released a few weeks ago, the latest version and an LTS release, which means long-term support (5 years). All even-numbered releases are LTS. I’ve been running Ubuntu since version 10 and updated three machines to Ubuntu 16. A year or two ago I switched to the Xubuntu variant because I don’t like the Unity interface and XFCE is faster and lighter on CPU and RAM. My advice is to stick with Ubuntu 14 if that’s what you’re already running. At least for now.

First, if you have a laptop that needs support for power management, you need the version 4 Linux kernel and must already be running Ubuntu 15. Just keep running it. If you have a desktop, you’re probably running Ubuntu 14, which is a solid release and still supported.

Second, Ubuntu 16 has few practical improvements or upgrades that you might notice. The only difference I’ve noticed is that the OpenConnect VPN script is fixed: Ubuntu 15 required a manual route command after connecting, while Ubuntu 16 does not. Ubuntu 14 never had this bug.

Third, the Ubuntu 16 upgrader is broken and crashes, so if you try to update you’ll have to fix and complete it manually.

Fourth, Ubuntu 16 has a serious bug: a memory leak in the Xorg process. Previously it used 50 – 100 MB of RAM. On Ubuntu 16 it slowly but constantly grows, after a couple of days reaching a couple of GB, until the system starts swapping. You need to log out to kill the Xorg process and start a new one. This bug occurs only on my desktop using open source video drivers. The other desktop with Nvidia binary drivers, and laptop with Intel HD graphics do not have this bug.

Details: I updated two desktops and a laptop, all 64-bit. One desktop has a generic video card using open source drivers. The other has an Nvidia Quadro K600 using the Nvidia binary driver from the Ubuntu repo. The laptop is a 2015 Thinkpad Carbon X1 with Intel HD graphics. All three were running Ubuntu 15.10, fully up-to-date and running great, before upgrading.

In all cases, I ran do-release-upgrade from a command prompt. It crashed – didn’t fail, but actually crashed – about halfway through, leaving my machine reporting itself as Ubuntu 16, but with apt-get in a broken state. To complete the install, I ran the following sequence of commands:

apt-get -f install
apt-get update
apt-get dist-upgrade
apt-get autoremove

I repeated this sequence until apt-get stopped reporting errors. This completed the install – at least I think so. All the repos seem to have updated, the system is running kernel 4.4.0, it reports itself as Ubuntu 16, and it is running fine.

It’s nice to have VPN work without needing an extra route command. But unless you have a burning need to get on Ubuntu 16, I advise waiting for Canonical to fix the upgrader before installing.

How Strong are Small Airplanes?

The FAA defines three categories for small airplanes:

Normal: all standard private and commercial flight maneuvers up to 60° bank angle. Must withstand 3.8 G or more.

Utility: additional flight maneuvers like spins and > 60° bank angles. Must withstand 4.4 G or more.

Acrobatic: any maneuver or bank angle not prohibited by the POH. Must withstand 6.0 G or more.

All certified GA airplanes meet the Normal category, many (like the Cessna 172) meet the Utility category, and some meet Acrobatic. With 3.8 G as the minimum, this means airplanes are built very strong.
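The bank-angle limits and G ratings above are linked by the load factor in a coordinated level turn, n = 1/cos(bank angle). A quick check of the numbers:

```python
import math

def load_factor(bank_deg):
    """Load factor (G) required to hold altitude in a coordinated level turn."""
    return 1.0 / math.cos(math.radians(bank_deg))

for bank in (0, 30, 45, 60, 75):
    print(f"{bank:>2} deg bank: {load_factor(bank):.2f} G")  # 60 deg -> 2.00 G

def bank_for_g(g):
    """Bank angle at which a given G limit is reached in a level turn."""
    return math.degrees(math.acos(1.0 / g))

print(f"3.8 G (Normal category) reached near {bank_for_g(3.8):.0f} deg of bank")
```

So the Normal category's 60° limit corresponds to only 2 G in a sustained level turn; the 3.8 G floor leaves margin for gusts, pull-ups, and steeper momentary banks.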

You don’t really know how strong the airframe is because the G rating is a minimum. It can handle that G load under normal operation. Certainly it can handle more, but how much more is unknown. If you exceed it, you’re the test pilot – that’s bad, don’t do that.

Being certified Utility doesn’t necessarily mean the airplane can perform any maneuver exceeding 60° of bank. For example, many aircraft certified Utility are not approved for spins. Prohibitions like this are listed in the POH.

Airplanes certified for multiple categories may not always satisfy all categories. For example, the Cessna 172 is certified Utility only when gross weight is under 2,100 lbs. and CG is forward of a certain point. Otherwise, it’s certified Normal.