Capo 1.2 for iOS released today

• Chris Liscio

I'm so very happy (read: relieved) to say that Capo 1.2 for iOS has landed on the App Store. This update took far longer than I'd have liked, because it was really tough to get right.

See, slowing audio properly requires a lot of CPU power. On a desktop Mac, there's plenty of power to spare these days. Unfortunately, that's not the case in the mobile world. Because Capo is both graphics- and audio-intensive, it eats up a lot of CPU power.

When version 1.0 shipped, Capo was eating up about 80% CPU while playing audio in the foreground (i.e. with graphics visible) on a 3rd-generation iPod touch, and maybe 75-80% on an iPhone 4. The CPU and RAM improvements in the iPhone 4 were offset by the increased pixel density pushing the GPU a little harder. C'est la vie.

If I wanted to do anything else with the app, I'd have to fit it into approximately 20% of leftover CPU.

Capo's effects are both graphics- and CPU-hungry, so I had my work cut out for me to add them in the next version. This was especially challenging because I wanted to keep the live-updating spectrums in the effects controls' backgrounds, as I have on the desktop version. Go big or go home, right?

Luckily for me, there were optimizations added to the slowing engine I use, as well as some further optimization work on my part. That 20% margin increased a little bit more, so I was able to shoehorn an initial implementation of the effects UI into the mix.

The initial approach simply sucked—it caused various skips in the audio, and the graphics were very choppy. It was especially noticeable when you'd scroll the effects controls while they were enabled.

The first culprit was my own effects code. I had built my own equalizer, but according to Instruments it was taking about 15% of the app's runtime. I had to take care of this, and re-wrote the tight loop using NEON intrinsics. That took it down to 1.5-2%—sweet!

I also had to re-write my FFT-based vocal reduction to use the Accelerate version of FFT rather than the implementation I previously used. I don't recall the specifics, but I think the result was approximately 10x faster—the performance optimization team at Apple really kicked ass with their FFT implementation!!

Unfortunately, there was still about 30% of runtime being allocated to my live-updating spectrum drawing. I initially implemented the spectrum drawing using Core Animation and Quartz. That had to be thrown out, and I instead re-built the live spectrum views using OpenGL ES 2. A few custom shaders and other black magic made short work of that—reducing the runtime to about half of what it was before—15-18%. It cost me the prettier anti-aliased curves, but the speed boost was worth it.

In the end, I'm proud of what I shipped today. I know that the progress isn't as quick as I'd hoped on the iOS side of Capo, but I think I truly nailed this effects implementation. Because of this extended effort, Capo will continue to be the best tool for learning your music by ear on your iPhone, iPod, or iPad.

I can't wait to find out what else I can shoehorn into that "last 20%" in the future…