Design before you “minify”

In a fit of frustration the other evening, I posted a vulgar little rant on Twitter.

I’m not really a curmudgeon, I just play one online. And apparently folks are watching my show because that one-off rant received a lot more “likes” and “retweets” than I was expecting.

But some were confused by what I meant. So I’ll explain.

That vulgar epiphany came to me while I was measuring the embarrassing sloth of this website¹ and examining the sluggish behavior of quite a few others. Obviously I shouldn’t cast the first stone, but there’s a lot of unnecessary bloat elsewhere out there, too.

I’m not saying that you shouldn’t use tactics like minification, resource concatenation, server-side compression, etc. to improve performance.

But have a strategy for performance first. Have a design. Consider whether you need all those libraries you’re tempted to include. Consider whether you need to write even more JavaScript, CSS, JSON and Christ-knows-what-all to “improve” the user experience.

Maybe leveraging those Content Delivery Networks will let you get away with it. But maybe they won’t.

Then again, what the hell do I know? I’m just an old Web browser guy. So I’ll leave you with this quote, sometimes attributed to Albert Einstein, that I kept in my .plan file back when that was a normal thing to have around:

Everything should be made as simple as possible, but no simpler.

I’m just trying to get people to think a little bit more before they deploy. I certainly wish I had here.


  1. I’m currently over-burdened with a relational database, a resource-hungry theme, complicated plugins and other dynamic functionality I’ll probably never use. So I’m seriously considering a return to just static HTML.

Interviewed by a twelfth grader

So, this happened. I was interviewed by David Silverman this week. Nice kid. And considering all the high-quality exchanges from other people on his site — some by friends of mine — I’m really impressed that David is still just in high school. He’s going far.

David had some really clever questions, too. I wasn’t expecting, “You are trapped on a desert island…” But it’s not like he was the Spanish Inquisition, either. I was allowed to answer via email so I could craft the responses myself from my comfy chair. I’m just particular that way.

All in all, a good experience and much better than my last phone interview. Thanks, David.

The move to WordPress and the traffic that followed

On Tuesday evening, I converted this self-hosted site from a collection of simple, static HTML files to a full WordPress content management system.

And on Wednesday it received 93,112 page requests. Not the most my site has handled in a single day, but still atypical. All without any downtime, too.

Now, moving to WordPress had nothing to do with all the traffic. That was just a happy testing and validation accident.

The traffic surge all started with a post to Hacker News by someone I’ve never met. And I didn’t even notice the event until 8:45 that morning when their post made it all the way to the front page.

The funny thing is, this was a link to something I’d published and (mostly) forgotten about on Sunday. It was three days old by the time it got any real attention. Go figure.

Then sometime early Wednesday afternoon, John Gruber linked to the same page on Daring Fireball. Rene Ritchie piled on later with a link from iMore (and some damn good commentary). And other folks started linking to it as well.

Through it all, my server just kept humming along, not showing any significant memory load either.

What enabled all this reliability from a little WordPress blog? Just some common sense, really. But I can thank Guy English for at least one suggestion.

A few weeks ago when I first considered moving to WordPress, I noticed that Guy was using DreamHost for his blog, kickingbear. Since I also use DreamHost (they’re good people) and Guy’s blog was a WordPress site, I was curious if he had done anything special to survive his last Fireballing by our friend John Gruber.

The big difference in Guy’s configuration from mine was him using a virtual private server (VPS) instead of shared hosting. Good idea, I thought. Sure, it’s a little more expensive but there’s much more isolation that way, preventing the Apache HTTP server from getting too distracted.

Guy also suggested installing WP Super Cache, a plugin for WordPress. But I already made that a requirement. I’m not grotesquely stupid. Nobody should ever run a self-hosted WordPress site without some form of static page caching.

The other strategy I applied was a content delivery network (CDN) to offload stylesheets, scripts and images. This was easy since I was already using MaxCDN for my static site and WP Super Cache has built-in support for rewriting internal resource URLs to point at CDNs instead.
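Conceptually, that URL rewriting is just swapping the origin host for the CDN host on static assets. Here’s a minimal sketch of the idea in Python, with hypothetical hostnames — this isn’t WP Super Cache’s actual code, which does the work in PHP as pages are cached:

```python
import re

# Hypothetical hostnames for illustration only.
ORIGIN = "https://example.com"
CDN = "https://cdn.example.net"

# Only rewrite static assets under wp-content; page links keep pointing at the origin.
ASSET_PATTERN = re.compile(
    re.escape(ORIGIN) + r"(/wp-content/[^\"' ]+\.(?:css|js|png|jpg|gif|svg))"
)

def rewrite_asset_urls(html: str) -> str:
    """Point stylesheet, script and image URLs at the CDN."""
    return ASSET_PATTERN.sub(CDN + r"\1", html)

page = '<img src="https://example.com/wp-content/uploads/logo.png">'
print(rewrite_asset_urls(page))
# <img src="https://cdn.example.net/wp-content/uploads/logo.png">
```

The nice part of doing this at caching time is that the rewrite cost is paid once per cached page, not once per request.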

One thing I didn’t try was moving from FastCGI to XCache for PHP. I simply don’t have enough experience to know if that’s an advantage for WordPress, or the configuration tricks in making such a transition. But if you do, please comment on this post or contact me. I’m not above admitting ignorance or asking for help.

While 93,112 page requests seems like a large number for one day, it’s really only a little faster than one page per second on average. And the number of simultaneous users probably never got over 500.
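For anyone checking my arithmetic, here’s the back-of-the-envelope version:

```python
# Average request rate for the day: total requests over seconds in a day.
requests = 93_112
seconds_per_day = 24 * 60 * 60  # 86,400

rate = requests / seconds_per_day
print(f"{rate:.2f} requests per second on average")  # about 1.08
```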

A more stressful day on the site is still needed to shake things out and assuage all my fears about reliability. But this is a good start.

And a real incentive to write more.

The best “The Force Awakens” review

Not mine. I think that honor should go to this piece by my friend and podcasting partner, Matt Drance.

You should read it. Well, you should read it if you’ve already seen “Star Wars: The Force Awakens” because it contains some rather significant spoilers. Caveat lector and all that.

Anyway, I like Matt’s review so much, in fact, that I wish I could have written it. But I’m just not that insightful.

Which is why we canceled today’s recording of a “Review” episode on the film. Because Matt couldn’t make it. And having him is essential. Really.

Sorry about that. I know I teased it yesterday, but we will reschedule. In the meantime, read Matt’s review.

I forgot about “Jessica Jones” so it’s three podcasts in one day

Last night I wrote about having two podcasts published on the same day. Hell, it wasn’t even a humblebrag. Pretty blatant egotism, really.

But it turns out there were actually three!

The latest episode of “Review” (subscribe here) debuted yesterday with me, Georgia Dow and, of course, Rene Ritchie discussing the awesome new “Jessica Jones” series which arrived on Netflix last month.

If you haven’t seen “Jessica Jones,” abandon your family and watch now. Well, maybe they could watch with you. If they’re worthy.

Anyway, my sometimes podcasting partner Philip Mozolak alerted me that our new episode was available last night. And, stupid me — being distracted with getting this site up and running on WordPress — well, I thought that particular show had been out for quite a while. After all, we recorded it in early December.

Seriously, I don’t keep track of all these podcasts. I don’t. Well, not the ones I’m on because, duh, I’ve heard them already.

That’s Rene’s job. And he certainly does all the hard work of organizing, recording and editing most of them. He’s the big boss. I guess he just picked yesterday to clean house on the publishing part.

So, now I’ve got a personal milestone. Three shows in one day. And you thought I was insufferable already. Thank you, Rene, for making that possible.

Two podcasts in one day

Looks like Christmas (or Festivus) came early this week. At least for people who like to hear me drone on and on.

First up, a new episode of “Debug” (subscribe here) with Nitin Ganatra. This is our fourth appearance together where Guy English and Rene Ritchie trick us into alcohol-fueled rants about our former employers. Actually, it’s not like that at all. We do know how to hold our liquor by now.

This time we talked about managing teams, employee retention and other such pointy-haired activities. I really appreciate Guy choosing that topic and then inviting me on the show. Lots of fun too, mixing it up with those knuckleheads again.

Then there’s a new episode of “Melton” (subscribe here) with my co-host Rene Ritchie. Yeah, him again. And Rene must be working overtime because we recorded it just this afternoon. I hadn’t even sent him my side of the audio before he just published the whole damn thing. You’re on fire, Rene!

And I could go for a triple this week because I’m recording another show tomorrow with Rene, Guy, Matt Drance and Georgia Dow. May the Force be with us because I think you can all guess what that podcast will be about.

Powered by WordPress, proudly or not

I just saw “Star Wars: The Force Awakens” so I thought now’s a good time to finally make that move to the dark side.

Which means it’s not you. My website really did change significantly today. And converting it to HTTPS last week was only the beginning.

Yes, I’ve made a few jokes about WordPress in the past. (Like the opening in this post.) Thankfully Matt Mullenweg still follows me on Twitter. He’s very forgiving. Although the rest of you might not grant me absolution now that I’m no longer using Magneto, my own static website generator.

But I really don’t care about the loss of geek cred. I have plenty and I just want to write. WordPress, it seems, goes out of its way to empower that desire. Seriously.

While the free-range, handcrafted, artisanal nature of the HTML previously here afforded me a certain self-righteous smugness, a static generator can be a pain in the ass to use every day. And though I liked the command line Magneto required, I didn’t want to spend all my time there with it.

What I really needed was a publishing system easily accessible from anywhere — even mobile devices — to quickly create and deploy content. Which is the whole point of having a blog that people want to read.

And I’ve been longing to compose and publish my posts within Safari, my Web browser of choice. I don’t know if you’ve ever tried WordPress — or tried it recently — but I think it has the best in-page rich text editor out there. And it’s completely extensible.

Sure, you may have to huff a little PHP to do that, but you’d be surprised how much of the WordPress front end is just plain JavaScript now. Including a new Open Source desktop app, Calypso, written almost entirely in that language.

Times are changing. And did I mention that whole WordPress-powers-25%-of-the-Web thing?

Anyway, I wanted to move away from the static and toward the dynamic. Especially with a responsive site design.

Now, I could have just written the CSS and JavaScript myself, media queries and such, to make the site not suck on the iPhone. But, it turns out I’m old and lazy. Plus, the WordPress team has done the really hard part — not just the writing — but the testing of a responsive design on just about every platform out there.

So what you’re looking at now is a “child” theme built to inherit most of its appearance and behavior from one of the bundled designs in WordPress. It still has it’s own personality, but it’s a quick way to get up and running without making a huge investment in a college fund.

Sure, I also wrote a few plugins, but I’m not trying to tweak every little thing that WordPress does by default. I don’t want to go down that path again because the last time I tried that I wound up writing my own anal-retentive system instead. And I’ve learned the hard way that blogging is not about the HTML tags surrounding the words that you type.

Of course, I’ll be the first to admit that a WordPress site requires constant vigilance to keep it working properly. There are several tools available to help with that, but forced participation is a good thing if it draws me back here to write. Because we all know that neglect has been this site’s only real problem.

By the way, although I’m not using Magneto now, it was essential for migrating my site to WordPress. I took all my same content and just repurposed it through plugins and a script — within Magneto — to generate a WordPress-compatible, RSS-style import file. I suspect many other static website generators could do the same.
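I won’t reproduce the actual Magneto script here, but the shape of the job is simple: walk the posts and emit RSS-style items that the WordPress importer understands. Here’s a toy sketch in Python with made-up post data — the real WordPress import format layers its own namespaced elements on top of RSS, which I’m leaving out:

```python
from xml.sax.saxutils import escape

# Hypothetical post data; a static generator would pull this from its source files.
posts = [
    {"title": "Hello, WordPress", "url": "https://example.com/hello/", "html": "<p>Hi there.</p>"},
]

def rss_item(post):
    """Build one RSS-style <item> with the markup safely escaped."""
    return (
        "<item>"
        f"<title>{escape(post['title'])}</title>"
        f"<link>{escape(post['url'])}</link>"
        f"<description>{escape(post['html'])}</description>"
        "</item>"
    )

for post in posts:
    print(rss_item(post))
```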

All of the original post URLs should be the same on this new site. My apologies for changing the RSS feed but the old link should redirect to the new one.

Let me know if you see anything grotesquely wrong. In the meantime, I’ll keep typing.

What the new video compression strategy from Netflix means for Apple and Amazon

Last week, several folks on Twitter pointed me to this technical post from Netflix about their new video compression strategy. While not yet implemented, it promises to save bandwidth while improving quality for some content.

And the article is very nearly a nerdgasm for a transcoding geek like myself. I’d still like to see more details about the exact rate control mechanism they’re using and actual encoder arguments but, hey, you can’t have everything.

The tl;dr of it all is simply that Netflix plans on scaling bitrates up and down based on the complexity of their video. So, slightly higher bitrates for busy action blockbusters and possibly lower bitrates for relatively static, flat cartoons.

Basically what we’ve all been doing for years with variable bitrate (VBR) encoding. But they’re trying to control that variance a lot more than an encoder like x264 typically allows. In fact, as near as I can tell, Netflix still plans on encoding everything with a constant bitrate (CBR), but they want to be really particular about the target number.

To do that, Netflix will transcode every one of their videos a bazillion times at different resolutions and at different bitrates, finally selecting the smallest one for a particular title that doesn’t suck visually. Seriously, their algorithm for all of this is quite clever.
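In toy form, that selection step might look something like this — the numbers and the quality floor are invented for illustration, and Netflix’s real pipeline uses its own quality metrics and many more rungs:

```python
QUALITY_FLOOR = 80  # hypothetical "doesn't suck" threshold on some quality score

# (resolution, bitrate_kbps, measured_quality) -- pretend encoder output
candidates = [
    ("1080p", 5800, 96),
    ("1080p", 4300, 93),
    ("1080p", 2350, 88),
    ("1080p", 2000, 82),
    ("720p",  1750, 74),
]

def pick_rung(candidates, floor=QUALITY_FLOOR):
    """Smallest bitrate whose measured quality still clears the floor."""
    good = [c for c in candidates if c[2] >= floor]
    return min(good, key=lambda c: c[1])

print(pick_rung(candidates))  # ('1080p', 2000, 82)
```

Raise the floor and the cheapest acceptable rung climbs right back up the bitrate ladder, which is the whole per-title trade-off in one line.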

And the new Netflix proposal will likely succeed. After all, they have a server farm the size of a small country to do all those iterations.

Since the rest of us don’t have that kind of hardware, the rate control system used in my video_transcoding project might be more appropriate.

Anyway, besides all the geekery, what struck me about this whole plan by Netflix is that Apple and Amazon will likely go down the same path. For competitive reasons, if nothing else.

They all have the same server farms. Owned by Amazon, no doubt. And there aren’t any technical hurdles. It’s just more computation.

At least Apple and Amazon will likely do this for streaming. But I’m not sure that’s true for sales of digital video downloads.

Let me explain.

When Apple first opened the iTunes Store to sell music, those audio files were provided at 128 Kbps in AAC format using Apple’s own encoder.

And that encoder was quite good, but back then it was only used for constant (CBR) and average bitrate (ABR) output. So a track that was advertised as being 128 Kbps was very likely encoded at or very near 128 Kbps. You got what you paid for.

Later, Apple did away with audio DRM and upped the bitrate to 256 Kbps. For nearly the same price. It was awesome. And we all remember the awesomeness of it.

Apple also developed a new version of their audio encoder with a true variable bitrate (VBR) mode. And that new mode produced just as good if not better quality audio than the CBR and ABR schemes. Often at much lower bitrates, too.

But I suspect that was a problem.

You see, it would probably be difficult to sell those VBR files — some of which were quite a bit lower than 256 Kbps and a few even lower than 128 Kbps — because customers might perceive a loss of value.

I think this is why Apple developed a new encoding mode they call Constrained VBR. It has all the benefits of the regular VBR mode, but it just doesn’t dip the bitrate too low. In a way, it acts like the old ABR mode, occasionally wasting space for less complex audio.
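A crude way to picture the difference: plain VBR lets the bitrate fall wherever quality allows, while a constrained mode clamps it to a floor. This toy sketch is my own illustration of the idea, not Apple’s actual encoder logic:

```python
FLOOR_KBPS = 192  # hypothetical floor for a nominally 256 Kbps track

def constrain(per_segment_kbps, floor=FLOOR_KBPS):
    """Never let any segment's bitrate dip below the floor."""
    return [max(kbps, floor) for kbps in per_segment_kbps]

plain_vbr = [96, 128, 240, 310, 180]   # what unconstrained VBR might choose
print(constrain(plain_vbr))            # [192, 192, 240, 310, 192]
```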

Of course, for some tracks the Constrained VBR output is larger than 256 Kbps. In fact, all of the songs on Taylor Swift’s “1989” are larger than 256 Kbps. I bet you’re thinking, “Wow! More value for my money!” (And maybe, “WTF? Gramps listens to Taylor Swift?”)

But there are quite a few audio files in the iTunes Store that could probably be a lot smaller with no perceived loss of quality if Apple used that original VBR mode to do the encoding.

I would bet money that Amazon ran into this same conundrum with the unconstrained VBR mode of the LAME MP3 encoder which they use. And this might explain why some of Amazon’s files are in CBR format, artificially boosting their size.

Anyway, Netflix is talking about the bitrates for their 1080p videos soon being as low as 2000 Kbps for the simple stuff. That’s down from the 4300-5800 Kbps range they’re using now. And I’m sure they can do that on the low end without any perceivable loss of quality while streaming.

But can Apple and Amazon sell 1080p videos — averaging about 5000 Kbps now — at bitrates as low as 2000 Kbps — less than half that average size — without a perceived loss of value?

I don’t know. It’s hard to predict because consumers… well… we’re fucking stupid.

Or maybe HTTPS isn’t so hard

Minutes. Maybe even seconds after I published that last post… Voilà! My new static IP address propagates through DNS, HTTPS works perfectly, and I sound like a cranky old guy. Again.

They tell me comedy is all about timing.

Anyway, I pushed out the new content changes and, of course, they worked perfectly too. Dammit! You really can switch to HTTPS in less than half a day. Even if you’re an idiot like me.

But I had help. Props to DreamHost and MaxCDN for how easy their systems are to use. I recommend them both.

Moral of this story? Remain calm and don’t fear the encryption.

Moving to HTTPS is hard

Since it’s the third anniversary of my weblog here today, I was hoping to make the switch over to HTTPS and mark the occasion. Alas, trying to do everything in one day turned to be incredibly naive and optimistic.

Especially when you don’t start the process until late afternoon.

You probably think I’m an old hand at HTTP-to-HTTPS migration but, to be honest, this is the first time I’ve attempted it. While it’s true that I call myself a Web geek, I’m usually referring to the client side of that particular nerdity. Remember, we are a diverse species.

But I have made some progress. With my service provider, DreamHost, doing most of the heavy lifting, of course.

So far, I’ve switched over from shared hosting to a virtual private server. This wasn’t strictly necessary for HTTPS, but I have other evil plans on the drawing board that’ll fit better with that configuration. Stay tuned.

Provisioning the server and copying everything to it took a few hours and then required its new IP address to propagate through DNS, which took a few more.

Stupidly, I forgot that you really need a static IP address to make HTTPS not suck, so I had to acquire one of those while I ordered the SSL certificate.

The good news is that the certificate is now installed. The bad news is that the server is refusing to connect via HTTPS on port 443. The DreamHost folks suspect this is because the new static IP address hasn’t propagated yet through DNS. But we won’t know for sure until that happens anyway.

So, again, the next step is… waiting. Pro tip: get your static IP address set up first if you’re moving to HTTPS.

One bonus with watching this particular kettle boil so often and for so long is that it’s given me time to prepare all the content changes to the site for HTTPS support. Like, you know, redirects. And if I’m lucky, I’ll find out tomorrow whether those changes worked.
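For the record, the usual Apache recipe for those redirects looks something like this — a common mod_rewrite sketch, not necessarily my exact configuration:

```apache
# Send all plain-HTTP requests to their HTTPS equivalents.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]
```

The 301 matters: a permanent redirect tells search engines the move is for keeps.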

If they do, then it’s a mad dash to the Google Search Console and adding the HTTPS version of my site to their system before my page ranking drops off the radar.

Really, what could be easier than all of this?