Sunday, December 29, 2013

Please Stop the jsPerf.com Abuse!


According to many of the tests on jsperf.com, jQuery is slow. Really slow. The jQuery team gets bug reports from web citizens who've discovered these egregious flaws via jsperf and want them fixed. Now, jsperf.com can be a great tool when used knowledgeably, but its results can also be misinterpreted and abused. For example:

Why doesn't jQuery use DOM classList for methods like .addClass()?

There are at least three good reasons.
  1. It doesn't make any practical performance difference given its frequency of use.
  2. It complicates code paths, since classList isn't supported everywhere.
  3. It makes jQuery bigger, which makes it load slower for everyone, every time.
But this jsperf shows classList is much faster! How can you say that?

Well, it's possible that test is running afoul of microbenchmark issues and not measuring what it's supposed to measure, thanks to the increased sophistication of today's JavaScript compilers. This is a big problem with a lot of the jsperf tests out there: they just aren't measuring anything useful, or at least not measuring what their creator thought they measured.
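Here's a hedged illustration of that failure mode; it's not the actual jsperf test, just an invented example. Because nothing ever observes the result of the loop body, a modern JIT is free to throw some or all of that work away, and the timer ends up clocking a nearly empty loop:

```js
// Invented example of a microbenchmark that may not measure what it seems to measure.
function benchmark() {
  var start = performance.now();
  for (var i = 0; i < 1e6; i++) {
    Math.sqrt(i);   // the result is never used, so it's a candidate for dead-code elimination
  }
  return performance.now() - start;
}
console.log(benchmark() + " ms -- a suspiciously tiny number usually means the work was optimized away");
```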

But let's assume that in this case the JavaScript is actually doing just what the test case creator intended, and none of those microbenchmark oddities are occurring. Even so, the two test cases are not quite the same, because the API behaviors differ (see the sketch after this list):
  • The jQuery case works correctly for zero, one, or more elements in its collection. The classList case only works with one element selected, and will throw an error if no elements are selected.
  • The jQuery case can add or remove multiple classes at once. The pure DOM case will not, since classList does not support it.
  • The jQuery case runs on browsers that don't support classList. This includes the Android 2.3 browser (about 25 percent of all Android) and Internet Explorer before version 10 (about 36 percent of all IE).
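To make those differences concrete, here is a hedged sketch; the selector and class names are invented for illustration, not taken from the jsperf test:

```js
// jQuery: works for zero, one, or many matched elements, and accepts a space-separated list of classes.
$(".missing").addClass("active highlight");    // no elements matched: quietly does nothing

// Pure DOM: breaks on an empty match and won't take a space-separated list.
var el = document.querySelector(".missing");   // null when nothing matches
el.classList.add("active");                    // TypeError: can't read classList of null
el.classList.add("active highlight");          // even with an element, this throws -- spaces aren't valid in a class token
```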
Since jQuery provides wide browser support and a richer API for class operations, any attempt to use classList inside jQuery would still need supporting code around it to deal with older browsers plus the cases of multiple elements and multiple classes. That's more code to be downloaded and parsed each time any browser (including those without classList) loads a page with jQuery. That starts our space-vs-speed performance tradeoff in a hole that we may never dig out of.
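To illustrate what that supporting code means in practice (this is only a rough sketch of my own, not jQuery's actual source), a classList fast path inside an addClass-style method would still need branching like this:

```js
// Rough sketch only -- not jQuery's real implementation. The function name and structure are invented.
function addClass(elems, value) {
  var classes = (value || "").match(/\S+/g) || [];
  for (var i = 0; i < elems.length; i++) {
    var elem = elems[i];
    if (elem.nodeType !== 1) {
      continue;                                   // skip text and comment nodes
    }
    if (elem.classList) {
      for (var j = 0; j < classes.length; j++) {  // fast path: classList, one token at a time
        elem.classList.add(classes[j]);
      }
    } else {
      // fallback path for Android 2.3, old IE, and anything else without classList
      var cur = " " + (elem.className || "") + " ";
      for (var k = 0; k < classes.length; k++) {
        if (cur.indexOf(" " + classes[k] + " ") < 0) {
          cur += classes[k] + " ";
        }
      }
      elem.className = cur.replace(/^\s+|\s+$/g, "");
    }
  }
}
```

Every byte of that extra branching gets downloaded and parsed by every browser, including the ones that could never take the fast path.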

As for the speed, let's look at how long it takes to do this block of operations, ignoring that they consist of things you'd never do all at once such as adding and then immediately removing the same class. Looking at the per-block execution times on Chrome, the jQuery API takes only about 16 microseconds (just 1/1000 of our 16 millisecond frame length) to do those class operations. Sure, the native methods do it roughly four times faster, but are these 12 additional microseconds too much considering the extra features jQuery offers?

Now the classList advocate might say, "What if I call this code 1,000 times? Now it's taking 12 milliseconds and eating up most of my frame budget!" Typical real-life code would only manipulate the class property a few times a second at most, usually in response to user interaction. If you're really manipulating 1,000 DOM elements in JavaScript and expecting that to fit into a 16-millisecond chunk of work, you're likely running into all sorts of performance issues anyway. That's a problem with algorithms and design, unlikely to be magically solved by shaving a few microseconds off jQuery's class manipulation methods.

After just about any class change, the browser needs to recalculate styles and redraw at least part of the page. The cost of those operations will usually outweigh the cost of manipulating the class property, but they're not reflected in jsperf because the rendering engine runs on the same thread as the benchmark. The browser doesn't get a chance to re-render the page until after the timed block of code is complete. But even if it did, the trivial size of this document (just one test element) isn't typical of the workload the browser faces in recalculating styles and layouts.
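You can get a rough feel for that hidden cost outside of jsperf by forcing the work to happen synchronously: reading a layout-dependent property such as offsetHeight right after a class change makes the browser recalculate styles and layout on the spot. This is only a crude sketch (the class names are invented, and it still understates what a real, complex page has to do):

```js
// Crude sketch: compare a bare class change with one followed by a forced style recalc and layout.
var el = document.body;

var start = performance.now();
el.classList.add("demo-class-a");
var bareToggle = performance.now() - start;

start = performance.now();
el.classList.add("demo-class-b");
el.offsetHeight;                     // reading offsetHeight forces a synchronous style recalc + layout
var toggleWithLayout = performance.now() - start;

console.log("class change only: " + bareToggle + " ms, with forced layout: " + toggleWithLayout + " ms");
```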

In fringe cases where jQuery is too slow, you're in luck! jQuery doesn't place "Do Not Enter" signs around DOM methods. You can freely mix many DOM methods with jQuery. That's why event handlers have the target DOM element, not a jQuery collection, as their this keyword. In your checkbox event handler, you're free to use this.checked instead of $(this).prop("checked") -- and you should, it's shorter! Go ahead and use this.classList.add() directly if you don't care about old Android or old IE and have strong emotions about wasting microseconds.
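For example, something like this handler mixes the two freely; the form id and class names are hypothetical:

```js
// Mixing plain DOM and jQuery in one delegated handler; the id and class names are made up.
$("#signup-form").on("change", ":checkbox", function() {
  // `this` is the DOM checkbox element, so the DOM API is right at hand
  if (this.checked) {
    this.classList.add("is-checked");            // fine if you can skip old Android / old IE
  } else {
    this.classList.remove("is-checked");
  }
  // ...and jQuery is still there when you want the richer API
  $(this).closest("label").toggleClass("selected", this.checked);
});
```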

Knuth said, "We should forget about small efficiencies, say about 97 percent of the time: premature optimization is the root of all evil." The classList case is one of those small efficiencies that makes little difference to the performance of real code. There is no good reason for jQuery to blindly optimize exclusively for the CPU performance dimension that jsperf measures, particularly when it has a cost in code size and complexity. But the programmer's mentality often won't let go of these issues because, darn it, jQuery should use the native API regardless.

Bottom Line: Don't obsess on speeding up code that isn't slow. As tempting as it seems, jsperf.com is usually not the right tool for identifying browser bottlenecks in application code. Ultimately, the best way to make web pages and apps fast is to use tools like WebPageTest, followed by a profiling session in your favorite browser tools to pinpoint slow JavaScript. Don't waste your development time on the 97 percent.

Thursday, December 12, 2013

Amazon: You Know Better

For the past few days, Amazon has followed me around the Internet with this one advertisement for a rolling suitcase.

That's because the wheel bearings on my old rolling suitcase are just about shot; they're all screechy and wobbly.
It's been a good suitcase and traveled over the world, but it's just worn out.
So I went over and bought a nice suitcase from Amazon, using my Prime membership.

But I didn't get that suitcase; I got a different one. Amazon knows perfectly well which suitcase I got, since it was delivered today, and it's nicer than this one in my opinion. Yet Amazon continues to show me an ad for that suitcase as if to say, "Are you sure you made the right decision?"

I imagine that the infrastructure that Amazon uses to determine what product to show is pretty awesome technically. Yet Amazon is doing significantly worse than if it just showed a random product from some category other than rolling suitcases, and they should know that. My purchase process for any substantial product consists of two steps:
  1. Research product choices and prices.
  2. Buy the product.

If you catch me between 1 and 2, the ad could make a difference. Once I've reached step 2? Forget it, you're wasting your time and money.

So rather than grabbing an opportunity to sell more product, Amazon is running the risk of creeping me out by reminding me that they are following me everywhere. I actually don't mind the concept of customized ads, but this implementation is a complete waste of ad space.