Thursday 2 February 2012
The Problem With Counting Browser Features
css3test.com is doing the rounds. I get almost exactly the same results in Firefox trunk and Chrome Nightly (64% vs 63%). This gives me a chance to explain why this sort of testing tends to be bad for the Web, without sounding whiny and bitter :-).
The root of the problem is very simple, and explained right at the top of the page:
Caution: This test checks which CSS3 features the browser recognizes, not whether they are implemented correctly.
Thanks for the disclaimer, but it doesn't eliminate the problem. Whenever someone counts the features a browser supports and publishes the count as a single easy-to-read score, without much testing of how well those features actually work, they encourage browser developers to raise their score by shipping some kind of support for each tested feature. Because browser teams have limited resources, that effectively discourages fixing bugs in existing features and making sure new features are thoroughly specced, implemented and tested. I think this is bad for Web developers and bad for the Web itself.
If Web authors reward Web browsers for superficial but broken support for features, that's what they'll get.
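To make the distinction concrete, here's a minimal sketch (in TypeScript, with illustrative function names of my own, not css3test.com's actual code) of how recognition-only detection typically works: set a declaration on an element and see whether the parser kept it. A browser can pass this kind of check while rendering the feature incorrectly.

    // Recognition-only detection, assuming a browser DOM.
    function recognizesDeclaration(property: string, value: string): boolean {
      const el = document.createElement("div");
      el.style.setProperty(property, value);
      // A declaration the parser accepted survives the round trip;
      // an unrecognized one is silently dropped.
      return el.style.getPropertyValue(property) !== "";
    }

    // May report true even if the actual rendering of transforms is buggy.
    recognizesDeclaration("transform", "rotate(45deg)");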
Instead, I would like to see broad and deep test suites that really exercise the functionality of features, and people comparing the results of those suites across browsers. Microsoft does this, but of course they tend to publish only tests that IE passes. We need test suites with lots of tests from multiple vendors, and from Web authors too. (Why haven't cross-browser results for the official W3C CSS 2.1 test suite been widely published?)
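By contrast, a functional test exercises observable behavior. Here's a hypothetical sketch (not from any real suite) along the same lines as the example above: apply a transform and assert that the box actually moved, rather than only checking that the declaration parses.

    function transformActuallyWorks(): boolean {
      const el = document.createElement("div");
      el.style.cssText =
        "position: absolute; left: 0; top: 0; width: 100px; height: 100px; " +
        "transform: translateX(50px)";
      document.body.appendChild(el);
      // With no positioned ancestor and no scrolling, the untransformed box
      // sits at x = 0; a correct transform implementation puts it at x = 50.
      const x = el.getBoundingClientRect().left;
      document.body.removeChild(el);
      return Math.abs(x - 50) < 1;
    }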
I don't want to be too harsh on css3test.com. The site is lovely, it carries that disclaimer, and it goes a lot further than, say, html5test.com in testing the depth of support for each feature. So it actually represents good progress :-).