Thursday, 23 February 2012

Movie Overdose

We arrived in London safely and just a little behind schedule. No matter how frustrating aspects of air travel can be, I never cease to be amazed by, and grateful for, how easy and cheap it is to travel halfway around the world in a little over twenty-four hours.

As usual I spent most of the time watching movies and some TV.

  • The Debt: Interesting.
  • In Time: Intriguing premise, decent execution.
  • Boy: Quite good.
  • Network: Exceedingly dull and overwrought.
  • Man On Fire: OK execution of routine material.
  • Mural (a weird Chinese film featuring female spirits who don't understand men; apparently a genre): Difficult to evaluate.
  • The Night Watch: Not bad.
  • A Shot In The Dark: Tedious (like all the other Clouseau films).
  • The Big Sleep (1946): Brilliant. More great dialogue than all the other movies put together.

This movie binge reminded me how distorted a picture of the world mass media presents. Based on movies, you would conclude that the most common occupation is "professional assassin", that outbreaks of violence and murder tend to be ignored by the authorities, that there are no happy marriages, and that there are no Christians (barring the odd exorcist or villain). I fully understand that movies focus on the abnormal because it's more interesting, and that most people can distinguish fiction from reality, but I'm too much of a pessimist to imagine that our unconscious assumptions are perfectly firewalled from our movie and TV diet.

Friday, 17 February 2012

Upcoming Travel

Our whole family is going to Europe soon. We're planning to leave Tuesday night (Feb 21) and return March 17. I will mostly not be working, and will be mostly offline, but I expect to check email regularly and to be able to respond to some review requests and the occasional crisis. I'm planning to visit the new Mozilla London office on Tuesday February 28 and the Paris office on March 9, so I should get some work done on those days.

Sunday, 12 February 2012

Foo Camp, ECOOP, And Conferences

I've just attended another marvelous Foo Camp in Warkworth. As always, it was a great opportunity to meet people, mostly based in New Zealand, who are doing interesting things. One interesting attendee this year was Dave Dobbyn, whom I didn't get to talk to, but I was very pleased to hear him introduce himself as "follower of Jesus".

I do feel a bit drained after each Foo Camp. It's not so much the late nights, but the effort required to talk to people I don't know very well. Wandering around trying to figure out who might want to talk to me or which conversation group I should try to break into, I feel socially inept. This is all fine, since talking to people who are doing strange but cool things I know nothing about is one of the best things about Foo. I do, however, find other conferences easier --- conferences that are related to Mozilla, or academic computer science conferences where I can expect almost everyone to share my interests.

Speaking of computer science conferences, I've just finished reviewing my share of the papers for ECOOP 2012 --- I'm on the program committee. It was quite a lot of work, but very interesting. It's encouraging to see how much research attention is being given to JavaScript and browser-related problems nowadays.

Just recently, the PLDI 2012 accepted papers list came out. I was very pleased to see that Brian Hackett's paper on his JavaScript type inference infrastructure was accepted. It's great to see cutting-edge technology that actually ships in browsers being recognized by the academic community.

I'm also glad to see "Race Detection for Web Applications" was accepted. For a long time it's been obvious that Web applications make unwarranted assumptions about DOM events firing in a particular order, and when those assumptions are violated you often get nasty nondeterministic bugs. This is the cause of a lot of the "random orange" test failures we see on the Mozilla test farm. For a while I've thought it might be possible to adapt well-known techniques for dynamic data race detection in multithreaded programs (where I did research, once upon a time) to find these bugs in single-threaded, but nondeterministically event-driven, Web applications --- and I nagged researchers to look into it.

Now Manu, Julian, Martin and Boris have looked into it and have shown that this can work. I hope I can take credit for suggesting the problem to Manu; however, an under-appreciated fact of CS research is that good ideas are a dime a dozen and the hard work is all in problem selection, implementation and evaluation. So congratulations to Manu et al. for breaking new ground and hopefully kicking off something that will result in more robust Web applications and less "random orange"!
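
To make the failure mode concrete, here's a minimal JavaScript sketch of the kind of event-ordering assumption I mean (the names and the URL are invented for illustration; this example isn't taken from the paper):

    var userData = null;

    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/user.json");
    xhr.onload = function () {
      userData = JSON.parse(xhr.responseText);
    };
    xhr.send();

    // Racy: this assumes the XHR completes before the page finishes
    // loading. If "load" fires first, userData is still null and the
    // handler throws, but only sometimes, depending on network timing.
    window.addEventListener("load", function () {
      document.title = userData.name;
    });

A dynamic race detector for the Web has to treat the two handlers as logically concurrent, even though they run on a single thread, and flag the unsynchronized accesses to userData.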

To put the icing on the cake, I am very lucky to have been invited to give a keynote at ISMM 2012. It's in Beijing in June, colocated with PLDI and ECOOP over a whole week! I'm really looking forward to attending these conferences, catching up with my former (and some current) colleagues, and actually seeing the presentations of the above papers.

Friday, 10 February 2012

Alternatives To Supporting -webkit Prefixes In Other Engines

Since news spread that the non-WebKit browser vendors are moving towards supporting some -webkit-prefixed properties, lots of people are pushing back with alternative suggestions, for example Daniel Glazman. Many of these are good suggestions, but here's why they won't work for properties like transitions, animations, and transforms that are already in wide use:

  • Evangelize Web developers to not use prefixed properties on production sites. This is swimming upstream against trends in Web development and market forces. At every Web dev conference I've been to, and on almost every Web dev site (including browser vendors' own sites), Web developers are encouraged to use vendor-prefixed properties. Web developers who refrain will lose business to Web developers who don't. Ergo, not going to happen.
  • Evangelize Web developers to use prefixed properties with "all browsers' prefixes" (sometimes automatically through preprocessing tools or even CSS extensions); there's a sketch of this pattern after this list. This would help in some ways, but despite us browser vendors and a large chunk of the Web dev community pushing this approach for a long time, even with tool support, Web devs don't do it consistently enough. Even when we specifically contact big, clueful sites we have a good relationship with (like Google) to get this fixed, we often don't succeed. So I don't think one more passionate (angry?) round of evangelism is going to make the difference. Of course evangelism can't help with sites that are no longer actively developed.

    This approach has a big downside, too: it can severely limit the usefulness of prefixes in the first place. As soon as one browser implements a feature with given syntax, Web content can start assuming all other browsers use the same syntax and semantics, which forces those browsers to behave exactly as if the feature had been deployed unprefixed in the first place --- especially when Web developers include the unprefixed syntax as a "fallback" in case prefixed support is removed. Even if we could succeed at the evangelism, with this approach we might as well have not used prefixes in the first place.

  • Teach Web developers not to rely on vendor prefixes by dropping support for them once standard versions exist. WebKit people have already said they won't do this. Furthermore, it encourages Web developers to go the "all prefixes plus standard name" route for everything they do, which makes prefixes pointless, as explained above. Doesn't help with deployed content anyway.
  • Ignore the problem and hope that it goes away without us having to do anything distasteful. Um.
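
For concreteness, here's roughly what the "all prefixes" pattern from the second suggestion looks like (the selector and values are just illustrative):

    .box {
      -webkit-transition: opacity 0.3s; /* Safari, Chrome */
      -moz-transition: opacity 0.3s;    /* Firefox */
      -ms-transition: opacity 0.3s;     /* IE */
      -o-transition: opacity 0.3s;      /* Opera */
      transition: opacity 0.3s;         /* unprefixed "fallback" */
    }

Note how the unprefixed line at the end is exactly the "fallback" habit described above: once content like this is widespread, every engine is effectively locked into the unprefixed syntax anyway.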

Some of the suggestions for things we could do in the future are good, like disabling experimental features in browser releases, but they won't help us out of the current situation. Maybe I'll discuss them in another post.

Note that trying alternatives that end up failing will have a huge cost to the open Web: it will delay the onset of a multi-vendor mobile Web, which is critically needed for the open Web to survive. I hate having to support -webkit prefixes, but "desperate evangelism while the problem gets worse, followed by supporting -webkit prefixes" would be even worse.

Meanwhile, to the people who think the solution is for Mozilla and others to just "work harder" at evangelism, or implementation, or standards work --- I cast aspersions in your direction. You have no idea how hard we work. I personally have never worked harder in my life, and I know that's true for many other Mozilla people. We hire more people, but the workload increases even faster. Meanwhile, Apple won't even hire a single full-time standards editor to help their proposed CSS properties reach standardization, which is a large part of what got us into this mess in the first place.

Friday, 3 February 2012

The Problem With Counting Browser Features

css3test.com is doing the rounds. I get almost exactly the same results in Firefox trunk and Chrome Nightly (64% vs 63%). This gives me a chance to explain why this sort of testing tends to be bad for the Web, without sounding whiny and bitter :-).

The root of the problem is very simple, and explained right at the top of the page:

Caution: This test checks which CSS3 features the browser recognizes, not whether they are implemented correctly.

Thanks for the disclaimer, but it doesn't eliminate the problem. The problem is that whenever someone counts the number of features supported by a browser and reports that as a nice easy-to-read score, without doing much testing of how well the features work, they encourage browser developers to increase their score by shipping some kind of support for each tested feature. Because we have limited resources, that effectively discourages fixing bugs in existing features and making sure that new features are thoroughly specced, implemented and tested. I think this is bad for Web developers and bad for the Web itself.
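
To illustrate the distinction, here's a small JavaScript sketch of recognition-style detection, which is roughly what feature-counting tests do (the property chosen is just an example):

    var el = document.createElement("div");

    // Recognition check: if the parser kept a value for the property,
    // the browser "supports" it for scoring purposes.
    el.style.cssText = "transition: opacity 0.3s";
    var recognized = el.style.getPropertyValue("transition") !== "";

    // A correctness check would have to insert the element, change its
    // opacity, and sample getComputedStyle over time to verify that an
    // animation actually happens: far more work, which is why scores
    // based on recognition alone are so easy to game.

Passing the recognition check says nothing about whether transitions actually animate, interact correctly with other properties, or match the spec's timing model.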

If Web authors reward Web browsers for superficial but broken support for features, that's what they'll get.

Instead, I would like to see broad and deep test suites that really test the functionality of features, and people comparing the results of those test suites across browsers. Microsoft does this, but of course they tend to only publish tests that IE passes. We need test suites with lots of tests from multiple vendors, and Web authors too. (Why haven't cross-browser test results for the official W3C CSS 2.1 test suite been widely published?)

I don't want to come across as harsh on css3test.com. The site is lovely, it has that disclaimer, and it goes a lot further than, say, html5test.com in terms of testing the depth of support for a feature. So it actually represents good progress :-).