Tuesday, 20 January 2009

Tips For Getting Patches Landed

The checkin-needed queue is quite large these days. Here are a few tips for getting your checkin-needed patches landed quickly:


  • Produce your patch using "hg export" so it includes a commit message and your user identification (there's an example after this list). This not only makes the committer's job easier; it also means you're more likely to get correct attribution.
  • Make sure it's clear in the bug which patch(es) need to be committed.
  • Periodically check that it applies cleanly to the trunk and update the patch if it doesn't. If the patch is rotting because it's been stuck at checkin-needed too long, yell at someone.
  • If you've run tests locally (reftests, crashtests, mochitests, etc.) and you say so in the bug, people are more likely to check your patch in quickly. You don't have to have run them all (few people can run them on all platforms anyway), but the more tests you've run, the better. This is especially important if previous versions of the patch have been backed out due to test failures.
  • Work on giant patches that fix huge problems --- I try to check those in quickly because they rot fast. And I'll love you for it.
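
For reference, here's roughly what the "hg export" workflow looks like. This is just a sketch --- the bug number, commit message and file name are invented placeholders:

    # One-time setup: put your identity in ~/.hgrc so the patch
    # carries proper attribution:
    #   [ui]
    #   username = Jane Hacker <jane@example.org>

    hg commit -m "Bug 123456. Frobnicate the quux correctly. r=reviewer"
    hg export tip > bug123456.patch   # includes author and commit message

Then attach bug123456.patch to the bug and request checkin-needed as usual.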

Now, having said all that, it is our (Mozilla's) responsibility to be diligent about helping people get their patches landed. After all, it takes a lot of hard work just to get to the point of having a reviewed patch; we should respect that and be willing to clean up contributors' patches and work around little mistakes or inefficiencies.

I'm enjoying hg's ability to bundle up a large set of patches to be committed as a unit. I often commit at night NZ time, which is very late US time, so I often get a set of build slaves of my very own and reasonably quick turnaround on builds and test runs for the whole bundle of patches. By being careful not to commit similar patches in the same bundle, it's usually quite easy to tell what's guilty if something goes wrong.



Friday, 16 January 2009

Post-3.1 Plans

While people are working hard to nail down layout bugs for the Firefox 3.1 release, there's also exciting work happening on the trunk for post-3.1.


  • Benjamin Smedberg landed a big patch to make nsIFrame stop pretending to be reference-counted. It also replaces QueryInterface with a similar API called QueryFrame that doesn't need IIDs (sketched after this list). This is a nice cleanup and simplification.
  • Brian Birtles and Daniel Holbert have massively cleaned up and simplified the SMIL patch so that it's ready for landing. This is just the basic SMIL infrastructure --- considerably more work is required to implement all the features of SVG Animation --- but it's a great base to build on. SMIL support will be configured off by default; I think we should enable it when we've got enough implemented that we wouldn't feel embarrassed about shipping it.
  • Robert Longson and Craig Topper have been doing a bunch of SVG cleanup. In particular, they've been removing a lot of the existing XPCOM-heavy datatypes for SVG values such as animateable strings and viewboxes in favour of lightweight objects that implement the SVG DOM interfaces via tearoffs. This lays the groundwork for making all these attributes actually animateable using SMIL.
  • Fred Jendrzejewski has some patches that get rid of all our usage of the deprecated types nsStringArray and nsCStringArray in favour of nsTArray<nsString> and nsTArray<nsCString> (example below), improving consistency with the rest of our code and, as a bonus, making it significantly more efficient!
  • Jeremy Lea has a big patch ready to land that makes nsRect and nsIntRect different types, and likewise separates nsPoint/nsIntPoint, nsSize/nsIntSize and nsMargin/nsIntMargin. This gives us useful compile-time type checking to avoid mixing up integers (usually pixels) with our subpixel "app units" (1/60 of a CSS pixel); see the sketch below.
  • Boris Zbarsky has some nice changes for table-driven frame construction that shrink the code a lot and may eventually lead to performance wins.
  • Jonathan Kew has implemented a Core Text backend so we can use that new API on Mac instead of ATSUI.
  • Neil Deakin has a major focus rewrite in the works.
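
A few of those deserve a closer look. The QueryFrame idea looks roughly like this --- a hedged sketch of the pattern only, with invented names, not the actual Gecko code:

    // Each concrete frame class advertises a frame ID; a single virtual
    // hook returns a raw pointer (frames are not reference-counted).
    class nsIFrame {
    public:
      enum FrameIID { kTextFrameIID, kScrollableFrameIID /* ... */ };
      virtual void* QueryFrame(FrameIID aIID) { return nullptr; }
    };

    class nsTextFrame : public nsIFrame {
    public:
      static const FrameIID kFrameIID = kTextFrameIID;
      void* QueryFrame(FrameIID aIID) override {
        return aIID == kFrameIID ? this : nsIFrame::QueryFrame(aIID);
      }
    };

    // Typed helper: no IIDs, no AddRef/Release, just a checked downcast.
    template <typename FrameType>
    FrameType* do_QueryFrame(nsIFrame* aFrame) {
      return aFrame
          ? static_cast<FrameType*>(aFrame->QueryFrame(FrameType::kFrameIID))
          : nullptr;
    }

Compared to QueryInterface, there's no IID machinery and no pretend refcounting; asking for a type the frame isn't just gives you null.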
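
The nsStringArray change is mostly mechanical. Roughly (a sketch; the "before" half is from memory, so treat it as approximate):

    // Before: the deprecated string-specific container, which stores
    // heap-allocated nsString* elements.
    //   nsStringArray names;
    //   names.AppendString(NS_LITERAL_STRING("Gecko"));
    //   nsString* first = names.StringAt(0);

    // After: the generic array, storing the strings inline.
    nsTArray<nsString> names;
    names.AppendElement(NS_LITERAL_STRING("Gecko"));
    const nsString& first = names[0];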
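
And the point of the nsRect/nsIntRect split is that app units and integer pixels become distinct types, so mixing them up is a compile error instead of a silent rendering bug. A simplified sketch of the idea (not the real definitions; the conversion helper is invented, and real conversions round carefully rather than truncating):

    #include <cstdint>

    typedef int32_t nscoord;   // app units: 1/60 of a CSS pixel

    struct nsRect    { nscoord x, y, width, height; };  // app units
    struct nsIntRect { int32_t x, y, width, height; };  // integer pixels

    // Conversions must now be explicit:
    inline nsIntRect ToPixels(const nsRect& aRect, int32_t aAppUnitsPerPixel) {
      nsIntRect r = { aRect.x / aAppUnitsPerPixel,
                      aRect.y / aAppUnitsPerPixel,
                      aRect.width / aAppUnitsPerPixel,
                      aRect.height / aAppUnitsPerPixel };
      return r;
    }

    // void Paint(const nsRect& aDirty);
    // Paint(somePixelRect);   // used to compile; now it's a type error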

(There's more going on --- this is just what sprang to mind while I wrote the post.) We have more big architectural changes and features planned, which I'll talk about when they're further along.

For Gecko 1.9 (Firefox 3) we did a ton of architecture work, such as switching to cairo, and used most of our energy fixing bugs from those changes, so we didn't have much left to really exploit the new architecture. In Gecko 1.9.1 (Firefox 3.1) we were able to put in a lot of new engine features that were suddenly easy (e.g., text-shadow, border-image, SVG filters for HTML), but we didn't spend much time doing big architecture cleanups. I think that in Gecko 1.9.2 we're swinging back a bit to increase our investment in architectural improvements. But thanks to much better tests, and more developers, we should be able to keep the feature pipeline busy as well. I'm excited that we seem to have reached the point where we can get a lot of things done at once.



Wednesday, 14 January 2009

Mountain Fun

Disappointingly there was no major eruption, so we're back! On Sunday we drove down to National Park, on Monday we did the Tongariro Crossing, and on Tuesday we went up Mt Ruapehu on the chairlifts and then drove back to Auckland, stopping at Huka Falls and the Craters of the Moon thermal area on the way.

The Crossing is one of New Zealand's most famous walks, a 6-8 hour trek across Mt Tongariro, one of the active volcanoes of the North Island's central volcanic plateau. The weather was mixed --- it rained most of the second half of the walk --- but the trip was still spectacular. One particularly special feature was that early in the walk we had a very clear view all the way to Mt Taranaki.

Below, left to right are Michael Ventnor, Chris Double, me, Brian Birtles, Matthew Gregan and Karl Tomlinson. (Chris Pearce had a pressing engagement in another country.) The photo was taken at Blue Lake, courtesy of Matthew Gregan (here's his Flickr set).



Emerald Lake in the rain, with steaming fumaroles on the slopes behind it:


The Ketetahi Stream was in flood due to the rain; the water's grey colour is due to contamination by volcanic minerals from hot springs up the mountain.


Intern at play on the upper slopes of Mt Ruapehu:


Mt Ruapehu yesterday morning:



Sunday, 11 January 2009

Scheduled Downtime

The entire Mozilla Auckland office will be offline Sunday to Tuesday for an offsite "team-building exercise".

Since I organised it, of course it involves volcanoes. If there's a major eruption in the North Island in the next few days, ramp up MoCo hiring immediately.



Wednesday, 7 January 2009

Invalidation Reftests

I've written before about how awesome reftests are. Now they're even more awesome! Until now reftests have always worked by taking a snapshot of the entire window at the end of the test and comparing that to a snapshot of the reference page. However, there's a fairly large class of bugs which we call "invalidation bugs" --- they only show up when you redraw part of a page, and when you draw the whole page everything's fine. These are usually either bugs in figuring out which part of the page needs to be redrawn after some dynamic change, or bugs in the optimizations that try to skip drawing elements that don't intersect the area being redrawn. We had no way to write automated tests for these bugs. Now we do!

Reftests have long supported a feature called "reftest-wait". Tests whose root element has class "reftest-wait" do not finish when their load event fires. Instead, the reftest system waits for the "reftest-wait" class to be removed from the root element, then it takes the snapshot and moves to the next test. This lets a test finish loading and then change something dynamically to see if the layout and rendering after the incremental change is correct. I've extended "reftest-wait" so that we take a snapshot after the load event has fired and then as further incremental changes happen, areas of the snapshot are updated corresponding to the areas actually repainted in the window. (This is implemented in the reftest harness using MozAfterPaint.) Implementing this actually exposed a few invalidation bugs just using the reftest-wait tests in our existing test suite.

There's some subtlety around the timing of incremental changes. Invalidation and repainting can happen asynchronously, so it's possible that the load event causes a pending repaint of the entire window and a test script running after the load event might make an incremental change that is not really tested because a full repaint of the window happens anyway. To make sure incremental updates are reliably tested, the reftest harness takes responsibility for making sure invalidation and repainting are completely flushed out after the load event has fired. Then reftest fires a "MozReftestInvalidate" event at the test document's root element; this is the cue for the test to perform its dynamic updates to be sure their repainting will be properly tested.
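
Putting that together, an invalidation reftest looks roughly like this (a minimal sketch; the element and styling are invented):

    <!-- test page: starts red, turns green after an incremental change -->
    <html class="reftest-wait">
    <body>
    <div id="target" style="width:100px; height:100px; background:red;"></div>
    <script>
    function doTest() {
      // The dynamic change whose repainting we want to verify.
      document.getElementById("target").style.background = "green";
      // Removing "reftest-wait" tells the harness we're done; it compares
      // the incrementally updated snapshot against the reference.
      document.documentElement.removeAttribute("class");
    }
    // Only start once the harness signals that repainting from the
    // initial load has been completely flushed.
    document.documentElement.addEventListener("MozReftestInvalidate",
                                              doTest, false);
    </script>
    </body>
    </html>

The reference page is just the same markup with the div already green and no "reftest-wait" class. If invalidation after the style change is buggy --- say, part of the div never gets repainted --- the snapshot keeps stale red pixels and the test fails.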



Tuesday, 6 January 2009

Right And Wrong

Last week I noticed that the new Tollroad Web site was not using SSL, so user account details such as PINs and credit card numbers are transmitted in the clear, vulnerable to being intercepted by third parties. I sent an email to the contact address and got a stock reply; then I followed up again and got a less-stock reply that they'd "look into it". In today's Herald there's a story about the same issue.

Let's be clear: Brett Dooley is completely wrong. The site is insecure. They do not need to "reassure" the public, they need to fix the site. If it's true that all the banks set up for website transactions had "verified and certified all our banking arrangements", then either those "banking arrangements" excluded the site's form submission system, or the banks are fools.

It's very annoying that the Herald article presents it as a "he said, she said" difference of opinion from which no conclusions can be drawn --- presumably in some desire for "balance". The reporter could have and should have called out Dooley on his false statements.

What's especially bad is that incidents like this undermine the security of the entire Internet. Whenever people are told that it's OK to transmit sensitive information like credit card numbers through an insecure channel without the "browser lock", they're being trained to respond positively to phishers and other attacks on SSL site verification.

This is an unfortunate blunder, because everything else about the Puhoi toll road project seems extremely well done. Instead of requiring some kind of transponder device in cars, they just take photos of your licence plate and charge your account automatically. If you don't have an account you can use your cellphone to pay the charge up to three days after passing through, or you can visit a kiosk (away from the toll area itself) and pay cash. Overall it's a nice fully automated, low-overhead solution.

Update: Looks like the Tollroad people have seen reason.



Thursday, 1 January 2009

Wikipedia

Wikipedia's constant pleading for money is annoying, especially the latest round of "A Personal Appeal From Jimmy Wales". It would honestly be less annoying if they just ran real text ads, which I already know how to ignore without feeling guilty. And given the amount of traffic that goes to Wikipedia --- especially the monetizable sort, from people researching things --- I can only assume they'd haul in a huge amount of cash. Are they afraid of that, or what?