Saturday, 27 December 2008

Ship's Blog

We just got back from our Christmas boat trip with extended family. It was fantastic. I've created a custom Google Map showing the places we visited. The weather was mostly very good, although on Tuesday it rained a fair bit (especially Tuesday night) and strong south-easterly winds kept us holed up in Bon Accord Harbour. Christmas Day and Boxing Day, on the other hand, were sheer magic. Visibility was amazing; from Sullivan's Bay we could clearly see the tip of Castle Rock above the horizon --- 44 nautical miles away (that's about 50 miles, or 80 km). A Christmas Eve cruise up the Mahurangi River right up to Warkworth was another highlight. We did lots of walks, ate lots of food, swam, fished, rowed, mucked around on the beach, and played games. I hardly thought about Web browsers at all.


weka

The grounds of Mansion House have a lot of fairly tame wekas and peacocks.


Beehive Island

Beehive Island seen from Kawau with the storm gathering behind it.


Castle Rock

Taken from Sullivan's Bay on the evening of Christmas Day; Castle Rock is the peak jutting out above the horizon, to the right of Pudding Island in the foreground.


Sullivan's Bay

Sullivan's Bay on Boxing Day --- there were lots of day trippers on Christmas Day, but it was much quieter the next day, though plenty of campers were still about.

Saturday, 20 December 2008

Offline

I'm going to be offline until December 27. If you need anything from me, rent a boat and search for me somewhere in Kawau Bay.

From then I plan to be sporadically online until January 5, when I should be back operating at full power.



Sunday, 14 December 2008

New Game, Old Rules


Monday, 8 December 2008

Reftests

Before I get into what this post is really about, let me heap praise on reftests. David Baron came up with the idea of writing automated tests for layout and rendering where each test comprises two pages, and the test asserts that the renderings of the two pages are identical. This works much better than comparing test pages to reference images (although you can use an image as a reference if you want), because you can easily write tests that work no matter what fonts are present, what the platform form controls look like, what the platform's antialiasing behaviour is, and so on. There are almost always many ways to achieve a particular rendering effect in a Web page, so it's very easy to write reftests for parsing, layout, and many rendering effects.

There are also tricks we've learned to overcome some problems; for example, if there are a few pixels in the page whose rendering is allowed to vary, you can exclude them from being tested just by placing a "censoring" element over them in both the test and reference pages. In dire circumstances we can even use SVG filters to pixel-process test output to avoid spurious failures. Sometimes when there are test failures that aren't visible to the naked eye (e.g. tiny differences in color channel values), it's tempting to introduce some kind of tolerance threshold to the reftest framework, but so far we've always been able to tweak the tests to avoid those problems, so I strongly resist adding tolerances. They should not be needed, and adding them would open a big can of worms.
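
To make this concrete, here's a minimal sketch of a reftest pair (the file names are made up). Gecko's reftest manifests list tests as lines of the form "== test ref" (there are other operators, such as "!="), and the harness asserts that the two pages render identically:

  test-green-square.html --- draws a green square using a border trick:

    <!DOCTYPE html>
    <!-- 0x0 content box; the four 50px borders meet to fill a 100x100 green square -->
    <div style="width: 0; height: 0; border: 50px solid green;"></div>

  ref-green-square.html --- the same square drawn the obvious way:

    <!DOCTYPE html>
    <div style="width: 100px; height: 100px; background: green;"></div>

  reftest.list --- the manifest entry:

    == test-green-square.html ref-green-square.html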

The reftest approach may seem obvious but it definitely isn't, because other browsers don't seem to use it. I don't know why. Comparing against reference images only makes sense in very limited circumstances. Dumping an internal representation of the page layout and comparing that against a reference makes life difficult if you want to change your internal representation, and skips testing a lot of the rendering path. Reftests don't even depend on which engine you're using --- you can run most of our reftests in any browser. One argument that has been made against reftests is that someone might introduce a bug that breaks the test and the reference in the same way, so tests pass. That is possible, but if feature X regresses and all reftests pass, that just means you should have had a test specifically for feature X (where the test page uses feature X but the reference doesn't).

Anyway, the problem at hand: quite frequently we fix bugs where a particular page is triggering pathological performance problems. For example, we might switch to a slightly more complex algorithm with better asymptotic bounds, or we might be a little more careful about caching or invalidating some data. Unfortunately we don't have any good way to create automated tests for such fixes. Our major performance test frameworks are not suitable because these pages are not so important that we will refuse to accept any regression; we just want to make sure they don't get "too bad". We also don't want hundreds of numbers that must be compared manually, and it's not clear how to choose baseline numbers for automatic comparison.

One crazy idea I've had is performance reftests. So you create a test page and a reference, and the test asserts that the test page execution time is within some constant factor of the reference's execution time. In this case we would definitely need to introduce a tolerance threshold. One problem is that a slow machine getting temporarily stuck (e.g. in pageout) would easily cause a spurious failure. So perhaps we could measure metrics other than wall-clock execution time. For example, we could measure user-mode CPU times and assert they match within a constant factor. We could instrument the memory allocator and assert that memory footprint is within a constant factor. I'm not really sure if this would work, but it would be an interesting project for someone to experiment on, perhaps in an academic setting.
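
Purely as a thought experiment --- nothing like this exists --- a performance reftest might look something like the JavaScript sketch below. It uses wall-clock time and the modern performance.now() API rather than the CPU-time or memory metrics suggested above, and the iframe IDs and tolerance factor are invented for the example:

    <script>
    // Hypothetical "performance reftest": assert that laying out the test page
    // takes no more than FACTOR times as long as laying out the reference page.
    const FACTOR = 3;   // tolerance --- some constant factor, tuned per test

    function timeLayout(frame) {
      const doc = frame.contentDocument;
      // Dirty the layout so the flush below actually does work.
      doc.body.style.width = ((parseInt(doc.body.style.width) || 600) + 1) + "px";
      const start = performance.now();
      doc.body.offsetHeight;   // reading offsetHeight forces a synchronous layout flush
      return performance.now() - start;
    }

    function runPerfReftest() {
      const testTime = timeLayout(document.getElementById("test-frame"));
      const refTime = timeLayout(document.getElementById("ref-frame"));
      if (testTime > refTime * FACTOR) {
        throw new Error("FAIL: " + testTime + "ms vs " + refTime + "ms");
      }
      return "PASS";
    }
    </script>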



Places I've Been


Google's Street View is cool, especially now that it covers New Zealand.

Places I lived:


  1. 5612 Fair Oaks St, Pittsburgh (first 2 years of grad school --- with Andrew, David, Scott and Herbie)
  2. 350 South Highland Ave, Pittsburgh (next 2.5 years of grad school --- with Andrew, David, Scott, Herbie, Ted, and David M!)
  3. 5726 Beacon St, Pittsburgh (remaining 2.5 years of grad school --- my wife didn't want to move in with those guys)
  4. 89 N Broadway, White Plains (first 6 months at IBM)
  5. 81 Holland Ave, White Plains (3 more years at IBM)

Where I live now was actually missed by Google's Street View pass over Auckland. Fine with me.

Places I worked:


  1. Wean Hall, Carnegie Mellon University, Pittsburgh (visible at the end of the driveway)
  2. IBM Research, 81 Skyline Drive, Hawthorne, NY (Street View hasn't covered this yet)
  3. Novell, 39 Market Place, Viaduct, Auckland (pan right for the incomparable Sunshine Chinese restaurant)
  4. Mozilla, 83 Nelson St, Auckland
  5. Mozilla, 8 Kent St, Newmarket, Auckland (current location)


Mozilla NZ

The last couple of weeks were quite exciting in the Mozilla Auckland office: interns Michael Ventnor and Brian Birtles started, and new hire Jonathan Kew was visiting. Michael has been working on bugs related to text-shadow and box-shadow, and is looking for more GTK integration projects. Brian will be working on SMIL. Jonathan is working on text and fonts; his current project is implementing Core Text integration for Mac while fixing some blocker bugs on the side. Everyone else is busy with their usual stuff; Karl just landed downloadable fonts support for GTK/Pango on trunk.

On Friday we went out for a Christmas-y dinner bash at Al Dente.


Mozilla NZ dinner


West Coast

After getting back from Waiheke we picked up Jonathan Kew and dragged him out west to the Piha area. I can't let people visit Auckland and miss out on the spectacular wild west coast. We strolled around on the beach, which was predictably crowded by NZ standards, being the first really warm Saturday of the summer. But we also did the Mercer Bay loop, a favourite 1-hour walk of mine along the clifftops just south of Piha --- no-one else in sight. You just have to be careful taking photos, since last year (or so) someone ill-advisedly stepped outside the fence and fell off the cliff.


View over the ocean from the Mercer Bay track


Waiheke

A little over a week ago my wife and I had a few days on Waiheke to celebrate our tenth wedding anniversary. We rented a car on the island and stayed in an apartment at Onetangi. The weather was fabulous and we had a great time. We didn't do much, but one day we walked from Cowes Bay Road to Waikopua Bay and then around the rocks to Man-O-War Bay and looped back along the road. The view from the eastern end of Waiheke across to Coromandel was fantastic. The second day we walked from Oneroa around the coast past Island Bay to Matiatia and then back to Oneroa. The first part of that walk is all vineyards, most of the rest is spectacular rocky coastline. That loop would be a good day trip since you can start and end at the Matiatia ferry and stop in Oneroa for refreshments. We ate out at Charley Farley's in Onetangi (good), Te Motu Vineyard (good, but surprisingly we were the only guests the whole time we were there), and the Lazy Lounge in Oneroa (also good).

View of Man-O-War Bay from the Cowes Bay Road



View of the 'island' in Island Bay


Wednesday, 26 November 2008

Miscellany

I'm going offline from Wednesday afternoon to Saturday --- 10th wedding anniversary.

I'm excited that FF3.1 should be released well in advance of IE8; of course, we have serious competition from other directions :-). The layout blocker list for 1.9.1 is currently at 51 bugs. However, 23 of those bugs are fixed by patches that are ready to land as soon as the tree reopens after beta2. Several other bugs have patches awaiting review. So I think we're in quite good shape heading towards the final release. On the other hand, video/audio currently has 15 blocker bugs --- partly because the spec keeps changing. We're definitely going to go down to the wire in that area. But overall I'm feeling more comfortable heading into the end-game than I ever have before. I think a few factors have contributed to that: great test infrastructure over the entire development cycle, no huge architectural changes in this cycle, and more development manpower.

We had a big discussion over the last couple of weeks about whether to restrict <video> and <audio> elements to playing media from the same domain by default (with Access Controls to let servers opt in to greater permissiveness). The result was to allow cross-domain media loads, with a few caveats.

Karl Tomlinson has CSS @font-face "src:url()" downloaded fonts working with GTK/Pango/Freetype, and should be landing right after beta2. (@font-face "src: local()" isn't done yet but should be soon.) The integration is deep; ligatures, kerning and other shaping features are supported to the full extent Pango allows. It's also integrated with fontconfig although the best way to interface with fontconfig is not obvious.
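
For reference, the two @font-face source forms mentioned above look like this in CSS (the font and file names are placeholders):

    @font-face {
      font-family: "MyHeadlineFont";
      src: local("Gentium"),         /* src:local() --- use an installed copy if present */
           url("fonts/gentium.ttf"); /* src:url() --- otherwise download the font file */
    }

    h1 { font-family: "MyHeadlineFont", serif; }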

One semi-controversial issue is what to display while a font is downloading. Should we display nothing, as Safari/Webkit does, or should we display text from the fallback font (e.g. the font used by a non @font-face-supporting browser)? There are arguments both ways.

It's summer so that means interns are arriving (hi Michael!). We've also got Jonathan Kew visiting from the UK. Jonathan's working on integrating Core Text into Gecko; Apple promises performance wins, so it'll be interesting to see how that works out. In any case we'll need it sooner or later since key parts of ATSUI won't be available in 64-bit Mac. It shouldn't be hard to support both ATSUI and Core Text, even switching at runtime depending on the OS version. I'd like to get that into Gecko 1.9.2.

There's a lot more interesting font and text work to do. We're running into limitations of platform shaping APIs. For example, we want to expose CSS properties that let authors control the use of OpenType features, e.g., to control the kinds of ligatures that are used --- already in Firefox 3 we've found that there's no ligature setting that satisfies all authors. Another problem with platform APIs is that we have performance-critical optimizations that depend on assumptions like "fonts don't perform contextual shaping across ASCII spaces" and "insertion of line breaks doesn't affect shaping". These assumptions could be wrong, so we need to be able to tell when they're wrong and turn them off when necessary (and only when necessary) ... but platform shaper APIs don't give us this information.
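
Purely to illustrate the kind of author control meant here --- no such property existed in Gecko at the time --- the CSS might look roughly like what later shipped as font-feature-settings:

    /* Hypothetical at the time of writing; shown in the shape of the later
       font-feature-settings syntax. */
    h1  { font-feature-settings: "dlig" 1; }  /* enable discretionary ligatures */
    pre { font-feature-settings: "liga" 0; }  /* disable standard ligatures */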

A promising way forward seems to be to invest in HarfBuzz: first use it with GTK instead of going through Pango, then ship and support it on Windows and Mac as well. The second step is not trivial since we will have to make it play well with platform rasterizers (GDI and Quartz) without significant performance overhead. But it should let us solve the above problems and create new opportunities, e.g., the ability to add support for whatever OpenType features we desire. For various reasons we'd want to continue supporting Core Text and Uniscribe for fonts and scripts that HarfBuzz doesn't support well.

There are lots of other exciting things to work on for 1.9.2. SMIL, SVG fonts and SVG images (<img> and CSS background-image) are high on the list, along with low-hanging CSS3 features like background-size, multiple backgrounds, maybe text-overflow. I think we'll see more work on deep architectural improvements than during the 1.9.1 cycle, although some of those will probably be too deep to actually make 1.9.2, assuming we keep it tight like we have for 1.9.1 --- I think we should!

One deep project is memory management. I fantasize about a world without refcounting and the cycle collector, having JS objects and C++ objects living in the same heap, with concurrent mark and sweep collection. I don't know how realistic that is but I hope we find out.

Amid all this, one thing that focuses me:

From everyone who has been given much, much will be demanded; and from the one who has been entrusted with much, much more will be asked.


Thursday, 6 November 2008

The Essence Of Web Applications

One theme for Web platform improvements over the last few years has been stripping away the limitations that have characterized Web applications compared to desktop applications. So we support drag-and-drop, we compile JS to machine code, we offer fancy graphics, we let Web apps run offline, we give them writable local storage. A natural question is: what, if any, are the essential differences between Web apps and "desktop" apps? Will they converge completely?

I don't think so. Here are features of Web apps that I think are essential:


  • URL addressing: You access a Web app by loading a URL. You can link to Web apps from other Web pages, you can save URLs in bookmarks or desktop shortcuts, and you can pass around URLs to your friends.
  • Always sandboxed: Users are not able to make good trust decisions, so apps must always be sandboxed. One of the themes of Web platform evolution is finding ways to design APIs and browser UI so that we can safely grant access to platform features to untrusted apps. For example, Web apps can't be granted general filesystem access, but we can satisfy most needs with per-domain persistent storage and the trusty <input type="file"> control to grant an app access to particular files.
  • Browser chrome: Web apps are used in the context of a browser. Forward and back buttons are available (and usually work!), providing a universal navigation UI across applications. Bookmarks, history, status, reload, stop, tab organization and so on are all available. A lot of designers wish they could escape from the browser chrome, but sandbox requirements make that unrealistic, and the comfort of common browser UI should not be disregarded.
  • Open box: Web apps have internal structure that is exposed to the browser and to tools based on the browser. This enables tools like Greasemonkey to reach into and manipulate Web content. It enables browser features like text zoom and "find in page". It enables search engines. It allows for user style sheets and other kinds of end-user customization of the Web experience. It also creates an environment of pseudo-open-source-software, where authors can learn how other apps work, even if they can't modify or reuse that code directly.


Tuesday, 4 November 2008

Grumble Grumble

I've been hacking up an implementation of interruptible reflow. Laying out large pages is sometimes a source of unacceptable delays. It would be a better user experience if, during a long layout, we could detect that there's pending user input, get out of the layout code, handle the user input, and then resume the layout. In principle this isn't that hard now that we use dirty bits to track what needs to be laid out. It's simply a matter of making sure the right dirty bits are set as we bail out of reflow, and then making sure we enter reflow again eventually to complete the unfinished work. We also have to make sure we accurately distinguish "interruptible" reflows from "uninterruptible" reflows. For example, if a reflow is triggered by script asking for the geometry of some element, that's uninterruptible since returning with some work undone would give incorrect results and break things.

I have a patch that basically works, at least for pages that are dominated by block layout. But I've run into a severe problem: I don't know how to detect pending user input on Mac OS X or GTK :-(. On Windows, for about 20 years there's been a function called GetInputState which does exactly what we need. OS X and GTK/X11 just don't have anything like it. I've tried AppKit's nextEventMatchingMask; it sometimes processes some events, which is unacceptable. X11 doesn't seem to provide a way to peek at the event queue without blocking; the only nonblocking event-queue-reading APIs always remove the event from the queue if they find one.

OS X and GTK suck, Windows rules. Prove me wrong, fanboys!



Monday, 27 October 2008

American Tidbits

It's been a crazy few days. I pulled into Boston late Wednesday night. On Thursday I hung out at MIT. I visited my friend's quantum computation lab; lasers and liquid helium make science so much more interesting. I gave a talk about Chronicle and Chronomancer to a pretty good-sized CSAIL audience. I had the honour of being hassled by Richard Stallman for suggesting that there was synergy between Chronicle and VMWare's record-and-replay features. (VMWare, as non-free software, is apparently never the solution to any problem.)

On Friday morning I took the train to Stamford and then visited the IBM Hawthorne lab to talk about the future of the open Web platform. My talk was too long so I sped up and skipped the demos (contrary to my own point that visual gratification is the driving force behind platform evolution). Still, it went well and I enjoyed catching up with a lot of my old colleagues.

On Saturday I was at another friend's wedding. It was too much fun hailing friends I hadn't seen for years and watching the multi-second transition from unrecognition, to recognition, to shock and exclamation. I left the party around 3pm and arrived at my hotel in Mountain View 13 hours later.

When I'm in the Bay Area I get my Sunday fix at Home Of Christ 5. I go because a few very good friends go there, but also because "Home Of Christ Five" is the coolest name for a church ever (I'm not sure why). It's a very Silicon Valley church; they meet in a converted office building in an industrial park in Cupertino, right next to an Apple satellite. And this morning, the pastor compared the Christian struggle with sin to Boot Camp.

[Irony: a very attractive woman had just sat down next to me, and I was trying very hard to ignore this fact, when the pastor said "Now, turn to the person next to you and ask them if they struggle with sin!" You gotta be kidding me, Lord!]

Anyway, the HOC5 congregation is very friendly to newcomers. I know this because I only go there once or twice a year so naturally no-one ever remembers me and they're very nice every time :-).

I've watched some TV at times. The election coverage is appalling. Most of the "commentary" is clearly pushing one candidate or the other. Most of it's negative. I watched for hours and learned nothing about any significant differences between McCain and Obama's actual proposed policies. Most reporting is actually meta-news on the campaign itself, or even meta-meta-news on the media's coverage of the campaign. The worst is when pundits eat up screen time bemoaning the lack of meaningful coverage --- HELLO! Even the comic coverage isn't funny.

Another thing I noticed is that the news shows have so much animated rubbish --- scrolling tickers, bouncing icons, rotating stars. Larry King even has periodic full-screen zooming stars take over the screen, blotting out the actual picture, occurring seemingly at random while people are talking. It's impossible to concentrate on the actual content of the show (such as it is). What is the purpose of this?

Finally, in-flight movie summary:


  • The Happening OK.
  • The Forbidden Kingdom OK, but what a waste. Lose the white kid.
  • Get Smart OK.

Thank goodness I didn't pay for those. Better luck on the way home, I hope.


Tuesday, 21 October 2008

The Tragedy Of Naive Software Development

A friend of mine is doing a graduate project in geography and statistics. He's trying to do some kind of stochastic simulation of populations. His advisor suggested he get hold of another academic's simulator and adapt it for this project.

The problem is, the software is a disaster. It's a festering pile of copy-paste coding. There's lots of use of magic numbers instead of symbolic constants ("if (x == 27) y = 35;"). The names are all wrong, it's full of parallel arrays instead of records, there are no abstractions, there are no comments except for code that is commented out, half the code that isn't commented out is dead, and so on.

It gets worse. These people don't know how to use source control, which is why they comment code out so they can get it back if they need it. No-one told them about automated tests. They just make some changes, run the program (sometimes), and hope the output still looks OK.

This probably isn't anyone's fault. As far as I know, this was written by someone with no training and little experience who had to get a job done quickly. But I think this is not uncommon. I know other people who did research in, say, aeronautics but spent most of their time grappling with gcc and gdb. That is a colossal waste of resources.

What's the solution? Obviously anyone who is likely to depend on programming to get their project done needs to take some good programming classes, just as I'd need to take classes before anyone let me near a chemistry or biology lab. This means that someone would actually have to teach good but not all-consuming programming classes, which is pretty hard to do. But I think it's getting easier, because these days we have more best practices and rules of thumb that aren't bogus enterprise software process management --- principles that most people, even hardcore hackers, will agree on. (A side benefit of forcing people into those classes is that maybe some will discover they really like programming and have the epiphany that blood and gears will pass away, but software is all.)

There is some good news in this story. This disaster is written in Java, which is no panacea but at least the nastiest sorts of errors are off-limits. The horror of this program incarnated in full memory-corrupting C glory is too awful to contemplate. I'm also interested to see that Eclipse's Java environment is really helping amateur programmers. The always-instant, inline compiler error redness means that wrestling with compiler errors is not a conscious part of the development process. We are making progress. I would love to see inline marking of dead code, though.



Monday, 20 October 2008

October Travel

On Wednesday I'm taking off to the US for about 10 days. First I plan to visit Boston to see a few friends and give a talk at MIT about Chronicle on Thursday. Then I'm heading to New York on Friday where I'll give a talk at IBM Research about Web stuff (3pm, 1S-F40). On Saturday I'm at a friend's wedding. On Saturday night I fly back to California for a platform work week. Hopefully that week I'll also be able to attend the WHATWG social event in Mountain View.

It's going to be somewhat tiring and I probably won't be very responsive online until I get to California, but I should be quite responsive from then on --- especially if you manage to corner me in meatspace!



Sunday, 19 October 2008

Invalidation

Whenever content or style changes, Gecko has to ensure that the necessary regions of the window are repainted. This is generally done by calling nsIFrame::Invalidate to request the repainting of a rectangle relative to a particular frame. Each of these operations is nontrivial; these rectangles have to be translated up the frame tree into window coordinate space, which is tricky if there's an ancestor element using CSS transforms, or an SVG foreignObject ancestor. Ancestors with SVG filter effects can even cause the invalidation area to grow in tricky ways. I've always been surprised that Invalidate doesn't show up in profiles very often.

Worse than the performance issue, though, is that these Invalidate calls are smeared through layout and consequently there are lots of bugs --- mostly where we invalidate too little and leave "rendering turds" in windows, but also where we invalidate too much and do unnecessary repainting that slows things down. Part of the problem is that Gecko's invariants about who is responsible for invalidating what are complex and, in some cases, just plain wrong. However the problem is also fundamental: invalidation is in some sense duplication of information and code, because we already have code to paint, and in principle you could derive what needs to be invalidated from that code.

So, I've been tossing around the idea of doing just that. We already create frame display lists representing what needs to be rendered in a window. So in theory we can just keep around a copy of the display list for the window; whenever we need to repaint we can just create a new display list for the window, diff it against the old display list to see what's changed, and repaint that area.
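
Here's a toy sketch of the diffing idea, in JavaScript rather than Gecko's C++. The item structure --- a key identifying what an item draws plus a bounds rect --- is an assumption; a real implementation would also have to compare item contents and union the resulting rects into a region:

    // Given the previous and current display lists, return the rects that need
    // repainting: items that appeared, disappeared, or moved/resized.
    function computeRepaintRects(oldList, newList) {
      const oldByKey = new Map(oldList.map(item => [item.key, item]));
      const rects = [];

      for (const item of newList) {
        const old = oldByKey.get(item.key);
        if (!old) {
          rects.push(item.bounds);                  // newly visible item
        } else {
          if (!sameRect(old.bounds, item.bounds)) {
            rects.push(old.bounds, item.bounds);    // item moved or resized
          }
          oldByKey.delete(item.key);
        }
      }
      for (const old of oldByKey.values()) {
        rects.push(old.bounds);                     // item no longer visible
      }
      return rects;
    }

    function sameRect(a, b) {
      return a.x === b.x && a.y === b.y && a.width === b.width && a.height === b.height;
    }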

There are a few problems that make it not so easy. First, we need to know when we may need to repaint the window. In most cases that's fairly easy: we should repaint after every reflow or style change with a "repaint" hint. A few other cases, such as animated images, would need to signal their repainting needs explicitly. We'd have to deal with maintaining the "currently visible" display list in the face of content it refers to being deleted. We'd also need to update that display list to take account of scrolling.

The big question is performance. Display list construction is typically very cheap, but in pathological cases (when huge numbers of elements are visible at once) it can be slow, so small visual changes to such pages could get significantly slower than they are now. On the other hand, when most of the page area is changing, this scheme should be faster than what we do today, because the costs of invalidation will go away and we have to build a display list for the whole window at each paint anyway.

Another option is to do some kind of hybrid scheme where we make a little effort to keep track of what's changed --- perhaps just the frame subtree that was affected, possibly via dirty bits in the frames themselves --- and use that to bound the display list (re)construction.



Tuesday, 14 October 2008

SVG Bling Update

For those who don't follow the Web-Tech blog --- you should. But anyway, support for SVG filter, clip-path and mask on non-SVG content landed on Gecko trunk a while ago and is in Firefox 3.1 beta 1. Also, I've proposed these extensions to the SVG WG for standardization.

Even more exciting is that Boris Zbarsky did an awesome job of implementing external document references --- I'm not sure if that made beta 1, but it will definitely be in beta 2. This means that all code that uses nsReferencedElement to track which element is referenced by a given URI/fragment-ID now automatically supports referring to external resource documents --- i.e. URIs of the form foobar.xml#abc. And Robert Longson has done a great job of migrating our last remaining SVG URI-ref users to use nsReferencedElement --- that is, markers and textPath --- so as of today, all the places where we support SVG referring to elements by ID support referring to elements in external documents as well as the current document. (It also means they're all "live for ID changes" and safe to use with incremental loading of SVG documents.)

The combination of these features is particularly cool because it means you can now apply SVG filter/clip-path/mask in regular HTML (non-XHTML) documents by placing the effect definitions in an external SVG XML file.
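
For example, something along these lines should now work in a plain HTML document (the file and id names are made up):

  effects.svg --- an external SVG resource file defining the effects:

    <svg xmlns="http://www.w3.org/2000/svg">
      <filter id="soften">
        <feGaussianBlur stdDeviation="3"/>
      </filter>
      <clipPath id="round">
        <circle cx="100" cy="100" r="100"/>
      </clipPath>
    </svg>

  Then, in the HTML document's stylesheet:

    .fuzzy   { filter: url(effects.svg#soften); }
    .clipped { clip-path: url(effects.svg#round); }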

We're pretty much done with new features in Gecko 1.9.1 at this point. Looking beyond Gecko 1.9.1, we will be able to build on the external resource document loader to support SVG fonts (including via CSS @font-face) and SVG images (for CSS background-image etc., and HTML <img>). They should be a top priority for Gecko 1.9.2 or whatever it ends up being called.

At this point most of my "bling branch" has landed, except for two features: SVG paint servers (gradients and patterns) for non-SVG content, via CSS background-image, and the "use any element as a CSS background-image" feature. I'm not sure what to do with them. The former probably should land at some point, but it's not a high priority for me at the moment --- maybe I'll roll it into SVG background-image support, since they're closely related. For the latter, my current thinking is that some uses are adequately served with a CSS background-image referencing an SVG pattern containing a <foreignObject>, and other uses really demand an API that lets you specify a particular DOM node to render (e.g. to mirror a particular element in a particular IFRAME).

For that case, I think the way to go is to create a new element --- some sort of viewPort element that acts like a replaced element and renders the content of some other element. It would have an href attribute that lets you declaratively specify a URI for the element to render, but it could also have a setSource(node) API so that you can give it a specific DOM node to mirror. You could even have an allowEvents attribute that lets events pass through the looking-glass... Right now MozAfterPaint and canvas.drawWindow are the best way to do effects like that, but they're not optimal. (Although there are uses for MozAfterPaint that the putative viewPort element would not satisfy, such as paint flashing/logging for debugging tools.)
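
To be clear, the following is entirely hypothetical markup sketching that idea --- no such element exists:

    <!-- Declaratively mirror another element by reference... -->
    <viewPort href="#sidebar" allowEvents="false"></viewPort>

    <script>
      // ...or imperatively, handing it a specific DOM node to render,
      // e.g. an element inside a particular IFRAME:
      document.querySelector("viewPort")
              .setSource(frames[0].document.getElementById("map"));
    </script>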



Hating Pixels

Drawing an image on the screen should be a simple operation. However, in a browser engine, things get complicated because there are a number of subtle requirements involving subpixel layout, scaling, tiling, and device pixels. We've had lots of bugs where visual artifacts appear on sites at certain zoom levels; we've fixed most of them, but the code got really messy and some bugs remained. So several days ago I sat down and worked out what all our known requirements for image rendering are. Then I worked out an approach that would satisfy those requirements, and implemented it. As is often the case, the implementation revealed that some of my requirements were not strong enough. The resulting patch seems to fix all the bugs and is much, much simpler than our current code.

The problem at hand is to render a raster image to a pixel-based device at a specified subpixel-precise rectangle, possibly scaling or tiling the image to fill the rectangle. We control this by specifying two rectangles: a "fill rectangle" which is the area that should be filled with copies of the image, and an "initial rectangle" which specifies where one copy of the image is mapped to (thus establishing the entire grid of tiled images). There may also be a "dirty rectangle" outside of which we don't need to render. There are several requirements, the first three of which are actually more general than just image rendering:


  1. Horizontal or vertical edges (e.g., of background color, background image, border, foreground image, etc.) laid out so they're not precisely on pixel boundaries should generally be "snapped" during rendering to lie on pixel boundaries, so they look crisp and not fuzzy.
  2. All edges at the same subpixel location must be snapped (or not snapped) to the same pixel boundary. This includes multiple edges of the same element, edges of ancestor/descendant elements, and edges of elements without an ancestor/descendant relationship. Otherwise, you get nasty-looking seams or overlaps.
  3. Any two edges separated by a width that maps to an exact number of device pixels must snap to locations separated by the same amount (and in the same direction, of course). As far as possible, we want widths specified by the author to be honoured on the screen.

    In Gecko, we achieve the first three requirements by rounding the subpixel output rectangle top-left and bottom-right corners to device pixel boundaries, ensuring that the set of pixel centers remains unchanged.

  4. When content is rendered with different dirty rects, the pixel values where those rects overlap should be the same. Otherwise you get nasty visual artifacts when windows are partially repainted.
  5. Let the "ideal rendering" be what would be drawn to an "infinite resolution" device. This rendering would simply draw each image pixel as a rectangle on the device. Then image pixels which are not visible in the ideal rendering should not be sampled by the actual rendering. This requirement is important because in the real Web there's a lot of usage of background-position to slice a single image up into "sprites", and sampling outside the intended sprite produces really bad results. Note that a "sprite" could actually be a fixed number of copies of a tiled image...

    (This may need further explanation. Good image scaling algorithms compute output pixels by looking at a range of input pixels, not just a single image pixel. Thus, if you have an image that's half black and half white, and you use it as a CSS background for an element that should just be showing the half-black part, if you scale the whole thing naively the image scaling algorithm might look at some white pixels and you get some gray on an edge of your element.)

  6. The exact ratio of initial rectangle size in device pixels to image size in CSS pixels, along each axis, should be used as the scale factors when we transform the image for rendering. This is especially important when the ratios are 1; pixel-snapping logic should not make an image need scaling when it didn't already need scaling. It's also important for tiled images; a 5px-wide image that's filling a 50px-wide area should always get exactly 10 copies, no matter what scaling or snapping is happening.
  7. Here's a subtle one... Suppose we have an image that's being scaled to some fractional width, say 5.3px, and we're extracting some 20px-wide area out of the tiled surface. We can only pixel-align one vertical edge of the image, but which one? It turns out that if the author specified "background-position:right" you want to pixel-align the right edge of a particular image tile, but if they specified "background-position:left" you want to pixel-align the left edge of that image tile. So the image drawing algorithm needs an extra input parameter: an "anchor point" that, when mapped back to image pixels, is pixel-aligned in the final device output.

It turns out that given these requirements, extracting the simplest algorithm that satisfies them is pretty easy. For more details, see this wiki page. Our actual implementation is a little more complicated, mainly in the gfx layer where we don't have direct support for subimage sampling restrictions (a.k.a. "source clipping"), so we have to resort to cairo's EXTEND_PAD and/or temporary surfaces, with fast paths where possible and appropriate.
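
The rounding step described under requirement 3 is simple enough to sketch; here's a toy JavaScript version (the rect representation and the app-units-per-device-pixel parameter are assumptions, loosely modelled on Gecko's):

    // Snap a subpixel rectangle (in app units) to device pixel boundaries by
    // rounding each edge independently to the nearest pixel boundary. Edges at
    // the same subpixel position always snap the same way (requirement 2), and
    // the set of covered pixel centers is preserved.
    function snapRectToDevicePixels(rect, appUnitsPerDevPixel) {
      return {
        left:   Math.round(rect.left   / appUnitsPerDevPixel),
        top:    Math.round(rect.top    / appUnitsPerDevPixel),
        right:  Math.round(rect.right  / appUnitsPerDevPixel),
        bottom: Math.round(rect.bottom / appUnitsPerDevPixel)
      };   // result is in whole device pixels
    }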

Note: this algorithm has not actually been checked in yet, so we don't have battle-tested experience with it. However, we have pretty good test coverage in this area and it passed all the trunk tests with no change to the design, as well as new tests I wrote, so I'm pretty confident.



Sunday, 12 October 2008

Coromandel

I took a couple of days off and took off to Coromandel with the family --- we needed to do something with the kids during the school holidays, and I haven't been out that way for a very long time.

On Thursday we drove to Clevedon, then to Kawakawa Bay and around the coast to Waharau Regional Park for a short walk. We carried on around the Firth of Thames, taking a lunch stop near Thames, then continued north to Coromandel town. The road from Thames to Coromandel town is mostly right next to the shore and very pretty. (See the first picture below.)

We stopped at Coromandel town for a bit and then crossed the peninsula to Whitianga where we stayed in a motel, right across the road from the beach. Despite being the school holidays, the whole place was very quiet. I guess it's still a bit early in the season, or maybe it's the dreaded Financial Crisis. Anyway, it was lovely.

On Friday we drove around to Hahei and did the walk to Cathedral Cove (see second picture below). There's nothing to say about that that hasn't already been said --- amazing etc. Kayaking around it would be a great way to go but the kids aren't up to that yet. Had lunch on Hahei beach, then went back to Whitianga, took the ferry to the south side of the harbour and walked to the pa site on Whitianga Rock, then past Maramatotara Bay, up the hill and along the ridge to Flaxmill Bay and back to the ferry.

It's neat how the eastern and western Coromandel coasts have their own flavours. The eastern coast has the Pacific outlook and the best beaches, but the western coast has a bit more character.

This morning we drove back along the southern route, crossing the Coromandel Ranges near Tairua. We came pretty much straight back --- only took about two-and-a-half hours, so with a leisurely start we still got back to Auckland in time for yum cha. I love living here :-).


East coast of Coromandel, looking south



Looking north from the Cathedral Cove carpark along the coast


Friday, 3 October 2008

Interesting Developments In Program Recording

A few interesting developments tangentially related to Chronicle have emerged in the last week or so.

VMWare developers have announced Valgrind-RR, syscall-level record and replay for Valgrind. The idea is to isolate and record all sources of nondeterminism --- syscall results, CPU timer results, thread scheduling, etc --- so you can play back the execution deterministically to get the same results, but in Valgrind with other Valgrind tools turned on. This is great stuff. It actually complements Chronicle very well, because you could run your program first with Valgrind-RR, with less than 10X overhead (going by their numbers), and then rerun under Chronicle with higher overhead but guaranteed to get the same execution. So this would make Chronicle more useful for interactive programs.

VMWare has also announced improved replay for their VM record-and-replay functionality. That's cool, but what's especially interesting is that their Valgrind announcement hinted at possible future integration of Valgrind replay with VM recording. That's really the ultimate scenario for Chronicle: record your app in a VM at less than 2X overhead, then replay under Chronicle instrumentation at your leisure for an awesome debugging experience. You could even parallelize the replay under Chronicle to reduce that overhead by throwing hardware at it.

Another piece of excitement is that a partial port of Valgrind to Mac has been announced. I haven't tried it myself, but people say it can run Firefox opt builds and is close to being able to run Firefox debug builds. This means that at some point I'll probably be able to get Chronicle running on Mac!



Tuesday, 30 September 2008

Chronicle Update

I don't have any big-ticket items rushing the beta 1 freeze on Tuesday PDT, and I'm temporarily not slammed with reviews for other people's big-ticket items, so I'm investing a little time in Chronicle and Chronomancer, partly because other people keep trying to use them.

Today I updated Chronicle to Valgrind 3.3.1. This makes it work on more recent Linux distributions and fixes a few bugs. I also changed the Chronicle build system and file layout so that all the Chronicle support programs are built as part of Valgrind's build system. This simplifies the build and means that "make install" will now install Chronicle's programs somewhere useful. The steps for building and running Chronicle are now simpler and hopefully more robust.

Another thing I'd like to do is to make it easier to get started with Chronomancer. There are two main usage scenarios I want to address:


  1. User has no existing Eclipse project and wants to debug from a saved database
  2. User has no existing Eclipse project and wants to run their app under Chronicle and debug the results

For the first case I need a discoverable command to browse to a database file in the file system and spawn chronicle-query on it automatically. For the second case I need a discoverable command that prompts for a command line. The latter is tricky though since applications that interact with the console would get stuck unless I implement some kind of Eclipse console. It might be better supported by setting up a proper Eclipse run-target, or even having the user use CDT to set one up and then piggybacking that. Hmm. For now I think I should focus on the first situation since it's the one I care about the most and it's the most broadly applicable.

I'm also hoping to write some Gecko-specific debugging extensions. It would be interesting to visualize frame trees and find a way to somehow integrate their history at the same time.



Monday, 29 September 2008

Dear Nat Torkington

... and other attendees of the 2008 New Zealand Open Source Awards:

Ridley Scott, not James Cameron, directed Gladiator.

I just had to set the record straight. My claim that Nat was mistaken met considerable controversy at my table.



Friday, 26 September 2008

New Zealand Open Source Awards

I'm very grateful to have received an award for "Open Source Contributor" in last night's New Zealand Open Source Awards.

This kind of event --- with a fairly heavy business and government presence --- was rather foreign to me, but nevertheless the evening was a lot of fun.

In accepting, I mentioned that I'm already well rewarded for my open source contributions, and thanked all the people who labour away tirelessly on useful projects without much reward or recognition.



Monday, 22 September 2008

Whatipu

On Saturday our family went back out to Whatipu, out on the west coast near the Manukau Harbour entrance, for the first time in years. It was a marvelous spring day and we just had to go somewhere to spend all day outside. On the way there we took the short detour to Mt Donald McLean. The view there is amazing --- all the way back to Auckland in the east, with Rangitoto and the Sky Tower clearly visible, and to the south all the way to Mt Karioi near Raglan and even --- on an extremely clear day, which this wasn't --- to Mt Taranaki.

At Whatipu itself we checked out the sea caves to the north, then went to the beach itself for a walk around. The beach, dunes and swamp are all so immense. So are the headlands marching off into the distance on the southern side of the Manukau harbour. We also climbed the southern ridge to visit the Signal House lookout --- a great walk through scrub and manuka forest to another great vantage point overlooking the beach.

View from Mt Donald McLean of waves breaking on the sandbar at the entrance to the Manukau harbour:


Sandbar at the entrance to the Manukau Harbour with waves breaking on it

Headlands along the coast to the north of Whatipu:

Headlands


Sunday, 21 September 2008

Whiny Expats

I get grumpy when some NZer based overseas presumes to lecture his former compatriots about how to run the country. In the same vein I got extremely grumpy about the "lost generation" controversy several years ago. If these people cared, they'd put their money and energy where their mouths are, come back to NZ and actually do something positive instead of just absconding with their skills and upbringing and then hectoring those who remain here to make a contribution.

Their plans always seem to be self-serving and/or wrong, as well. At least the plans related to gestating high-tech industry, which I know something about, are invariably wrong.

I also get annoyed by the cringey way the media reports this stuff. It's classic gloomocrat material.

It's especially galling to be lectured from Russia. I hope no-one sees the Russian situation as a model for New Zealand's future!



Monday, 15 September 2008

CSS Transforms

We landed support for CSS Transforms over the weekend. This is our implementation of a spec proposed by Apple that allows authors to apply affine transforms (e.g. scaling and rotation) to the rendering of Web content. This feature is useful because the same effects usually can't be achieved using SVG and <foreignObject>: wrapping HTML content in an SVG wrapper tends to mess up CSS layout and other things...
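
For example (selector and values purely illustrative), with Gecko's prefixed property:

    /* Rotate and scale an ordinary HTML element */
    .badge {
      -moz-transform: rotate(-10deg) scale(1.5);
      -moz-transform-origin: bottom left;
    }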

This is almost entirely the work of one intern, Keith Schwarz. We've had a great bunch of interns this (northern) summer and Keith's work was really outstanding. Not only was there a lot of work to do in the style system and rendering pipeline, but various places in the code also had to be tested and modified to be transform-aware. Not only that, but we wanted to make sure that -moz-transform wasn't just hacked in, so we made Keith create general infrastructure for handling transformed content that can cover -moz-transform, SVG <foreignObject>, and whatever else is needed in the future. So we now have an nsIFrame::GetTransformMatrix API to express transformations within the frame hierarchy, which we can build on whenever we need that information.

There are still a few loose ends. Transforms applied to windowed plugins just don't work (unsurprisingly). Combobox dropdowns and other popups such as XUL menus aren't transformed --- fixing that is possible, using transparent widgets, but would be a ridiculous waste of time in my opinion. If someone mad wants to contribute a patch, I'd review it though :-). There are issues with the construction and placement of native widgets that can cause transient rendering glitches --- they should go away with the compositor. There are definitely performance issues, because many transforms will make you fall off all the graphics fast paths. A big chunk of those problems can only be solved with hardware-accelerated graphics.

Update Here's the source file for the demo.



Friday, 12 September 2008

Auckland Web Meetup Summary

A quick summary of tonight's meeting, before I forget...

The Microsoft program manager went first. There was projector trouble at the start, which was unfortunate because his talk ran over time and he didn't get to demo what are probably the best IE8 features: per-tab crash recovery and the privacy features.

The first part of his talk was about standards and CSS 2.1, but this part was short. I think that's mainly because this is just not his personal focus area, although part of the problem is that IE8's not doing a lot of exciting new stuff here. It's certainly great if they can improve the interop situation, but that doesn't create excitement in a talk.

Most of the talk was about UI improvements. He spent a lot of time on the Accelerators feature, which to be quite blunt is no big deal. Web Slices looks a bit more interesting. The Visual Search or whatever they call it --- per-site incremental searches showing results in a dropdown as you type --- looked cool.

There was a short demo of their new developer tool. It's incredible how much it looks like Firebug. Frankly I was surprised they haven't done more here --- they haven't even matched Firebug yet, it seems. But we still need to invest more here to maintain our lead and for the sake of the platform.

Our talk seemed to go pretty well, although it's always hard for me to tell when I'm speaking. The demos looked good --- we used a trunk build with just the addition of John Daggett's CSS downloaded fonts patch, so Acid3 was visually correct, and I demoed SVG effects for HTML which landed today (more about that another time). People sounded impressed, but it's always kind of embarrassing to show demos I designed myself to a gang of Web designers, since my graphic design skills are ... limited.

Our talk was far lower-level and more technical than the IE talk and we didn't talk at all about UI work. In fact, when I started showing actual CSS rules I suddenly got a feeling we might have pitched it wrong. However during the demos people asked to see the source, so I guess we were OK :-). (That also let me say "at Mozilla we like to show you the source", which I'm proud of coming up with off-the-cuff!)

Chris Double talked about JS performance, which was perhaps the trickiest part of our talk. We had a slide comparing Sunspider performance for the latest available browser builds. He talked about how performance is rapidly improving in most cases and politely ignored the "anomaly", although the audience obviously picked it up.

Chris finished by showing off his 8080 video game emulator in Javascript and <canvas>. It's a great way to end since it combines HTML5 features with fast JS performance to show what Web apps can do now. It will be even better when some <audio> bugs are fixed on trunk so the sound effects work.

Overall I was pleased. As I'd expected, this encounter was fairly easy since we have a strong story, especially for Web developers.

Update I've uploaded my slides and made some of the demos available (which I hereby place in the public domain).


They should all work in trunk Firefox builds. (The more complex SVG effects demos now also work on trunk: clipPath, mask, filter.) I've added -webkit and -o rules for the CSS3 bling and border-image demos, but I haven't tested them in those browsers.


Tuesday, 9 September 2008

Some Boring CSS/Acid3 Stuff

I landed the fix to bug 243519 last night. This fixes our treatment of the root element in CSS. The main thing it does is make the viewport (or more precisely, the "initial containing block"), not the root element, the default containing block for absolutely positioned elements. This was the last layout bug affecting the display of Acid3. The last remaining visual Acid3 bug is support for CSS @font-face downloadable fonts (which is well under way!).
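
A quick illustration of the change (markup invented for the example): with this fix, the absolutely positioned box below is placed 10px from the top-right corner of the viewport (the initial containing block), regardless of the margin and border on the root element:

    <!DOCTYPE html>
    <html style="margin: 50px; border: 5px solid gray;">
      <body>
        <!-- no positioned ancestor, so the containing block is the ICB -->
        <div style="position: absolute; top: 10px; right: 10px;">corner box</div>
      </body>
    </html>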

As a side effect, since it was pretty easy to do I enabled absolute, fixed and relative positioning of the root element. This is optional according to CSS but since Opera and Webkit do it, it's the right thing to do for interop. I haven't really thought of a use for it though. As a bonus we also got -moz-columns working on the root element, hooray!



Friday, 5 September 2008

Auckland Web Meetup

Chris Double and I will be speaking at the next Auckland Web Meetup on Thursday September 11 --- one week away. We'll be talking about Firefox 3.1 and the many new features and improvements it brings to Web developers.

When I sent the blurb to John Ballinger a few weeks ago, I added "Plus Tracemonkey, the world's fastest JS engine." I'm glad that that's still true! (Yeah, I know Sunspider is just one benchmark, but still.) Now I hope it's still true next Thursday :-).

Microsoft evangelists will be giving a presentation on IE8 at the same meeting. The whole meeting should be very interesting and a lot of fun --- especially for us, since I think that Firefox 3.1 has a strong story over IE8, especially for Web developers. I'm really looking forward to this.



Thursday, 4 September 2008

Chrome

A few people I know have asked me what I think about Google's Chrome browser.

Technically, it looks good on paper. There are some interesting architectural problems they haven't solved yet with the process separation model, especially with regard to windowless plugins, and also on Mac. These are problems that will be encountered by anyone doing process separation, so it will be interesting to see how that goes. V8 seems overhyped when you take into account the JS work being done by other browser vendors.

I'm not sure how the competitive landscape is going to play out. Mozilla's in a strong position now and the immediate future looks great. We just need to stay focused, keep making smart decisions, and keep shipping great software.

Overall, I'm actually really excited. No matter who gains and who loses, there's no doubt that this innovation and investment and energy is great for the Web (especially when it's delivered in free software).

(Admittedly, there are moments when I half-wish for a nice quiet job where I'm not competing against the world's biggest tech companies and I don't have to worry about the future of the Internet. But that would be so boring. Having a job I'm passionate about is worth the stress.)



Monday, 1 September 2008

Kitekite Falls

On Saturday we took advantage of a predicted break in the weather to head out to the west coast (an hour's drive each way) --- we hadn't been out there for a while. At Piha there was a stiff breeze and some spots of rain, so not a lot of people about, but nonetheless we had a great time. There was big surf and the offshore wind was whipping the tops off the waves most spectacularly. We mucked around on the beach at the foot of Lion Rock and then went back to the Glen Esk Road carpark to walk to Kitekite Falls.

I don't recall ever doing that walk before, and I was most impressed. It's a lovely walk, just half an hour each way beside the Glen Esk stream, and the falls are spectacular. I can't believe we've been to Piha so many times and not seen them! The track was in great condition even after all the rain we've had. The only problem was that to close the loop you have to cross the stream at the base of the falls. It's narrow so I think normally you can step or jump, but the water was running high so we had to cross on a judiciously placed log. It did add some excitement.


Piha beach, nearly deserted



Kitekite Falls


Wednesday, 20 August 2008

Disruption In Progress

It's embarrassing to repost something found on Slashdot, but here we go. First ASUS started shipping Splashtop, which lets you boot Linux and some apps from flash memory for instant-on Web surfing and media playing without having to launch Windows. Now Dell's in the game, this time with SLED10. I hope Dell's responding to market demand for this. It's an excellent path for Linux to eat Windows desktop share in a disruptive way. And of course, since they're shipping Firefox, it's effectively OEM distribution and market share for us. Benefits are flowing both ways; if Linux didn't have a decent Web browser --- or we hadn't held the Web open enough for non-IE browsers to be viable --- then this couldn't happen. Hopefully our branding helps them too. Fun times.

One more idea though --- if Firefox is already on the machine, and users are using it for instant-on, wouldn't it make sense to have it installed in Windows as well, for a more consistent user experience? Think about it, OEMs!



Tuesday, 19 August 2008

Analyzing Deer

I've been a Christian for nearly 16 years now and sometimes I'm concerned I'm finding it a bit stale. Bible studies, sermons, reading --- sometimes I feel like I've heard it all before. This is dangerous since I'm not nearly the saint I ought to be --- a lot of what I've heard I need to hear again, and do a better job of applying!

I can see several ways to try to grapple with this. One is to take on fresh challenges, and that's happening a little bit. With growing kids you can't really avoid it :-). Another way is to vary the routine and do things differently.

One little experiment I'm trying with a few friends, as a new topic of study, is to dig into some popular Christian songs, ancient and modern, and understand what they're really about. It's easy to sing along with your brain somewhat disengaged, taking it all far too lightly, or in some cases missing the point altogether (especially with the older hymns where the meaning has not carried over well into modern English), so I think it might be useful (and fun) to dig deeper for a change.

For example, "As The Deer" is quite popular:

As the deer pants for the water, so my soul longs after you.

You alone are my heart's desire and I long to worship you.

You alone are my strength, my shield, to you alone may my spirit yield.

You alone are my heart's desire and I long to worship You.

It plays as an uplifting, joyful song of worship. But the first line is straight from Psalm 42, and the context of the whole psalm is actually very harsh --- "My bones suffer mortal agony as my foes taunt me, saying to me all day long, 'Where is your God?'". This is not a happy Bambi* deer situation; the psalmist is at rock bottom, the "panting for water" is sheer desperation, someone in the desert at the end of their rope. Thinking of it that way definitely gives the song a different feel.

The rest of the song doesn't get any easier. "You alone are my heart's desire" --- who loves God so much more than anything else that they can honestly sing that? Not me. Most days I'm considerably more animated by a desire to fix Gecko bugs than by love of God. Now, the easiest way for me to make that line true would be to play it cool and disengage emotionally from what's around me, but that's definitely not the right idea; we're supposed to love God more, not others less. In fact, as far as I know, the most direct way to where I need to be is to be clobbered by a huge tragedy or crisis, which is presumably how we got Psalm 42. (That would help with the "stale" problem too.) But I'm not too keen on that, so, er, I'll try taking the long way around, thanks!

Of course, songs are not authoritative, so part of the job is evaluating where the songwriter might have got it wrong. Bummer if your favourite song turns out to be heresy. "As The Deer" seems OK.

* Yes, I'm aware that Bambi itself is not the happy deer situation pop culture remembers. It's rather ironic.



Friday, 15 August 2008

The Coming Battle Over Web Fonts

Bill Hill has posted an article about "font embedding" on the Web, pushing Microsoft's "Embedded OpenType" format as the only way to do font embedding that's acceptable to font foundries. The theme is that "font linking" as implemented by Safari (and soon Gecko and Opera) is likely to get people sued and would lead to widespread font theft, and EOT is a much better approach that is endorsed by Microsoft and other font vendors. Chris Wilson followed up.
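
For concreteness, "font linking" here just means an @font-face rule pointing at a bare font file --- something like the following sketch (the font name and URL are made up):

@font-face {
  font-family: "Foundry Grotesque"; /* made-up name */
  src: url(/fonts/foundry-grotesque.ttf); /* a bare TrueType/OpenType file */
}
h1 { font-family: "Foundry Grotesque", sans-serif; }

EOT uses the same @font-face syntax; the difference is only the file it points at (an obfuscated wrapper with an embedded domain list).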

One huge problem with Microsoft taking this position is that Silverlight does not support EOT but does support direct font linking. It is very hard to take Microsoft's EOT arguments seriously when their own Silverlight division is obviously not convinced. It is especially galling for Microsoft people to practically demand that other browsers adopt EOT, and scold Apple, while never mentioning Silverlight. It's plain hypocrisy. We need to keep pressing Microsoft hard on this issue.

Another big problem is that EOT is oversold. It is nothing more than trivial obfuscation of the font file plus some metadata asking the client to limit the domains in which the font file can be used. EOT proponents tout its font-subsetting features, but plain OpenType font files also support subsetting. The Ascender/Microsoft advocacy site suggests "EOTs are bound to a specific web page or site", which overstates the case; it's trivial for someone to unpack an EOT to obtain an unrestricted font, or to modify the domain list, or to modify a client application to ignore the domain restrictions.

The only protection of value is the ability to prevent site A from directly linking to fonts on site B, and that can also be obtained with bare font files, simply by imposing a same-origin restriction on font linking. (Even this won't be a big deal in practice; no commercial site is going to link to fonts on a random server which could be replaced with obscene glyphs at any time!) Safari 3.1 doesn't impose such a restriction but I'm in favour of it for Gecko. (We could support Access Controls for fonts so site B could permit less restrictive linking.)

The strongest argument for supporting EOT is that font foundries will sue Web authors who serve bare font files but they won't sue EOT users. That's a weak argument, since in the absence of meaningful practical differences it boils down to "font foundries are stupid" and I hope they aren't. But if that turns out to be the case, we may end up having to support EOT. Ick.

Another thing about this discussion that bugs me is the "embedded" vs "linking" terminology. "Embedded" sounds more tightly bound to particular documents than "linking", and therefore "better" ... but of course you do link to EOT files and the difference is illusory --- especially if we have a same-origin restriction on @font-face.

Update Chris Wilson comments "[Silverlight] needs the fonts to be in a package starting with SL 2.0, and that package would need to be opened (i.e. you can’t just link to fonts on other people’s systems)".

Silverlight has always imposed a same-domain restriction for font loading, and I've never claimed otherwise. As for the package requirement, in March, that wasn't true. I haven't been able to determine whether direct font linking as described in that post was removed in June's beta 2 (apparently the idea you should be able to create Silverlight apps using a text editor has fallen by the wayside), but it's not mentioned in the list of breaking changes in beta 2 (or here). So I dunno. Even if they've removed that, you can probably still use FontSource and WebClient to load a bare font file. Anyone want to experiment?

Anyway, requiring fonts to be placed in the application XAP file (which is actually just a ZIP file!) is a negligible increase in protection over the same-domain restriction Silverlight already had --- I don't see the point.



Monday, 11 August 2008

Eureka

What do you say when you engage a stranger in conversation, say at church, and want to get beyond the superficial? There are the standard utterly boring questions "what do you do?", "where do you live?", and the one I hate the most, "how are you?". Upon receiving the standard reply "good", I feel compelled to ask "why?", which probably tells people immediately that I'm an annoying freak, but I think it's good to be open about that.

Anyway, I've discovered a brilliant and simple alternative: "Tell me about yourself!" It's a most efficient way to find out what people think is important about themselves, and far more interesting than the standard questions. Yesterday a person told me straight away that he likes fishing.

Nerds For Jesus: bringing you original social-skills research since 1992.



Tuesday, 5 August 2008

My Summit

The best part of the summit for me was spending six hours on Whistler Mountain getting sunburned and scared out of my wits by the chairlift to the peak. But the Mozilla stuff was good too. I especially enjoyed meeting a lot of people I'd never been face-to-face with. In fact, I got a chance to talk to everyone on my "must meet" list except for two who didn't make it to the summit, and Michael Monreal. I'll probably embarrass myself by forgetting face-name mappings for most of them, but that's just the way my brain is.

The sessions were mostly pretty good. A little more structure and planning ahead might have helped, but I'm not sure. I wish we'd had a chance for a more systematic overview of development directions; the content/JS/layout/gfx meetings weren't quite enough because there are efforts that don't fit into those boxes and they fell through the cracks. Possibly we spent too much time talking about what we want to do and not enough time assigning people to work on it; then again, since the summit is also an event for volunteer contributors, maybe it's less suited to that than a Mozilla employee event would be.

I like the directions our GC and JS compilation work are heading.

Staying up till 3am on Tuesday night in Moz Cafe getting video landed was too much fun.

I need to work on being less self-centered; I was thinking far too much about impressing people.

I need to be disciplined and start working on Compositor as soon as possible. We should keep running with short release cycles, but that means some of us are going to have to forgo working on features for the next cycle.

Let's have the next summit in Europe. There were so many Europeans present it probably wouldn't cost much more.



Monday, 4 August 2008

Why Ogg Matters

Matt Asay doesn't understand why shipping Ogg Vorbis and Theora in Firefox is important. The answer is simple. Our goal is to enable unencumbered, royalty-free, open-source friendly audio and video playback on the Web. Shipping Vorbis and Theora will achieve that for over 100M Firefox users --- not everyone yet, but a good start! To reach the rest, we will keep turning people into Firefox users, and pressure Apple, Microsoft and other vendors to support Vorbis and Theora. Vendor pressure must come from content providers dedicated to making compelling content available in free formats (coupled with a superior playback experience in Firefox). Wikimedia has stepped up and hopefully others will follow.

In fact, we'd love to be able to ship open-source codecs for H.264 and VC-1, but that can't happen until the MPEG LA's patents expire, or MPEG LA decides to give up its patent licensing fees, or software patents are struck down by the US Supreme Court (and possibly other jurisdictions). It would be unwise to wait.

Let me provide a mini-FAQ covering some of the other questions that have been asked:

Isn't Theora inferior to H.264, so no-one will use it? Theora isn't bad on an absolute scale --- look at some demos to see for yourself. There is ongoing work to improve the encoder so it's even better. Even if it's slightly lower quality than H.264 at some bit rates, it's still going to be very useful to people who favour free formats on principle, or who need an open-source solution, or who want a solution that Just Works across platforms without plugins, or who just want a solution without licensing fees --- for example, if you just want a convenient way to use a video clip in a Web app. Look at modern bank ATM interfaces, for example, to get an idea of what people could be doing in Web apps.
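
To make "Just Works" concrete: with the codecs built in, putting a clip in a page takes nothing more than this (a sketch --- movie.ogg is a hypothetical Theora file on your own server):

<video src="movie.ogg" controls width="320" height="240">
  Fallback content for browsers without <video> support.
</video>

No plugin detection and no licensing fees.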

Since people can already play Vorbis and Theora in the browser by downloading a plugin, why is having them in Firefox important? Because the value to content providers and the pressure on other vendors depend entirely on these codecs being available to a lot of users --- and most users don't download codec plugins.

This is a great example of why Mozilla and Firefox are important. The Web needs a high-market-share browser vendor committed to free software and open standards across the board.

Will you get your pants sued off? We've taken legal advice. I don't know if we will talk about the results, but our actions speak loudly enough. Cutting Ogg support remains a last-resort option.



Thursday, 31 July 2008

Mixed News

Whistler seems to have been cut off from Vancouver. If this had happened on Thursday night, we'd have been rescheduling 400 flights. As it is, it's still unclear whether the road will be open by the time people have to leave on Friday. Apparently the alternative ways out are float plane or a six-hour road trip. This could be interesting.

In other news, Mike Shaver announced that we intend to ship Ogg Vorbis and Theora codecs for the HTML5 <audio> and <video> elements in Firefox 3.1. This is a huge step and I'm very proud that Mozilla is willing to take this on. I had the privilege of checking in and enabling Chris Double's Ogg patch last night, so this is enabled in the nightly builds. Check it out!

Update Apparently the road will be closed for five days. So we'll be bussing out the long way around. I'll probably have to leave at 11am for my 9pm flight...

Update #2 Apparently I'm leaving at 8am. 8 hours on the bus, 5 hours at the airport, 13 hours on the plane...



Thursday, 24 July 2008

SVG Filter Performance Improvements In Gecko 1.9.1

The first batch of work from my bling-branch to land on trunk is improvements to SVG filter performance. Having made filters applicable to HTML content, I didn't want them to totally suck performance-wise.

I chose to focus on testcases that use filters to make drop shadows, since that's a very common usage pattern. In particular I wanted to test scrolling of those pages, since people tend to notice slow update on scrolling more than an initial slow paint. I created a simple benchmark for this.

The first major piece of work was to micro-optimize the Gaussian blur inner loop. I tried a lot of experiments; some paid off and others didn't. I ended up speeding it up by about 10% --- not as much as I'd hoped --- but I did eliminate the use of a huge lookup table, which should save memory.

The next approach was to optimize Gaussian blur so that when the input surface only has an alpha channel (i.e. the color channels are all 0), we don't do any work for the color channels. This happens when the source is "SourceAlpha", as it is for typical shadow effects. First I did some major refactoring of the filters code so that various bits of metadata can be propagated around the SSA-converted filter primitive graph, instead of having a dynamic "image dictionary". Then the actual optimization was easy. This made us another 25% faster.
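
For reference, the kind of drop-shadow filter this optimization targets looks something like this (a minimal sketch):

<filter id="dropShadow" x="-20%" y="-20%" width="140%" height="140%">
  <!-- blur only the alpha channel of the source; the color channels stay 0 -->
  <feGaussianBlur in="SourceAlpha" stdDeviation="3" result="blur"/>
  <!-- offset the blurred alpha to form the shadow -->
  <feOffset in="blur" dx="4" dy="4" result="shadow"/>
  <!-- draw the original graphic on top of its shadow -->
  <feMerge>
    <feMergeNode in="shadow"/>
    <feMergeNode in="SourceGraphic"/>
  </feMerge>
</filter>

The blur is where the time goes, and with SourceAlpha as its input it can now skip the color channels entirely.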

As part of the refactoring I reduced the usage of intermediate surfaces --- we free a filter primitive output image as soon as we finish processing the last filter primitive that uses it as an input. This wasn't intended to improve performance but it did, by about 5%.

The next idea was to only run filter computation over the minimum area needed to correctly repaint the damage area, when only part of the window needs to be repainted --- important for scrolling, since when scrolling typically only a small sliver of the window is repainted. This is a bit tricky since filter primitives may need to consume a larger area of their input than their output, e.g., a blur may require the output area to be inflated by the blur radius to find the input area required. But I'd already implemented this knowledge for Firefox 3, to limit the size of the temporary surfaces we were allocating when a poor filter region was given by the author. It was just a matter of introducing damage area information into the mix. This gave us a 140% speedup! (By "speedup" here I mean the increase in the number of iterations of the test you can run in a given time limit.) In general this is a really good optimization because it means, for most filters, the time required to draw the filter is proportional to the size of the visible part of the filter, not proportional to the size of the filtered SVG objects. At this point I declared victory on the initial use case...

The final idea was to address a slightly different testcase. When only a small part of an image changes, but there's a filter applying to the whole image, we'd like to only have to recompute a small part of the filter. This is similar to the previous paragraph, and requires forward propagation along the filter primitive graph of bounding boxes of changed pixels. My fix here improved performance on that testcase by 70%.

There's still a lot more that could be done to improve filter performance. There are three obvious approaches:


  • Use CPU vector instructions such as SSE2
  • Perform run-time code generation to generate optimized code for particular filter instances
  • Use the GPU

You really want to support all three. You definitely need some sort of RTCG to perform loop fusion, so that instead of doing each filter primitive as a separate pass, you can minimize the number of passes over memory. If your code generator supports vector types and intrinsics, then it's easy to give it vector code as input and generate de-vectorized code if the CPU doesn't have the right vector instructions. And if you're super-cool you would allow the code generator to target the GPU for filter fragments where that makes sense.

However, at least as far as Gecko is concerned, this additional work will have to wait until filter performance rises in priority. (At that point hopefully we'll be able to reuse the JIT infrastructure being developed for JS.)



Tuesday, 22 July 2008

Wellington

Last week, during the school holidays, our family took a trip to Wellington. It's a pretty good winter destination, with plenty of indoor activities, especially the Te Papa museum. The weather wasn't too bad so we also got outside; notably we completed the "Southern Walkway" from Oriental Bay to Island Bay --- a significant achievement to have two small children walking for five hours. They're definitely following in my footsteps (so to speak).

Te Papa was good, but I have to say (speaking as a devotee of geological spectacle) that their Earthquake Room isn't quite as good as the Auckland Museum's Volcano Room. We went on the tour of the Beehive and other government buildings, which was considerably more interesting than I expected (and led by a man with a strong US accent, curiously). Overall Wellington was lots of fun.

We flew down but took the Overlander train back to Auckland. The train is slow --- it took us over 12 hours --- but a great experience for all. The views were magnificent even though it rained much of the time, we didn't encounter any snow, and the great volcanoes of the plateau were shrouded in cloud. That's two out of two times our family's been to the plateau and failed to see them, but we'll keep trying! Probably the thing to do is to wait for a big snowfall and clear conditions and jump on the train the very next day.

Here's a picture of Houghton Bay near the end of the Southern Walkway. It got a bit wet and windy near the end of the day but --- thank the Lord --- it wasn't cold.


Houghton Bay


Wednesday, 16 July 2008

ROC Scheduled Maintenance - Wednesday 16/7/2008 to Sunday 20/7/2008

It's the school holidays and our family's going down to Wellington for a few days for fun. We plan to fly down and take the train back, hopefully scoring great views both ways of the wintry volcanic plateau. (One of my more enduring early memories is seeing that area from the train, covered in snow.) In Wellington we plan to visit the Te Papa museum, which none of us have ever seen, and there should be plenty of other fun things to do.

I am going up to visit Victoria University briefly to talk to some people there. That should be a lot of fun too. Technically it might count as work, but it's the only work I'll be doing, since I will not be taking my laptop nor any other device I would use for Internet communication, so don't expect any response from me along the usual channels. I believe I'm pretty much on top of things at the moment, so hopefully no-one will be inconvenienced.

OK I probably lied in the last paragraph --- I won't be able to completely stop thinking about browser engines. Sigh. Definitely a sign of spiritual weakness.



Wednesday, 9 July 2008

Using Arbitrary Elements As Paint Servers

The latest feature in my bling branch is the ability to use any element as a paint-server CSS background for another element.

There are a few motivations for this feature. Probably the biggest usage of the canvas "drawWindow" API extension in Mozilla is to create thumbnails of Web content. The problem is, drawWindow necessarily creates "snapshot" thumbnails. Wouldn't it be cooler if there was an easy way to create live thumbnails --- essentially, extra live viewports into Web content? Now there is. It should be pretty easy to add this to S5 to get slide thumbnails, for example.

Another feature that's popular these days is reflections. With element-as-background, plus a small dose of transforms and masks, reflections are easy. A while ago Hyatt introduced a feature in Webkit to use a <canvas> as a CSS background; that falls out as a special case of element-as-background.

So here's an example:


An HTML element with a rotating canvas background

And here's the markup:

<!DOCTYPE HTML>
<html>
<head>
<style>
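/* use the element with id "d" (the canvas below) as the background paint for every <p> */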
p { background:url(#d); }
</style>
</head>
<body style="background:yellow;">
<p style="width:60%; border:1px solid black; margin-left:100px;">
"Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud
exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute
irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla
pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia
deserunt mollit anim id est laborum."
</p>
<canvas id="d" width="50" height="50"></canvas>
<script>
var d = document.getElementById("d");
var iteration = 0;
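// Redraw the canvas each frame, rotating the green square one degree per iteration;
// the paragraph using the canvas as its background updates live.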
function iterate() {
++iteration;
var ctx = d.getContext("2d");
ctx.save();
ctx.clearRect(0, 0, 50, 50);
ctx.translate(25,25);
ctx.rotate(Math.PI*iteration/180);
ctx.fillStyle = "lime";
ctx.fillRect(-10, -10, 20, 20);
ctx.restore();
setTimeout(iterate, 10);
}
iterate();
</script>
</body>
</html>

Unlike SVG paint servers, elements-as-backgrounds have an intrinsic size. Staying consistent with my earlier work on SVG effects for HTML, I define the intrinsic size as the bounding box of the border-boxes for the element.

As with SVG paint servers, an element-as-background is subject to all CSS background effects, including 'background-repeat' as in the example above.

Of course, the first thing any self-respecting computer scientist will think of when they read about this feature is, "can I exploit infinite recursion to create a hall-of-mirrors effect or sell exploits to gangsters?" No. We refuse to render an element-as-background if we're already in the middle of rendering the same element-as-background.

The builds I linked to in my previous post contain this feature. I've uploaded the above example and the reflection demo.

The next thing I have to do is to write up a spec proposal for all this work and get it discussed by the CSS and SVG working groups. Based on that feedback we'll figure out the best way to deliver this functionality in Gecko. Unfortunately, the approach of "make existing syntax applicable in more situations" is not amenable to using vendor prefixes to isolate experimental features.