Sunday, 27 January 2008

Different Approaches To Compatibility Modes

Some people are comparing IE's support for compatibility-mode-switching to Firefox's "quirks mode" and suggesting that they're very similar in principle. In fact, they are not.

Firefox's "quirks mode" is defined as "standards mode, except for a fixed set of documented 'quirk' behaviours". This means that almost all bug fixes apply to both standards mode and quirks mode. Quirks mode is actually getting more consistent and more standards-compliant all the time! (Of course it can never be 100% standards compliant.) Every new feature we add is available in both modes. The code we need to implement quirks mode behaviours is basically fixed and does not grow significantly over time.

IE's compatibility modes are defined as "whatever IE version X did". They are not supposed to change over time, so the code required to implement all of them grows substantially with every IE release. Bugs in them are never fixed (except for crash and security bugs, I guess). I'm not sure whether new features will be added to modes other than the very latest one, but since that would be a lot of extra work and could destabilize compatibility, I'm guessing not.

Everyone has their reasons and I don't need to debate which way is better. But the philosophy, implementation and consequences of these approaches are very different.



Thursday, 24 January 2008

Travel

I'm leaving today to travel to California for Mozilla's "work week" next week. I'm arriving early because on Friday I'm visiting UC Berkeley to talk to people there about browsers and stuff. In particular I'm giving two open talks: one on Chronomancer at noon, and one on "Inside Firefox" at 2:30pm.

BEYOND TIME-TRAVEL DEBUGGING

"Omniscient debugging" proposes to record a complete program execution and let developers query the recording to debug their programs. Because all program state over all times is immediately available, omniscient debugging can directly support the basic debugging task of tracing effects back to causes. This technology is about to break through into the mainstream, thanks to improved implementation techniques, virtual machines, increases in disk capacity, and increasing processor core count. I will describe the design and implementation of "Chronicle", a prototype which shows that omniscient debugging of large, real-world applications such as Firefox is feasible on commodity hardware. In fact, the real challenge is to design a UI that can take maximum advantage of omniscience. I will demonstrate a prototype UI ("Chronomancer") and argue that "time travel debugging" is too limited a vision; superior debugging experiences can be obtained by integrating information across times into a single view.

INSIDE FIREFOX

Web browsers have become a primary application platform, arguably more important than traditional client operating systems. They are also a key security frontier, a vigorously competitive market, and a crucial front in the battle for free software and open standards. They're hot. I'll survey the architecture of Gecko, the browser engine that powers Firefox, and discuss how it has evolved to support the changing needs of Web applications and to address security and performance requirements. I'll discuss new directions for Web applications such as advanced graphics, offline execution, faster script execution, and parallel computation, and how we're improving Gecko to support them. I will talk about the huge engineering challenges we face and how we are addressing them --- and where we need help. There will be as many demos, rants and anecdotes as time permits.

Update: The Chronomancer talk is apparently at 380 Soda Hall.



Wednesday, 23 January 2008

Slipping The Ball And Chain

I argued in my last post that implementing IE's <meta> tag for opt-in engine selection puts an extremely heavy burden on browser development in the long term. Furthermore, I just don't see the need for it in Firefox. I meet up with Web developers a few times a year and I'm exposed to a lot of bug traffic, and I always ask the developers I meet whether they have problems with Firefox breaking their sites. So far I've not met one who rated this as an important issue. I'm not saying we don't break sites, or that site breakage is unimportant; I work very hard to fix reported regressions. I do think our users don't clamour for cast-iron compatibility the way IE users apparently do. There are a few possible reasons:


  • Lack of intranet penetration. Anecdotally, intranets are full of unmaintained, hairy Web content. Public sites with lots of users have high traffic and can justify maintenance; no-one cares if unmaintained, low-traffic public sites drop out of sight. Not so with intranet sites. Since we have pretty low market share in intranets, we don't see the problems there.
  • Setting developer expectations. We have always revved our engine on a regular basis and never promised, nor delivered, total compatibility. Developers understand this and have set their expectations accordingly.
  • Better historical adherence to standards. I think it's fair to say that IE's standards-breaking bugs have historically been a lot more severe than ours, at least since the Firefox resurrection. So when we fix our bugs to become more standards-compliant, that has a much smaller effect on Web sites.

What's remarkable is that we've not been hit by compatibility concerns even though, up to and including our latest shipping product, we had no serious test automation! Thanks to all the test automation work during the Firefox 3 cycle, we should be even better at compatibility in the future.

It seems clear that for now we have no market need for drastic multi-engine compatibility, and therefore there's no need to even consider the pain it would cause. One could argue that by slaving IE to the needs of the corporate intranet, Microsoft is actually hobbling it for the mass market.

People have raised the "archival format" issue ... how will archaeologists decipher the late-90s Web far in the future? I honestly think that for total compatibility the best approach is virtual machines running the software of the age. As I mentioned in my last post, even the best side-by-side-engine efforts can't actually guarantee total compatibility. I don't think this should be a goal for Firefox. Maybe if there was nothing else left to do...



<META HTTP-EQUIV="X-BALL-CHAIN">

The IEBlog predictably announces that Web developers will have to use a <meta> tag or HTTP header to get IE to treat a page with post-IE7 standards compliance. Obviously a lot of people are going to be upset about this. I'm actually just puzzled. I see the business argument for taking this approach in the short term, but in the long term, it seems to impose a crippling burden on IE development.

The logical way to use this tag is to ship multiple engines and use the tag to control which engine is used to render each document. You "freeze" each engine after its release, avoiding any further changes to it because each change could regress some site. Sounds simple and appealing, but there are huge problems:


  • Footprint. You're shipping a lot more code, and it grows a lot with each release. If the user browses a mix of pages, you'll actually execute a lot more code too. Good luck competing in the mobile space when you ship half a dozen engines and your competitors only need one.
  • Cross-version interactions. These engines have to talk to each other. I can have a document in one mode with IFRAMEs in other modes. These documents can even touch each other's DOMs, including layout-related DOM APIs. Architecture changes in a new engine might be constrained by, or impact on, the design of earlier engine releases. This raises the question of whether the DOM implementation, JS engine and other components are actually duplicated. If they are, problems multiply, but if they aren't, you can't guarantee compatibility.
  • Maintenance burden. The truth is that you can't ever actually freeze those old versions completely. If a security bug or crasher bug is found in any one of those engines, it must be fixed in each engine it occurs in. Those fixes can create compatibility problems, so your "compatibility guarantee" turns out to be a mirage. But you have successfully multiplied the cost of security fixes and testing those fixes by the number of engines you're supporting.
  • Attack surface. Each engine represents exposed attack surface. Sure, there's overlap in the code so you can't just add up the vulnerabilities, but each engine adds to your attack surface.

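In code, the contrast with the quirks-flag sketch above looks something like this purely hypothetical C++ fragment (none of these names correspond to real IE internals): the opt-in tag selects one of several complete, frozen engines, every one of which has to be shipped, security-patched and kept interoperable indefinitely.

    #include <memory>
    #include <string>

    // Purely hypothetical sketch of version-targeted engine selection.
    struct LayoutEngine {
      virtual ~LayoutEngine() = default;
      virtual void Render(const std::string& html) = 0;
    };
    struct LegacyEngine : LayoutEngine { void Render(const std::string&) override {} };  // frozen
    struct IE7Engine    : LayoutEngine { void Render(const std::string&) override {} };  // frozen
    struct IE8Engine    : LayoutEngine { void Render(const std::string&) override {} };  // latest

    // The document's opt-in tag (or its absence) picks the engine. Each release
    // adds another case, and another whole engine to carry forever.
    std::unique_ptr<LayoutEngine> SelectEngine(const std::string& optInTag) {
      if (optInTag == "IE=8") return std::make_unique<IE8Engine>();
      if (optInTag == "IE=7") return std::make_unique<IE7Engine>();
      return std::make_unique<LegacyEngine>();   // no tag: old behaviour by default
    }
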
So does Microsoft have some magic technology that alleviates these problems? Beats me. I can imagine a tool that could find common code and merge it automatically, avoiding accidental behaviour changes, but that doesn't really help much. It'll be interesting to see how it plays out.

One Aaron Gustafson says "I, for one, hope other browser vendors join Microsoft in implementing this functionality." For the reasons above, and other reasons, I seriously doubt Firefox will be interested. I'll talk more about this in a follow-up post.



Subpixel Layout And Rendering

John Resig has discovered that Gecko does subpixel layout and rounds coordinates to device pixels for rendering.

He's right that in some sense, when you have to render a CSS layout to a screen with discrete pixels, all the options are imperfect and browsers are choosing different imperfect options. However, I think we need to explain the big picture a little better.

Gecko intentionally supports subpixel layout because for high resolution output devices, especially printers but also high-DPI screens that will become more common, one CSS "px" should be mapped to many device pixels, so you can in fact do sub-CSS-pixel rendering. For those devices, rounding layout units to CSS pixels is actually throwing away information and giving you a strictly worse layout than Gecko will give. For example, try printing John's example in FF3 beta. You should see that in the printout (or generated PDF), no rounding has occurred and each child DIV looks identical.

Because we think this is important and we don't want layout to vary unnecessarily across devices, we do subpixel layout on all devices. When we have to draw to a regular-DPI screen, we then have to round the edges of drawn objects to the nearest screen pixel. This explains the results John sees. Note that our approach of rounding at drawing time is optimal for avoiding gross layout changes due to rounding; it limits the impact of rounding to moving object edges by one pixel in some direction. It avoids gross layout changes like IE moving a DIV to the next line, or Safari leaving a 2px strip vacant at the end of the line. Thus I believe our approach is better than the alternatives in important ways.

The preceding paragraph is actually a slight oversimplification. We do have to do some rounding during layout, simply because computer arithmetic has limited precision. So during layout we round measurements to the nearest 1/60th of a CSS pixel. This number was chosen so that common fractions of a CSS pixel can be represented exactly --- for gory details, check out the great "units" bug and my comments about its landing. Note that rounding to 1/60th of a CSS pixel is far more benign than rounding to CSS pixels; 1/60th of a CSS pixel is approximately 1/5760th of an inch, not something most people are going to worry about!
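
To make the arithmetic concrete, here's a small sketch of the scheme (the constant and function names are illustrative, not Gecko's exact API): layout positions are stored as integers in units of 1/60th of a CSS pixel, and only when drawing do we scale to device pixels and round each edge to the nearest one.

    #include <cmath>
    #include <cstdio>

    // Sketch of the scheme described above; names are illustrative.
    const int kUnitsPerCSSPixel = 60;   // layout units: 1/60th of a CSS pixel

    // Layout stores integer units, so e.g. a third of a CSS pixel is exactly 20 units.
    int CSSPixelsToUnits(double cssPixels) {
      return (int)std::floor(cssPixels * kUnitsPerCSSPixel + 0.5);
    }

    // Only at drawing time is an edge snapped to the output device's pixel grid.
    int UnitsToDevicePixels(int units, double devPixelsPerCSSPixel) {
      double devPixels = units * devPixelsPerCSSPixel / kUnitsPerCSSPixel;
      return (int)std::floor(devPixels + 0.5);   // each edge rounds independently
    }

    int main() {
      int oneThird = CSSPixelsToUnits(1.0 / 3.0);                   // 20 units, exact
      std::printf("screen: %d px, printer: %d dots\n",
                  UnitsToDevicePixels(3 * oneThird, 1.0),           // 96dpi screen: 1
                  UnitsToDevicePixels(3 * oneThird, 600.0 / 96.0)); // 600dpi printer: 6
    }

On a 96dpi screen three such thirds snap to exactly one device pixel; on a 600dpi printer the same layout maps to six device dots with no information thrown away, which is the whole point of keeping the extra precision through layout.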

In practice, we have seen very few Web compatibility issues caused by this scheme. Web authors should just not worry about the rounding we do, and should not attempt to round coordinates themselves. The new getBoundingClientRect and getClientRects APIs in FF3 can return fractional coordinates; just go with the flow. Feel free to position elements at those fractional boundaries; they will line up visually. If you insist on consistent rendering down to the pixel, then the only way to go is to specify px values for everything, including line-heights, and avoid percentage units. Or better still, use SVG. Or a PNG.



Tuesday, 22 January 2008

GTK Printing

Michael Ventnor has integrated the GTK print dialog and spooling infrastructure into Gecko; for details and screenshots see his blog post. This is a big improvement in printing for Linux users in Firefox 3.

I'm very pleased because it's one of the features Nat wanted me to do while I was at Novell that I didn't get to.

I made Michael go through many iterations of code review and improvement, but it's probably not perfect yet, so test it and file bugs!



Thursday, 17 January 2008

Auckland Mozillans At Play

Today the Auckland Mozilla team did a kayaking trip down the Puhoi River. We had a tasty lunch at the "Art Of Cheese" cafe and then paddled rented kayaks first upstream a bit and then down to Wenderholm on the coast. The weather was superb --- warm and sunny. The Puhoi valley is very peaceful. We've had very little rain in the last month so the grass wasn't as iridescently green as usual, but it's still a beautiful area.


River Valley


Team


Auckland Mozillans

This new office should save some money:

Art of Cheese Hut

Pictured: Matthew Gregan, Karl Tomlinson, Michael Ventnor, Chris Double, Chris Pearce.

Wednesday, 9 January 2008

String Theory

Strings are perhaps the most important data type. They're probably the second most used after fixed-width integers, but they consume far more memory. Measurements of large Java apps such as Eclipse and WebSphere often show 30% or more of the live heap being consumed by strings. Not only are strings important, there are also many degrees of freedom in designing string representations and APIs, and you see large projects making a variety of choices, even though the impact of those choices is not widely understood. I believe strings are hugely understudied and there's a lot of interesting PhD-level research that could be done on them. Tragically, strings (like email and parsing) are considered "old hat" and unlikely to attract top-class research attention. Anyway, let me jot down a few thoughts that have been festering for a long time...

UTF-16 is the devil's work. Once upon a time the Unicode consortium promised that 16 bits would be enough space for all the characters of the world's languages. Everyone believed it and, since 16 bits is not too much more than the 8 bits people were already using for Latin text, everyone designed their "Unicode" APIs to work with 16-bit characters, because that was simpler than variable-width representations. Later, though, the consortium realized they'd made a wee mistake: 16 bits wasn't going to be enough. No-one wanted to change all their APIs, and using 32 bits per character is a hefty penalty for Latin text, so everyone redefined their 16-bit units to mean "UTF-16 code units", i.e., some characters are represented with two code units ("surrogate pairs"). The problem is that UTF-16 thus has basically the same complexity as a good variable-width format such as UTF-8, but uses double the memory of UTF-8 for Latin text.
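
To see why UTF-16 ends up variable-width anyway, here's the standard Unicode arithmetic for a character outside the Basic Multilingual Plane, as a small self-contained snippet:

    #include <cstdint>
    #include <cstdio>

    // Code points above U+FFFF don't fit in one 16-bit unit, so UTF-16 splits
    // them into a "surrogate pair" (standard Unicode encoding arithmetic).
    int EncodeUTF16(uint32_t codePoint, uint16_t out[2]) {
      if (codePoint < 0x10000) {
        out[0] = (uint16_t)codePoint;
        return 1;
      }
      uint32_t v = codePoint - 0x10000;            // 20 bits remain
      out[0] = (uint16_t)(0xD800 + (v >> 10));     // high (lead) surrogate
      out[1] = (uint16_t)(0xDC00 + (v & 0x3FF));   // low (trail) surrogate
      return 2;
    }

    int main() {
      uint16_t units[2];
      int len = EncodeUTF16(0x1D11E, units);       // U+1D11E MUSICAL SYMBOL G CLEF
      std::printf("%d units: %04X %04X\n", len, units[0], units[1]);
      // Prints "2 units: D834 DD1E": variable width after all, yet Latin text
      // still costs two bytes per character instead of UTF-8's one.
    }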

No-one really needs charAt. One of the major touted advantages of a uniform character size is that it makes charAt(index) trivially efficient. However, almost all the code I've ever seen that indexes into strings is actually iterating through strings (or using APIs that return indexes into strings, which could just as easily use iterators). Implementing efficient bidirectional iterators for UTF-8 is trivial. You only really need fast charAt for random access into strings, and the only code I know of that needs it is Boyer-Moore string search --- but that can be easily implemented efficiently for UTF-8 just by searching over UTF-8 code units instead of Unicode characters.
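
For the skeptical, here's roughly all the code a bidirectional UTF-8 iterator needs (a sketch over a raw byte pointer into a well-formed buffer; bounds and validity checks omitted). Continuation bytes always have the bit pattern 10xxxxxx, so character boundaries are exactly the bytes that are not continuation bytes.

    #include <cstdint>

    // Continuation bytes are 0x80..0xBF (bit pattern 10xxxxxx).
    static bool IsContinuation(uint8_t b) { return (b & 0xC0) == 0x80; }

    // Advance to the start of the next character.
    const uint8_t* NextChar(const uint8_t* p) {
      ++p;
      while (IsContinuation(*p)) ++p;
      return p;
    }

    // Step back to the start of the previous character.
    const uint8_t* PrevChar(const uint8_t* p) {
      --p;
      while (IsContinuation(*p)) --p;
      return p;
    }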

UTF-8 rules. UTF-16 is nearly as complex as UTF-8. UTF-16 is actually worse to code for because surrogate pairs are extremely rare, much rarer than multibyte UTF-8 characters, so you get code paths that are not well tested or code where surrogate pairs are not handled at all. For Latin text UTF-8 uses only half the space (note that even with non-Latin languages, programs still manipulate lots of Latin text internally), and you have the excellent property that ASCII text is valid UTF-8. In performance-critical code (such as our DOM implementation) you often see optimizations that switch between UTF-16 and "UCS-1" (text that's all characters 0-255, 8 bits per character), with code duplication for the "8-bit path" and the "UTF-16 path". With UTF-8 none of this is necessary, which is really good --- code duplication is evil, it requires new families of string types, forces programmers to make choices about string representations, and requires conversions that are potentially costly in code and time. UTF-8 is also easier to migrate to from ASCII: you just keep your 8-bit characters and redefine their meaning. The great irony is that platforms that implemented Unicode support late, such as the Linux kernel and glib/GTK+, saw all this and correctly chose UTF-8, while the early Unicode adopters (and those who must interoperate with them) are saddled with UTF-16 baggage.

charAt isn't going away. Tragically a lot of APIs, including JS APIs required for Web compatibility, rely heavily on charAt (although fortunately regexps are getting used more and more). Because it's actually indexing a UTF-16 buffer, charAt has the horrible specification "returns the n'th code unit in the UTF-16 representation of the string", and of course most code doesn't even bother trying to handle surrogate pairs correctly. Nevertheless, we're stuck with it, so we need an efficient implementation of UTF-16 charAt over a UTF-8 representation. The most obvious approach is some simple index caching: cache the results of charAt operations in (string, UTF-16 index, UTF-8 index) triples (e.g. by storing a UTF-16 index and a UTF-8 index in each string, or using a hash table). The charAt implementation can then look up the cache; "near matches" in the UTF-16 index in either direction can be used to derive the new charAt result. For simple code that's just incrementing or decrementing the index, this will work pretty well. (You'll need a little bit of hacking to handle cases when the UTF-16 index is in the middle of a surrogate pair, but it doesn't look hard.)
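
Here's a sketch of that caching idea (the names are hypothetical, not an existing implementation): each string remembers the last (UTF-16 index, UTF-8 byte offset) pair it resolved, so the common "i++" loop only ever advances one character per call.

    #include <cstddef>
    #include <cstdint>
    #include <string>

    struct CachedString {
      std::string utf8;       // the actual storage
      size_t lastUtf16 = 0;   // cached UTF-16 code-unit index...
      size_t lastUtf8 = 0;    // ...and the UTF-8 byte offset it corresponds to
    };

    // UTF-16 code units occupied by the character starting at this UTF-8 lead
    // byte: 4-byte UTF-8 sequences are exactly the ones needing a surrogate pair.
    static int Utf16Len(uint8_t lead) { return lead >= 0xF0 ? 2 : 1; }

    // UTF-8 bytes in the character starting at this lead byte.
    static int Utf8Len(uint8_t lead) {
      if (lead < 0x80) return 1;
      if (lead < 0xE0) return 2;
      if (lead < 0xF0) return 3;
      return 4;
    }

    // Map a UTF-16 index to a UTF-8 byte offset, walking from the cached position.
    // (Walking backwards and indices landing mid-surrogate-pair are omitted here.)
    size_t ResolveUtf16Index(CachedString& s, size_t utf16Index) {
      if (utf16Index < s.lastUtf16) { s.lastUtf16 = 0; s.lastUtf8 = 0; }  // naive reset
      while (s.lastUtf16 < utf16Index) {
        uint8_t lead = (uint8_t)s.utf8[s.lastUtf8];
        s.lastUtf16 += Utf16Len(lead);
        s.lastUtf8 += Utf8Len(lead);
      }
      return s.lastUtf8;
    }

A charAt built on this just decodes the character at the returned byte offset (picking the right half in the surrogate-pair case).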

You could make that work even better with some compiler wizardry. You want to identify integer variables that are being used as UTF-16 indices, and associate with each such variable a "shadow" UTF-8 index for each string that the variable indexes. These are a lot like regular loop induction variables. Then you add code to keep these variables in sync and use the UTF-8 indices for the actual string operations. If you do a good job you could get rid of the UTF-16 indices entirely in many cases. This approach extends to other APIs like "find" too.

There are a lot more choices to make in a string API beyond just the character representation. For example:


  • Should strings be mutable or immutable?
  • Should mutable strings be append-only or randomly mutable? (I don't see a need for random-access mutation although many APIs support it)
  • Should strings be able to share buffers (for fast substring operations)?
  • Should strings be able to span multiple buffers (for fast concatenation)?
  • Should you intern strings (i.e., force strings with the same contents to share a single buffer)? If so, when and how?
  • How should string buffer memory be managed?
  • How should thread safety be handled?
  • Should string buffers be null terminated, or have an explicit length field, or both?

I don't have strong opinions on those questions, although I think it would be very interesting to try just having a single string type that was UTF-8, mutable but append-only, no buffer sharing, spanning, or interning, compile-time (i.e. template) configurable with a certain number of characters directly in the string (nsAutoString style), falling back to the heap, no built-in thread safety, null terminated with an explicit length in bytes. Such a string would probably use a (length, buffer size, buffer pointer, 0-or-more-auto-bytes) format, and it would be fun to try optimizing the auto-buffer case by using the buffer size and buffer pointer memory as character data.
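
For concreteness, here's roughly what that single type could look like (a speculative sketch, not any existing class; copy/move support and the trick of reusing the capacity and pointer fields as character storage are omitted):

    #include <cstddef>
    #include <cstring>

    // Speculative sketch: UTF-8, append-only, null-terminated with an explicit
    // byte length, and InlineBytes of nsAutoString-style inline storage chosen
    // at compile time, falling back to the heap on overflow.
    template <size_t InlineBytes>
    class String8 {
      static_assert(InlineBytes >= 1, "need room for the null terminator");
    public:
      String8() { mBuf[0] = 0; }
      ~String8() { if (mData != mBuf) delete[] mData; }

      void Append(const char* utf8, size_t len) {
        if (mLength + len + 1 > mCapacity) Grow(mLength + len + 1);
        std::memcpy(mData + mLength, utf8, len);
        mLength += len;
        mData[mLength] = 0;                  // keep the null terminator
      }

      const char* get() const { return mData; }
      size_t Length() const { return mLength; }   // bytes, not characters

    private:
      void Grow(size_t needed) {
        size_t newCap = mCapacity * 2 > needed ? mCapacity * 2 : needed;
        char* newData = new char[newCap];
        std::memcpy(newData, mData, mLength + 1);
        if (mData != mBuf) delete[] mData;
        mData = newData;
        mCapacity = newCap;
      }

      size_t mLength = 0;                    // explicit length in bytes
      size_t mCapacity = InlineBytes;        // current buffer size
      char* mData = mBuf;                    // points at mBuf or a heap block
      char mBuf[InlineBytes];                // inline ("auto") storage
    };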

Experiments I'd like to see:


  • What percentage of memory is used by strings in Gecko (or whatever your favourite program is)?
  • How much of that memory is UTF-16 code units with a high zero byte?
  • How much memory would be saved (or wasted) if we used UTF-8?
  • What statistical patterns are there in API usage? Is there a clear need for different string types that optimize for different patterns?
  • Measure the code, time and space impacts of varying the above decisions!

The "memory" measurements should probably be memory-time products. For Gecko, you'd want to look at a variety of workloads, e.g., including a CJK-heavy page set.

Update: I've updated the post to use the term "code units" where I had incorrectly written "code points". Also, I want to mention that most strings manipulated by Gecko are small, so constant factors matter: simplicity of implementation is important for performance and for programmer understanding. The ability to stack-allocate small strings is also important.



Friday, 4 January 2008

The Dark Side Of The Moon

Slashdot links to a story of unknown veracity about Microsoft redesigning parts of their site to require Silverlight, presumably as a ploy to encourage Silverlight downloads/installs. What's interesting to me is how much credibility Silverlight gets for being cross-platform thanks to Moonlight, Miguel and co's open-source implementation of Silverlight.

So how does boosting Microsoft's chances of platform domination in the next generation serve the interests of free software? I really have no idea. Miguel can't be so naive as to imagine that Microsoft will allow their control of the Silverlight and .NET platforms to ever be eroded. (Sure, a subset of the .NET documentation got an ECMA stamp, but Miguel sees clearly that .NET evolution happens behind Microsoft's closed doors (ditto OOXML).)

It should be obvious how bad that would be for free software and competition in general. But in case it isn't: Microsoft gets to evolve the platform to suit its strategy, its constraints and its implementations, and no-one else does. Microsoft's products are always more up to date than any other implementations. And Microsoft's products are always the de facto reference implementations, so competitors have the extra burden of reverse engineering and implementing Microsoft's bugs and spec-ambiguous behaviours that authors depend on --- and even so, competitor implementations are by definition always less compatible. This is not theory, we've seen it happen over and over again. Microsoft even has a phrase for it --- "keeping the competition on a treadmill".

This is why true multi-vendor standards are so important, despite their inconvenience --- giving that power to any single company is dangerous. (Although if the dominant implementation is free software, the transparency of source code and the possibility of forking greatly reduce the dangers.)

There's also an interesting video codec issue here. Silverlight is a vehicle for pushing Microsoft's VC-1 codec. Microsoft is making that available for Moonlight as a binary blob with very restrictive licensing. Those people who like Moonlight because they can run it on BSD or some weird hardware platform are in for a surprise; in practice Moonlight will always be exactly as cross-platform as Microsoft wants it to be. I'm sure Microsoft would love to have their patent-protected codec ascendant and shoring up their platform lock.

A lot of these arguments apply to Mono itself of course, and have been thrashed out for years. But Mono at least had the promise of bringing Microsoft's existing captive developer community to Linux and free software platforms. Now Miguel is helping Microsoft enter a market where they aren't currently strong. I like and respect Miguel (this post was hard for me to write) but this strikes me as a very poor strategic move.



Thursday, 3 January 2008

Christmas Excursions

We've been out and about over the Christmas break. The weather has been fantastic. We spent nine days up north. We visited Matheson Bay:

Matheson Bay

The pohutukawas are stunning this year.

The walk from the beach up by the creek is short but excellent.

MathesonsCreek.jpg

Back in Auckland things are amazingly quiet, probably because everyone else is still out of town! Yesterday we went out to the Hunua Ranges for a walk and a picnic. (Until yesterday I'd only been there once or twice, and barely remembered it, which is silly given what a huge park it is within an hour of Auckland.) We visited the falls first:

HunuaFalls.jpg

Then we did the Cossey-Massey loop track up to the Cossey Dam and back. The ferns are particularly amazing. Apparently more than half of NZ fern species are present in the area.

HunuaFerns.jpg

One day I'd like to get into tramping and camping. That's the only way to explore the more remote parts of the Hunuas.