Wednesday, 21 December 2005

Port Waikato

One of the great things about living in New Zealand is having easy access to a variety of astonishing landscapes. I took a day off today and we decided to go on a bit of a road trip. On the spur of the moment I decided to go to Port Waikato, where the Waikato River (NZ's largest) meets the sea.

It's less than 1.5 hours' drive from Auckland city. Along the way you see plenty of amazing, yet typically NZ, green, rugged farmland mixed with bush. There's a point on the motorway as you come over the Bombay Hills where the plains and hills of the Waikato region open up before you, which is quite breathtaking. I was tempted to keep on going for a few hours, to cross the plains and ascend the far side of those hills to the central volcanic plateau, and see again the magnificent Lake Taupo and the great volcanoes, but that will have to wait. We turned west and followed the river out to the coast.

We missed the turnoff to the town of Port Waikato and drove on for a while mesmerized by the limestone country. The location used for "Weathertop" in The Fellowship Of The Ring is around here.


Along the way we encountered a flock of sheep being driven along the road. The farmer whistled his dog to move the sheep away from our car so we could drive on. It was brilliant.


We eventually got back to Port Waikato. The beach is much like the west coast beaches closer to Auckland --- ironsand, sea spray, chaotic surf and a distinctly wild feel. Behind, there's a huge expanse of picturesque sand dunes. We entered them from the landward side and headed towards the beach, but eventually had to turn back, not knowing how much further we had to go. In that forbidding place, and carrying two kids much of the time, I wasn't in the mood to push our luck.


On the way home we diverted to Somerville Shopping Court in Howick, a complex of a few dozen Asian shops and restaurants. We've been working our way through the restaurants gradually over the last year. This time we went to a Taiwanese cafe which served cheap, tasty and slightly odd food ... unfortunately I didn't catch the name! All in all, a wonderful day. We are very blessed.

Friday, 16 December 2005

Frame Display Lists And Next Steps

After extensive testing, I've submitted the frame display list patch for review. It's huge so that might take some time... The results are pretty good; it seems to be a little faster than today's code in Trender benchmarks, and the code is smaller, cleaner, much more correct and extensible, and better documented. It will also make it relatively easy to fix longstanding issues like caret drawing and visual merging of multiple outlines for the same element. It fixes two acid2 bugs (which I think leaves two, both of which should be fixed by the reflow branch).

The question is where to go from here. I know where I want to get to:

  1. No view system
  2. Only top-level windows, popups, and embedding widgets have native widgets.
  3. ... except for plugins, which have platform-specific solutions, but at least initially on each platform we'll hang each plugin off the top-level widget, wrapped in its own container specifically so that we can clip the plugin to an arbitrary rectangle.
  4. Optimize window updates (repaints, scrolls, plugin geometry changes) as follows:

    1. First, compute the area that will need to be repainted. This includes areas under plugins that will be moved/resized.
    2. Render that area into an offscreen buffer.
    3. Perform the scroll and/or plugin widget changes.
    4. Copy the repaint area from the offscreen buffer to the screen.

    This should keep flicker to a minimum. Control over this process should actually be moved down into toolkit-specific code because platforms need different approaches here. The above is merely the most generic approach, one that can be implemented with few changes to our cross-platform nsIWidget API, and is therefore a good first step.
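The four steps above can be sketched with a toy scanline model. This is purely illustrative --- the names and the one-dimensional "screen" are mine, not the nsIWidget API --- and a real implementation would operate on regions, native surfaces, and plugin geometry rather than arrays:

```javascript
// Toy model: the "screen" is an array of scanlines showing doc[top..top+h-1];
// scrolling down by dy exposes dy lines at the bottom that must be repainted.
// All names are illustrative -- this is not the actual nsIWidget API.
function optimizedScroll(screen, doc, top, dy) {
  const h = screen.length;

  // 1. Compute the area that will need to be repainted:
  //    the dy lines newly exposed at the bottom.
  const repaint = [];
  for (let y = h - dy; y < h; y++) repaint.push(y);

  // 2. Render that area into an offscreen buffer.
  const offscreen = new Map();
  for (const y of repaint) offscreen.set(y, doc[top + dy + y]);

  // 3. Perform the scroll: blit the surviving lines up by dy.
  for (let y = 0; y < h - dy; y++) screen[y] = screen[y + dy];

  // 4. Copy the repaint area from the offscreen buffer to the screen.
  for (const y of repaint) screen[y] = offscreen.get(y);

  return screen;
}
```

Because the freshly rendered content is only blitted in step 4, after the scroll in step 3, the exposed area never shows stale or blank pixels, which is where the flicker reduction comes from.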

Getting rid of child widgets in Gecko content will eliminate a bunch of nasty per-platform code and associated platform-specific bugs, plus improve performance. In the short term it will break accessibility, at least on Windows.

Removing widgets from subdocuments and scroll areas can't happen until the plugin work happens (otherwise plugins will stop being clipped when they're outside the scrolling area ... eww!). Removing views will be easier when widgets are removed (otherwise we'll have to port a bunch of widget management code to the frame system). The plugin work will create significant flicker until the optimized window update is implemented, but the optimized window update would be a lot easier after the widget removal. Also it would be good to do optimized window update after view removal, otherwise we'll have to implement it in the view system and then reimplement in the frame system soon afterward.

So it's a bit of a conundrum. Currently I think the best approach is to do the plugin work first, then widget removal, then view removal, then optimized window update. We can live with flickery plugin scrolling on the trunk for a while. The main argument against this approach is that we might get to the end and discover that optimized window update hasn't really fixed flickery plugins, but I think we'll just have to chance our arm. We really need to move to platform-specific solutions to get plugins rendering into offscreen buffers, anyway.

Night Of The Living Threads

Every so often someone encounters a temporary hang in the Firefox UI or an extension and declares that the solution is to use more threads. They are wrong.

The standard threads-and-locks shared-memory programming model that everyone uses today just sucks. The huge number of possible interleavings makes programs difficult for programmers to reason about, and therefore code is prone to deadlocks and race conditions ... catastrophic bugs that are often hard to reproduce, diagnose, and fix. Threads-and-locks forces you to violate abstraction and/or design very complicated specifications for how synchronization works across modules. It is very difficult to get good performance; locking schemes don't scale well, and locking has considerable overhead. Maurice Herlihy says it all better than I can.

Many efforts have been made over decades to design better programming models for threads, including one that I was recently part of at IBM. You can make life better by restricting the possible interleavings or providing transactional semantics. These are promising but are not yet available in forms we can use. Even when they become available, they still add complexity over a single-threaded programming model, and fail to solve some important problems. For example, safely killing a running, non-cooperating thread is terribly difficult to get right in every system I know of.

It's important to recognize that threads solve two different problems:

  • Allowing asynchronous execution so one long-running activity does not block another activity
  • Allowing concurrent execution of multiple activities, to take advantage of multiple CPU cores

The problems that people complain about today in Firefox are entirely of the first kind. These problems can be solved without using threads. Here are some specific examples:

  • Loading a big page hangs the browser while we lay out the page. The solution here is to make it possible to interrupt the reflow, process UI and other activity, and then resume layout. This should not be very hard because we support incremental reflow already.
  • The UI hangs while some extension does I/O. The solution here is for the extension to use asynchronous I/O.
  • The UI hangs while we instantiate a plugin. The solution here is to put plugins in their own processes and communicate with them over a channel which we don't block on.
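The first example --- interrupting and resuming a long computation on a single thread --- can be sketched with a generator-based slice scheme. Everything here is illustrative (these are my names, not Firefox or Gecko APIs), but it shows the shape of the solution: the long job yields periodically, and the event loop runs queued events between slices, so no second thread is needed.

```javascript
// A long-running job split into small resumable slices. Between slices
// the caller is free to process UI events. Illustrative names only --
// this is not a Firefox API.
function* sumSquares(items) {
  let total = 0;
  for (let i = 0; i < items.length; i++) {
    total += items[i] * items[i];
    // Yield control every 1000 items so the "UI" stays responsive.
    if (i % 1000 === 999) yield;
  }
  return total;
}

// A trivial cooperative scheduler: run one slice, let pending events
// run, then resume the job -- all on one thread.
function runInterruptibly(job, processEvents) {
  for (;;) {
    const step = job.next();
    if (step.done) return step.value;
    processEvents(); // stand-in for draining the UI event queue
  }
}
```

Incremental reflow would follow the same pattern at a much larger scale: checkpoint the layout state, return to the event loop, resume.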

The second problem, taking advantage of multiple CPU cores, is not so easy to get around, because threads are exactly what CPUs provide (today). Therefore we will end up using multiple threads inside Gecko --- carefully, only where it makes sense because we can get significant performance benefits without great complexity; e.g., page drawing should be fairly easy to parallelize. But I will fight tooth and nail to avoid exposing threads to Web/extension developers.

Thursday, 15 December 2005

The BBC Gets It Wrong

The BBC has made a terrible hash of their article on the latest IE update. They make it appear that Firefox was in just the same situation as IE, which is completely untrue --- even fully updated IE users have been vulnerable for many days while exploit code has been circulating, but fully updated Firefox users have had the fix for nearly six months, long before any exploit code circulated. Grrrrr!

Friday, 9 December 2005

Web Standards, Mozilla Extensions And Other Ranting

I belatedly read a few blog comments complaining that Firefox is corrupting the Web by implementing nonstandard tags like <canvas> and prematurely implementing CSS3 columns.

Bollocks. You can't make good standards without implementation experience and you don't know you have a good implementation until a lot of people have used it. We take what safeguards we can to avoid poisoning the Web with a nonconformant implementation. In particular with CSS3 columns, we currently only honour -moz-column-* properties. When the standard is settled and we implement it well enough, then we'll start supporting the standard column-* properties. In the meantime anyone using -moz-column-* knows they're off the standards map. This approach will help us get a solid columns spec --- and conformant implementations --- faster.
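Concretely, that safeguard looks like this in a stylesheet today (the unprefixed forms are what the eventual standard defines; current Gecko ignores them and honours only the prefixed ones):

```css
/* Gecko currently honours only the -moz- prefixed properties: */
div.article {
  -moz-column-count: 3;
  -moz-column-gap: 1em;
  /* The unprefixed standard properties, for when the spec settles
     and implementations catch up: */
  column-count: 3;
  column-gap: 1em;
}
```

An author who writes the prefixed form knows exactly which engine's experimental behaviour they're getting, and nothing breaks when the standard properties are eventually supported.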

With <canvas> the situation is a little different. It may not be blessed by the W3C (yet) but it has everything else going for it: a good spec developed in the open; interoperable, independent implementations from three vendors (two of which are open source); and it doesn't interfere with any existing standard. In short it's a fine quasi-standard.

A larger point is that work like CSS3 columns and <canvas> is required to move the Web forward. If the Web does not move forward --- if we spend all our energy refining existing specifications with fixed functionality, and developing perfect implementations of them --- then it will be replaced by something that is moving forward, which won't be based on standards, and then all our magnificent standards won't do us any good at all.

On another tack, Jim Ley writes:
I'm not convinced by canvas, IE has had 2D and 3D drawing API's and the ability to redirect HTML to an image since IE4, no-one used them, the use cases aren't really there, it's just gimicky effects, and single threaded javascript is way too slow, and bad an authoring environment for creating anything but toys.

That's an interesting suggestion. I could quibble with the facts (IE has a 3D drawing API?) but I'd rather counter with the observation that we've been able to do AJAX-style applications since before IE4, so why did it only explode over the last couple of years? I think for a long time we laboured under preconceived ideas about what Web apps could and should be. It took some adventurous coding from people like Google for everyone to raise their ambitions. I hope that with these new Web features people raise their ambitions some more.

Friday, 2 December 2005

Means And Ends

For me, Firefox is a means, not an end. I want the world's information and applications to run on a broad variety of platforms, beyond the control of a single organization. A competitive, standards-based, cross-platform Web is the best way towards that, and Firefox is doing a lot to help us get there, so every day I'm pleased to be doing my part. With its market share, its brand, its standards compliance, its expanding feature set for Web developers, and its open-source nature and non-profit masters (which guard against Gecko ever becoming the basis for some new abusive monopoly), Firefox seems to be the ideal vehicle right now. But nevertheless Firefox is only a means and other means could serve.

Therefore I am very glad that the other major standards-focused, Web-focused browsers --- Opera, Safari, and Konqueror --- are fine products doing the right things. In fact, if KHTML were to exceed Firefox in every way that matters and become the dominant browser, I would not shed one tear. (I speak hypothetically; sorry KHTML fans, but that won't happen anytime soon!) I would remain satisfied knowing that the growth of Firefox has played a big part in reversing the trend of the Web becoming IE-only, and that we have thereby held the door open for other browsers and platforms.

But the door could yet close again, so back to coding!

Tuesday, 15 November 2005


This morning started like any other morning. I got up and read the Bible, this time I'm in Luke 6 reading about what a great guy Jesus was, yadda yadda. Then things took a turn...
Looking at his disciples, he said:

"Blessed are you who are poor,

for yours is the kingdom of God.

Blessed are you who hunger now,

for you will be satisfied.

Blessed are you who weep now,

for you will laugh.

Blessed are you when men hate you,

when they exclude you and insult you

and reject your name as evil, because of the Son of Man.

Rejoice in that day and leap for joy, because great is your reward in heaven. For that is how their fathers treated the prophets.

But woe to you who are rich,

for you have already received your comfort.

Woe to you who are well fed now,

for you will go hungry.

Woe to you who laugh now,

for you will mourn and weep.

Woe to you when all men speak well of you,

for that is how their fathers treated the false prophets.

Errrr. To be honest I identify more with the latter half. Hmm, surely Jesus had some hidden subtext that lets me off the hook. Let's plough on.
But I tell you who hear me: Love your enemies, do good to those who hate you, bless those who curse you, pray for those who mistreat you. If someone strikes you on one cheek, turn to him the other also. If someone takes your cloak, do not stop him from taking your tunic. Give to everyone who asks you, and if anyone takes what belongs to you, do not demand it back. Do to others as you would have them do to you.
If you love those who love you, what credit is that to you? Even 'sinners' love those who love them. And if you do good to those who are good to you, what credit is that to you? Even 'sinners' do that. And if you lend to those from whom you expect repayment, what credit is that to you? Even 'sinners' lend to 'sinners,' expecting to be repaid in full. But love your enemies, do good to them, and lend to them without expecting to get anything back. Then your reward will be great, and you will be sons of the Most High, because he is kind to the ungrateful and wicked. Be merciful, just as your Father is merciful.

Oh COME ON, this is just unrealistic. You expect me to willingly take damage from bad guys, to get hurt? .... Wait, don't answer that. ... Maybe it's just a suggestion....
Why do you call me, 'Lord, Lord,' and do not do what I say? I will show you what he is like who comes to me and hears my words and puts them into practice. He is like a man building a house, who dug down deep and laid the foundation on rock. When a flood came, the torrent struck that house but could not shake it, because it was well built. But the one who hears my words and does not put them into practice is like a man who built a house on the ground without a foundation. The moment the torrent struck that house, it collapsed and its destruction was complete.

Ah ... er ... ow! ow ow ow! Gaaah! Help!

When I think of "the wise man built his house upon the rock" I usually think of the charming Sunday School ditty, but this morning it gives me a feeling of being mugged. Maybe that's how the disciples felt as they said "Lord, increase our faith!"

Monday, 7 November 2005

Gloomocracy IV

A friend pointed me at a particularly egregious example of gloom-centric reporting.

Paragraph two gives an initial summary of the study, saying it "showed more mistakes happened in the care of New Zealanders than were made in German or British health services". Paragraph three follows with more bad news. Some other bad-news anecdotes follow.

Finally in paragraph eight we hear more about the study. It turns out that the study covered six countries: the above three plus the USA, Canada and Australia. Indeed, the NZ error rate was lower than in the USA, Canada and Australia. Then we're also told that NZ, along with Australia, Germany and Britain, had easier access to doctors than in the USA and Canada.

This is a classic example of picking out the negative details and writing a story around them. Well done, Eleanor Wilson, our dour citizens silently acknowledge you.

Friday, 4 November 2005

Frame Display Lists

I'm reworking how frame painting works in Gecko.

Currently painting is a two-phase process. The view manager creates a display list of "views" (roughly corresponding to positioned elements and other elements that need complex rendering) that intersect the dirty region. This display list is sorted by z-order according to the CSS rules. Then we optimize the display list, removing views that are covered by other opaque views. Then we step through the display list from bottom to top and paint the frames associated with each view. Frame painting handles some more z-ordering issues using "paint layers". E.g. a background layer contains block backgrounds, a float layer contains floats, and a foreground layer contains text. We traverse each frame subtree three times, once for each layer.

This approach has some problems. We don't implement CSS z-order perfectly because of the way z-order handling is split between views and frames. It's unnecessarily complex. We paint some unnecessary areas because we only handle opaque covering at the view level; if a frame without a view is opaque and covers content underneath it, we don't take advantage of that. It's not as fast as it could be because of the three traversals of each relevant frame subtree. Painting outlines properly with this scheme would require an additional paint layer, requiring four traversals. We also need to fix a bug by making elements with negative z-index paint in front of the background of their parent stacking context ... this would require a fifth paint layer or some other hack. And it's not very extensible to handle new quirks the CSS WG might throw at us.

So I'm working on a patch to replace all this with a display list of frames --- actually, parts of frames, since different rendered parts of a frame can have different z-positions. I call these parts "display items". Because a frame subtree may distribute display items to different z-levels in the parent, we actually pass in a list of display lists, each corresponding roughly to a paint layer. Once we've built a complete display list we can optimize it and finally paint the visible items.

One nice thing about this approach is that we can use it for mouse event handling too. To figure out which frame the cursor is over we just build a display list for a single point and analyze it from the top down. We can get rid of all the GetFrameForPoint(Using) code and we get a guarantee that event handling stays consistent with painting.
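A toy model of the idea, in JavaScript --- these structures are illustrative, not the actual Gecko classes --- shows why painting and hit testing can't disagree: they walk the very same z-sorted list, just in opposite directions.

```javascript
// Toy display items: { frame, rect: {x, y, w, h}, z }.
// Illustrative only -- not the real Gecko data structures.
function contains(rect, pt) {
  return pt.x >= rect.x && pt.x < rect.x + rect.w &&
         pt.y >= rect.y && pt.y < rect.y + rect.h;
}

// Sort by z-order, lowest first (real CSS z-ordering is subtler).
function sortByZ(items) {
  return items.slice().sort((a, b) => a.z - b.z);
}

// Paint bottom-to-top; returns the frames in paint order.
function paint(items) {
  return sortByZ(items).map(item => item.frame);
}

// Hit test: build the same list for a single point and walk it
// top-down; the first item containing the point wins. Consistency
// with painting holds by construction.
function frameForPoint(items, pt) {
  const list = sortByZ(items);
  for (let i = list.length - 1; i >= 0; i--) {
    if (contains(list[i].rect, pt)) return list[i].frame;
  }
  return null;
}
```

The real code would also run the covered-by-opaque-items optimization over the list before painting, and build the hit-testing list for just the one point rather than the whole dirty region.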

This approach also lets us organize our painting --- er, display list construction --- more flexibly, because we don't have to specify what gets painted in the exact order it will be painted. For example currently we have a function PaintTextDecorationsAndChildren that paints overlines and underlines, then paints child inlines, then paints the strikeout, because that's what CSS requires. But it simplifies the code if the text-decoration code can be separated from the painting of children. I replace this with a function DisplayTextDecorations which creates overline, underline and strikeout display list items (as needed) and lets the caller put them in the right z-position, so this is separated from painting the children.

Of course this approach also lets us fix the bugs I mentioned above with outlines and negative z-index children. I also hope the performance will be better than what we have today, because we only need one traversal of the visible parts of the frame tree. We do a bit more work at each frame, but that should have decent memory locality.

One potential problem with this approach is the storage required by the display list. It's very short-lived --- it only exists while we do a single paint --- but potentially each frame could create a few display list items, and each item is about 24 bytes (more on 64-bit). Think 10,000 visible frames in a very complex page and we're talking about 500K of memory. That actually sounds OK to me --- except for Minimo, but Minimo devices probably won't be able to get so many frames on their small screens :-). Another way to think of it is that at worst, the display list might momentarily approximately double the space used for a document. In any case --- and here I invoke my last blog entry about optimization --- I think I can see how to compress the display list as we build it, to reduce the memory overhead substantially, though I won't be doing that until we really need to.

BTW over the years many people have asked for a Gecko API so that embedders can deconstruct exactly what's visible on the page in terms of content elements and the geometry and z-ordering of their boxes. This frame display list would be a good basis for that.

Oh, this also will make it easy and clean to implement SVG foreignObject the way I did for my XTech demo. I'm looking forward to having that in trunk cairo builds.

One complexity of CSS painting is that compositing ('opacity'), z-ordering and most clipping are done according to the content hierarchy, not the CSS box containment hierarchy. (E.g., an absolutely positioned box clips a fixed-position child, even though it's not the containing block for the fixed-position child.) This has been a source of many difficult bugs and some fragile, hairy code to figure out exactly what clip rect applies to a frame, when we were painting by traversing the view and frame hierarchies (and it's still wrong in tricky edge cases today). Now I'm able to turn this around and take a reasonably simple approach: paint (er, display list building) recursion always follows the content tree. This means we paint out-of-flow frames (absolute/fixed position elements and floats) when we encounter their placeholder frames. This helps eliminate a lot of existing code complexity.

Tuesday, 1 November 2005

"Premature Optimization Is The Root Of All Evil" Is The Root Of Some Evil

There's a folklore quote "premature optimization is the root of all evil", attributed to Tony Hoare and Donald Knuth. A variant is due to my PhD advisor's father Michael Jackson: "The First Rule of Program Optimization: Don't do it. The Second Rule of Program Optimization (for experts only!): Don't do it yet.". Unfortunately --- and I'm not the first to note this --- this advice, taken out of context and followed slavishly, often leads people into deep trouble. The problem is that it collides with another well-known feature of software development: it gets more expensive to make changes to the system later in its development, especially cross-cutting changes that impact module interfaces. If you leave optimization to late in development, then profile it and find that fixing the performance bottleneck requires a major change in your design, or forces major module interface changes, then you have a serious problem.

Clearly one should therefore design a system with an eye to where the bottlenecks may be, and try to ensure the design has enough flexibility to capture the optimizations that will be required. This is dangerous and impossible to get right all the time, but such is life in software development. It is not enough to just think about high-level performance, because sometimes low-level coding issues do have a significant impact and they may be constrained by the design. For example, in Gecko the designers created module interfaces that relied on using virtual method calls almost everywhere. Individual virtual calls are very cheap, but used very frequently --- and without the support of advanced JIT compilation techniques --- they add up to a significant performance drag. In some cases, where actual polymorphism is in use, devirtualizing the calls requires significant and expensive restructuring of the code.

I find it useful to do a sort of "gedankenprofiling". Guess or measure some important workloads, then sketch out mentally how each workload will be processed by the proposed designs, focusing on the apparent bottlenecks. Try to guess how much cost per unit work will be incurred at the bottleneck by alternative designs. Do not choose the design that minimizes the cost; instead, choose the design of minimal complexity that can smoothly extend to the cost-minimal design. Then when you start implementing and measuring and discovering your mistaken assumptions, you have the best chance of still getting to a good place relatively cheaply.

Right now I'm redesigning the way we paint frames in Gecko and thinking a lot about how it will perform in various scenarios, and trying to put in just the right amount of flexibility to handle potential future issues. Fortunately I don't have to guess so much since we already know a lot about where our performance problems are.

Friday, 7 October 2005


I've finally returned from my trip. One week in England, one in California, and a few days in Queenstown. Overall it was a lot of fun. I enjoyed the POPL PC meeting very much; it made me miss research! I had a little time to wander around Cambridge and Oxford ... history fascinates me and it's thrilling just to look at the hundreds-of-years-old buildings and think about the past. At Oxford I gave a talk about Firefox development. In California I spent time with friends, spent a lot of time at the new Mozilla Foundation headquarters, visited Google a couple of times, visited a friend at the Spore development lab, and generally had a great time --- thanks everyone! I gave a radio interview for "The California Report" on KQED, which was fun, although I suspect little of what I said survived editing. I visited Berkeley for a day to give the Firefox talk and meet with various students and faculty --- some old friends, some new.

Queenstown was completely different. It's up in the mountains of the South Island, next to Lake Wakatipu, surrounded by snowcapped mountains. It's very touristy but for all that, it's magnificent. It's early spring so there wasn't much snow; it was chilly and wet at times but overall lovely. We visited Lake Wanaka and Glenorchy, went up the gondola, and went jet-boating (recommended!). I managed to sneak away for a couple of decent walks, too. Three days wasn't long enough to get tired of mountain vistas over idyllic lakes and green countryside! Interestingly some of the best views were on our rainy day in Glenorchy (see below). I believe one of the best things about living in New Zealand is that you can easily take holidays in New Zealand :-).

Although it was all fun and a nice change, I'm glad to be at home now eating home-cooked meals and sleeping in my own bed. No more big trips for me this year, I hope. I need to knuckle down and resolve my remaining Firefox 1.5 bugs (including a few embarrassing regressions!) and some Novell Firefox issues, and move on with new development. I'm also looking forward to spending more time with local family and friends and our local church.

mountains above Glenorchy

Monday, 19 September 2005

Upcoming Travel

On Monday I'm leaving for a two-week work trip. In the first week I'll be in England, specifically Cambridge, for the program committee meeting for the ACM Principles of Programming Languages conference to be held in January 2006. I've spent the last few weeks mostly reviewing papers in preparation for this meeting, where we will select the papers to be presented at the conference. It's been a lot of fun catching up with what's going on in the world of PL research --- and a lot of work, especially because I did some extra background reading to catch up on what's happened in the last several months. I'm looking forward to the PC meeting since I've never done a face-to-face PC meeting before.

After Cambridge I'm heading to Oxford on Friday the 23rd to catch up with a friend and give a talk about Firefox development.

Then I'm off to Mountain View for some face time with other Mozilla developers at the Foundation headquarters. I'm also planning to visit the Google campus on Tuesday the 27th. On Wednesday I'll zoom up to Berkeley to give a talk there about Firefox development and catch up with some friends. I head off back to NZ on Friday night. Along the way I'm hoping to meet up with a bunch of my old friends living in the south Bay. I'm really glad to have chances to travel and sort-of-keep-in-touch with so many people. It's also fun to be on my own without the kids, up to a point; my three-week trip in May was too long, but hopefully two weeks will be about right.

Just a few hours after I get back to Auckland I'll be whisking off again, to Queenstown for a few days with my family. Great! I'll be back at work on Thursday October 6.

Thursday, 8 September 2005

Blast From The Past

A few years ago, while at IBM, I did some work on dynamic data race detection for Java, in conjunction with some colleagues at the Watson lab (hi Manu, Jong, Vivek, Keunwoo, Alexey!). It culminated in a "hybrid race detector" that combined two previously distinct methods --- lockset and happens-before --- in a nice way. The work was published in PPoPP 2003.

I'm pleased to see that some people at Microsoft Research have picked up the idea and extended it to analyse CLR programs. It's great that people are carrying on with this line of work --- it feels good to have your ideas built upon. (Although many of the extensions they describe in the paper were actually present in our detector --- I probably didn't get around to writing about them in our paper.)

Monday, 29 August 2005

SVG Interoperation

We're going to be shipping SVG support in Firefox 1.5. Opera is already shipping some SVG support in Opera 8. It would be very helpful to know how well these two SVG implementations interoperate. We already know they cover slightly different subsets of SVG 1.1 but for the set of SVG features they both claim to implement, it would be very useful to see if there are many bugs that occur in one browser but not in the other --- it would give us, at least, a chance to fix some of those bugs on our side so authors targeting multiple browsers get better results.

Any volunteers? There are a number of SVG tests out there on the Web. This work should be relatively easy to do. Even just doing a few tests would be helpful. The more that gets tested, the better.

Sunday, 21 August 2005

Speaking The Truth In Love (I Hope)

Today we walked up Mount Eden, one of Auckland's volcanic cones, close to our house. The crater forms a steep grassy bowl and is quite picturesque, and the view of Auckland from the rim is superb. It's a lovely spot for a walk.

Mount Eden crater

Unfortunately the terrain is fragile, being largely loose scoria and dirt, and is being eroded by the passage of large numbers of visitors --- both locals and busloads of foreign tourists. To protect the crater, the authorities prohibit people from descending into it. But despite many strongly worded signs, lots of people insist on going anyway, and that always makes me furious.

Perhaps today I was feeling particularly irascible; I actually went up to a few of these people and pointed out that they weren't supposed to have been down there. Some just looked sheepish and wandered off. Others responded that lots of other people had been doing it; that response really irks me but I think I managed to be quite calm while pointing out that the fact that other people do it doesn't make it right. One man replied that if the authorities wanted to protect the crater they should put a fence around the whole thing, and again I hope I was calm while suggesting that making it ugly for everyone probably wasn't a good solution.

As usual, afterwards I figured out more compelling ways to say everything. If I'm there again and someone with children in tow (most have them) tells me "everybody else does it", I'll ask whether they teach their children that seeing others do something forbidden makes it OK.

It's all rather disturbing, because I'm quite shy at root and I'd ordinarily never dream of breaching the wall of silence strangers carry in the city. Perhaps I'm turning into a crusty curmudgeon as I age --- or just a crank!

Monday, 1 August 2005

IE7 To Fix A Number Of Standards-Compliance Issues

I've just read Chris Wilson's post on the IE Blog where he lists a number of CSS bugs and missing features that will be fixed in IE7. It doesn't bring them anywhere close to Gecko (or Opera, WebCore, or KHTML) but nevertheless it's a good step that will help a lot of Web developers.

Frankly, I'm surprised, because I had a few reasons to believe they would do almost nothing. As I've blogged before, there are strategic reasons for them to hold the Web back as much as possible, to maximise the pain for Web developers as an encouragement for them to migrate to Avalon. The IE team had been totally unwilling to publicly commit to any concrete features or bug fixes. And when IE7 beta 1 had only a couple of bugs fixed, I thought my cynicism had been proved correct; I expected them to have done any engine work before beta 1, so that Web developers would have the maximum time to test and fix their sites. That's how we try to run Gecko/Firefox.

Anyway, I'm glad I was wrong! I applaud the team for doing the right thing to help web developers even though I suspect it will hurt Microsoft's big plans. I think it will even hurt IE market share: once IE7 has some penetration into the market, more web developers will feel justified in writing pages that don't cater to IE6. IE6 users on Win2K/win9x won't be able to run IE7; some of them will upgrade operating systems, but lots will find it easier to upgrade to Firefox.

Thursday, 21 July 2005

Gecko 1.9

Within a week or two we'll be branching off 1.8 and opening the trunk for Gecko 1.9 development. This will be very exciting because we have a lot of interesting changes planned, many of which are underway and will land soon after the branch is cut. Here's a rundown of some of the upcoming changes:

  • The new Thebes graphics code will be landing. Thebes is a C++ wrapper around cairo. Very soon we should be able to produce Thebes-based Firefox builds for Linux and Windows. We'll be doing intensive development and testing of these builds during the 1.9 cycle until we reach the point where we can make them the default builds. This will give us a number of cool features:

    • Much less graphics code for us to maintain --- most of the work will be in cairo, which is shared with many other projects
    • Various options for accelerated graphics: Glitz on Windows and Linux, Quartz on Mac, XRender on Linux (with an accelerated X server such as Xglx or Exa)
    • Better quality rendering: some antialiasing, bilinear image scaling
    • A powerful new graphics API so we can draw fancier borders, draw rotated HTML, etc
    • Fix various rendering bugs that are currently hard to fix

  • Blake Kaplan's caret patch fixes many issues with caret positioning and drawing by making caret drawing go through our standard paint path. In conjunction with the Thebes work, carets in rotated text boxes will be drawn correctly.
  • We have a units patch from sharparrow which will simplify our code and make Gecko work intelligently on high-density displays. On 200dpi screens we'll draw 2x2 screen pixels for every CSS pixel.
  • We also have an events refactoring patch from sharparrow that will simplify our code and fix a number of bugs. Some more events refactoring patches will follow to simplify the code even more.
  • I have a plan to eliminate our widget trees by having all Gecko content render into one top-level widget whose only children are plugins. This will simplify our code considerably, fix some bugs and should improve performance (especially with Glitz).
  • I also have a plan to eliminate the separate view trees we currently maintain and move the view manager's functionality into the presshell. This will also simplify our code a lot and will smooth the path to fixing various bugs (including at least one Acid2 bug).
  • Christian Biesinger has a patch to fix some plugin architecture problems by moving plugin loading to content. This also fixes an Acid2 bug.
  • I hope that David Baron's reflow refactoring branch can land as soon as possible too, which will fix many bugs and simplify our code some more.
  • There is a plan to simplify the SVG code and reduce the footprint of SVG elements by being more intelligent about how we handle DOM SVG values.

I'm excited that these changes will simplify Gecko considerably. It's definitely something that needs to be done. Moving from four trees (content, frames, views, widgets) to just two (content and frames) should be a huge help. Eliminating most of our gfx code will also be a big win.
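As a rough illustration of the units change mentioned above (my own sketch, not the actual patch), the idea is that layout keeps working in CSS pixels and only the final mapping to device pixels changes:

```python
def device_pixels_per_css_pixel(dpi):
    """CSS assumes 96 CSS pixels per inch. Rounding to the nearest whole
    ratio means each CSS pixel maps to an NxN block of screen pixels,
    e.g. 2x2 on a 200dpi display. (Illustrative only; not Gecko's
    actual algorithm.)"""
    return max(1, round(dpi / 96))

print(device_pixels_per_css_pixel(96))   # → 1 (normal display)
print(device_pixels_per_css_pixel(200))  # → 2 (high-density display)
```

So a 100-CSS-pixel-wide box on a 200dpi screen would be painted 200 device pixels wide, keeping its physical size roughly constant.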

Tuesday, 19 July 2005


So far during our time back in NZ we haven't had much chance to get out of the city --- which is a bit of a shame because there are so many amazing places to enjoy. We've been up north to Omaha and the Bay of Islands, and to Rangitoto, and I went down to Hamilton for a Linux User's Group talk, and that's it.

We had some free time on Sunday so we decided to go out to the west coast. Auckland is nestled between the Waitemata Harbour coming in from the east and the Manukau Harbour coming in from the west, but it's mainly built up along the east coast. It's always fun to take a drive out west, across the Waitakere hills (a very large and lovely regional park) to the west coast beaches. Due to their relative remoteness and the pounding of the Tasman Sea, these beaches have a wild character that is very refreshing. (The movie THE PIANO had a memorable scene of a settler's piano being dumped on a beach --- it was filmed at one of these beaches.) Although these beaches seem remote they're only about an hour's drive from the city (depending on how fast you can take the narrow winding roads through the hills!).

Whatipu itself is a huge sandy area just north of the entrance to the Manukau harbour --- windswept, barren and with very little development. The only access is an unsealed road around the southern edge of the Waitakeres. Although it's winter, it was a clear warm day and there were a couple of dozen cars parked at the end of the road by the time we got there about 3pm. We had a lovely walk along the black ironsand paths, through the wetlands to the beach. The tide was very high so there was no way to walk around to the largest part of the beach, but it was great just to stand there and feel the wind, the sun and the sea, watch the waves break on sandbars and admire the fortitude of the fishing-boat crews as they punched their way out to sea.

It's definitely one of my favourite places. There are a lot of bush walks on that side of the Waitakere hills and I hope we can go out there again to do some of them. But the trip also whetted my appetite to go in other directions. It's easy to live our daily lives here in the city and forget the wonders on our doorstep. In particular I hope we can soon get away for a weekend in the central North Island, which has always fascinated me with its lakes, mighty volcanoes and geothermal displays.

Whatipu Rock


Update! I'm told that the rock at the entrance to the Manukau harbour is called the Ninepin Rock. The object on it is a navigation light, number 4107. 4 white flashes every 30 seconds. And it used to be serviced by my grandfather when he worked for the Auckland Harbour Board!

Thursday, 7 July 2005

EU Patent Law Defeated

The European Parliament tossed out the proposed law that would have enshrined software patents. This is a huge victory that I did not expect.

Some argue that this is not actually a victory for those opposed to software patents:
Dr John Collins, a partner at patent attorneys Marks & Clerk, said the decision was not a victory for opponents of software patents.

"Today's outcome is a continuation of inconsistency and uncertainty with regard to software patenting across the EU," he said.

"Software will continue to be patented in Europe as it has been for the last 30 years," said Dr Collins.

He's right in some ways. The Parliament was likely to add new amendments restricting the scope of software patents --- amendments that could not have been deleted again by the EU Council --- and the law's corporate sponsors decided that they preferred the status quo to the amended law. But if you look at those sponsors who supported the law and effectively wrote the original text, and the anti-democratic manner in which amendments to restrict patent scope were removed by the Council, I think it's very clear that overall, no law is better than their law, and their defeat is an important victory.

I don't know who writes Bloomberg's copy but it's quite sinister:
The European Parliament rejected a law on patents for software, ending a three-year effort by companies including Nokia Oyj and Siemens AG to counter U.S. domination of Europe's $60 billion market.

This is someone's spin repeated as fact. Software patents would never have helped anyone counter US domination of the software industry. Software patents protect incumbents and therefore the status quo. A nascent competitor who tried to assert patent claims against an incumbent would be buried in countersuits, and the deepest pockets and the largest patent portfolios would win.

As I've written before, the only way for non-incumbents to use the patent system to their advantage is to abandon the software business and become a pure patent company whose business is suing others. This might bring revenue into the EU, but it wouldn't alter US domination of the software industry (except perhaps by driving the entire industry into the ground). And it doesn't even require software patents in the EU; a European company could just as easily obtain US patents and pursue lawsuits there.

Anyway, it'll be interesting to see what happens next. Unexpected turnarounds like this make me suspect God is interested in this stuff after all :-).

Friday, 1 July 2005

Eclipse CDT

Eclipse 3.1 is out. Eclipse is an IDE framework written in Java and includes a Java IDE that is incredibly powerful. It also has amazing CVS integration. And it's open source, cross platform, has industry support, huge plugin community etc etc. I used Eclipse a lot when I was working on Java code and it was fantastic then ... it's even better now with full Java 1.5 support and lots of new features. On my machine at work it just flies --- I suspect due to a combination of performance improvements in Eclipse, improvements in JVMs, and the power of this machine. If you're writing Java code, there is no reason not to use Eclipse or something of similar power.

Unfortunately I don't work with Java right now, so I took the C/C++ tools for a spin. These tools are called the CDT, now at version 3.0. The CDT feature set is pretty good for a C++ environment --- intelligent (parse-tree based) code completion, intelligent navigation and searching, debugger integration, on the fly builds, and basic rename refactoring. This is very impressive given the nightmarish world of C and C++ parsing and semantics. I'd love to be able to use the CDT to work on Mozilla ... I currently use emacs, which I detest.

The problem is that Mozilla is a huge and complicated codebase. There are more than 4,500 C++ files and 1,700 C files. The build process is complex, and generates a lot of code from IDL in our own special way. This creates two major problems for tools: scaling performance and making sense of the codebase. Last time I tried the CDT (2.1) it just collapsed. The good news is that CDT 3.0 is much better, perhaps partly due to improvements in Eclipse itself and my improved hardware. There's now a built-in tool that reads the 'make' output from a complete build and extracts all sorts of information useful to CDT --- include file paths, defined symbols, which source files are part of the build, and so on. Everything is also much more scalable now; I've actually been able to index most of Mozilla and get code completion and some other things working.

The bad news is that there are still showstopper problems. The make output scanner doesn't work reliably, so some of my source files don't get the correct information. When I worked around that by hand, I was able to index most of the Mozilla code but code completion popups and navigation operations took 5-10 seconds to happen, which is unusable. In general while playing with the tools I kept getting into strange states, including hangs, out-of-memory errors, and runaway background tasks.

It's a real shame because the tools look so good and they seem so close to working --- much closer than six months ago. If a few bugs and glaring performance problems get fixed, I'll be able to throw away emacs and step up into a much more productive environment. I'm really looking forward to that.

It's important to remember that Mozilla is an extreme test, a particularly large and complicated project. No IDE has ever been usable on Mozilla (beyond basic text editing); if CDT gets there, it will be the first. For smaller and simpler projects --- i.e. most C/C++ projects --- the CDT probably works just fine.

Tuesday, 28 June 2005

'Talisman' and Programming Language Semantics

On Friday night I got together with some of my old old buddies, guys I first met in high school, plus my brother, for a game of Talisman. Not a game of great intellectual challenge, but tons of fun. Having the gang back together for the first time in about 11 years made it so much better.

It's interesting to think about how one would implement Talisman in software. The issue is that Talisman revolves around cards --- "adventure cards", "spells", and "characters" --- many of which modify game rules in some way. (Many other games also have this form, of course.) For example, a card might say "adds 2 to your strength in combat with dragons", or "magic objects have no effect against the owner of this object", or "you may roll twice when moving and choose the larger value". Of course if you know the complete set of cards ahead of time, you can have every step of the game check for all cards which may be in effect. But this is not really a faithful encoding of the game. A faithful encoding would give the source code the same modular structure that the game has. In particular you would have a small core corresponding to the core rules, and one module for each card which fully describes the effects of that card.

The natural thing to do is then to identify a set of "extension points" where cards may modify game behaviour, each with a defined interface, and have each card "plug in" to one or more extension points. This works well for many kinds of applications, but unfortunately it doesn't work well here because again it is not very faithful to the structure of the original game. The real game does not define such extension points; instead almost everything is implicitly extensible. We take it for granted that the English text of the rules can be later modified by the text on cards and we don't have to say anything about the possibility beforehand.
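The extension-point approach can be sketched like this (a toy illustration with invented names, not a real Talisman implementation). Note that every hook must be declared in the core before any card can use it, which is exactly where the approach fails to be faithful to the game:

```python
# Core rules declare the extension points up front: cards may only
# override these predefined hooks.
class Card:
    def modify_strength(self, player, opponent, strength):
        return strength  # default: no change

    def movement_rolls(self, player):
        return 1  # default: roll once when moving

class DragonSlayerCharm(Card):
    """'Adds 2 to your strength in combat with dragons.'"""
    def modify_strength(self, player, opponent, strength):
        return strength + 2 if opponent == "dragon" else strength

class SevenLeagueBoots(Card):
    """'You may roll twice when moving and choose the larger value.'"""
    def movement_rolls(self, player):
        return 2

def combat_strength(player, opponent, base, cards):
    # The core consults each card at this fixed, anticipated point.
    strength = base
    for card in cards:
        strength = card.modify_strength(player, opponent, strength)
    return strength

cards = [DragonSlayerCharm(), SevenLeagueBoots()]
print(combat_strength("hero", "dragon", 5, cards))  # → 7
```

A card like "magic objects have no effect against the owner of this object" doesn't fit any of these hooks; supporting it means going back and editing the core, which the paper rules never require.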

Can we have a programming language that supports this sort of implicit extensibility? I think it would be very useful, not just for these sorts of games but also in other situations. One question is whether the problem is "AI-complete" --- that is, whether you need human-level language processing and general-purpose intelligence to resolve ambiguities and contradictions, the sort of intelligence that would probably let you pass a Turing test. I don't think you do, but the only way to be sure is to demonstrate a non-AI-complete solution. I think you could design a language and toolchain so that at least at program composition time, when all the extensions are known, all ambiguities and contradictions are automatically and precisely identified.

Another question is whether a language with implicit extensibility already exists. As far as I know the only languages that come close are those that expose the program's code to the program reflectively, allowing modification. I think some LISP interpreters allow that, and some languages with "eval" can be thought of as supporting it. But self-code-surgery seems like an incredibly crude and not very expressive way to address this problem.

I first considered this problem over ten years ago; it was even in my .project file back then. Now I wish I'd done my thesis on it. It's a bit off the wall but it would have been more fun and maybe had a lot more impact than the fairly pedestrian work I ended up doing.

Monday, 20 June 2005

I Have A Dream

We're seeing some amazing advances in neuroscience. Today people are taking the first steps towards mental input-output --- such as reading basic thoughts, mental control of joysticks, and stimulation of vision centers. Where could this lead?

Wouldn't it be amazing to have a surgically implanted computer capable of directly receiving thoughts and inducing sensory stimulation? Think of the applications!

  • Do away with the whole messy voice, keyboard, mouse etc
  • Work at full efficiency in any environment doing any activity
  • Differential GPS capability makes being lost an anachronism
  • Cellphones become telepathy
  • High-fidelity virtual experiences of all kinds (nudge nudge, wink wink)
  • Hooks into sensor and actuator networks make your devices direct extensions of yourself: your house, your car, your pet robotic dog, all at your direct mental command at any range, and all acting as additional senses

But it doesn't stop there. Choose to share your experiences with others in real time. Experience group consciousness. Weave together many experiences of the same event into an incredibly detailed whole.

There are some risks. For protection, all this has to be optional. But who could shut themselves off from this for long? It's all humanity's dreams rolled into one. (Well, most of them.)

Now imagine that stolen nukes go off in New York, London, Beijing and Bangalore. Tens of millions are dead, hundreds of millions more at risk, and the world teeters close to an all-out nuclear exchange. World leaders are told it is possible to create a virus that will cover the world in hours and provide temporary access to the minds of the implanted, maximising the chance of detecting terrorists --- even disabling any who are already implanted --- and pulling the world back from the brink. It's the only ray of hope, and the plan is executed successfully.

After the immediate threat has passed, citizens are given the option of withdrawing from the emergency cooperative. Such objectors are a potential security threat, and become the focus of the rest of the network. They are, in fact, disconnected from the system lest they interfere with it. It's not a popular option, taken up only by a few eccentrics --- almost no-one who has adjusted to the extraordinary lifestyle of the implanted can bring themselves to withdraw.

Over time, it is judged prudent for the security of the system --- and therefore humanity --- for all implanted minds to be nudged towards approval of the system, to avoid those rare cases where there are doubts. Likewise, they are encouraged to have their children implanted at a young age. Other mental traits that cause social friction are removed ... or if they cannot be removed, their exercise is immediately detected and countered within the mind of the perpetrator. The only remaining external threat comes from the dwindling communities of unimplanted, who are therefore relocated to zones where they cannot endanger themselves or others.

Now unrestricted thought belongs only to the men and women at the root of the system, who control its software and thereby shepherd humanity. What happens to them? Do they automate their oversight and relinquish their power, joining the blissful masses? Do they turn to some new and terrible direction? Or do they err, and all succumb to some terrible catastrophe?

I admit to being paranoid --- but to my knowledge, nothing here is technically improbable. We desperately need a new Orwell for the twenty-first century, someone to terrorise us with plausible visions so we know to flee from the hint of them.

Friday, 17 June 2005

Avalon Lockdown

"If someone wanted to build an implementation of the WS- protocols that could talk to Indigo, they can use the public specs to build their own implementation. If however, someone wanted to clone Avalon or Indigo from top to bottom (that is, from APIs down to protocols) they'll probably want to approach Microsoft about licensing," a Microsoft spokeswoman told The Register.

So if you're a developer planning to base products on Avalon or Indigo, be aware that you're locking yourself to run on Microsoft platforms only, forever.

display:inline and tables

We have run into a problem. Some important Web sites have HTML like this:

<div>Hello <table style="display:inline">...</table> Kitty</div>

The intent seems to be to create an inline table. Unfortunately the CSS spec currently says otherwise:

  • "display:inline" turns the table element into a regular inline element, not a table at all.
  • The presence of table rows and columns causes an anonymous table box to be created to wrap them. This happens to be in roughly the same place as the original table element.
  • The spec is quite clear that this anonymous table box has "display:table". (It says "a box corresponding to a 'table' element will be generated between P and T", and the default style sheet is clearly "table { display: table }".)
  • This is a block-level box inside an inline box, which causes complicated splitting to happen. But the important thing is that this table box will be on its own line, not inline.

Currently, in Gecko we have a bug which effectively causes the anonymous table to be treated as an inline-table, usually (but not always!). Because we don't really support inline-table, it probably doesn't work the way an inline-table should.

What should we do? There are a few possibilities:

  1. Fix our code so that the anonymous table is a block-level table, according to spec. Then get everyone who relies on our behaviour to fix their sites --- soon, before Firefox 1.1 is released. One big problem here is that because we don't support inline-table or inline-block, there may be no way to work around the removal of this feature/bug.
  2. Get the spec changed so that when the parent box is an inline box, the anonymous table is made an inline-table. Then change our code to support inline-table as best we can before release.
  3. Leave the spec as is, but implement inline-table so people can get the old behaviour.

I'm leaning towards the second option, but any feedback would be helpful.
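For concreteness, here is roughly what the second option would give for the example above (hypothetical: it assumes both the spec change and inline-table support, neither of which exists yet):

```html
<!-- Under the proposed change, the anonymous table box wrapping the rows
     would get "display: inline-table" because its parent box is inline,
     so the original markup would render as "Hello [table] Kitty" on one
     line. Equivalently, an author could then state the intent directly: -->
<div>Hello <table style="display: inline-table">...</table> Kitty</div>
```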

Wednesday, 15 June 2005

Around The World In 22 Days

Well, I'm back! It was quite a journey: Auckland - Singapore - Amsterdam - Frankfurt - Stuttgart - Frankfurt - Nürnberg - Frankfurt - London - New York - Boston - San Francisco - Auckland. More than 52 hours in the air, more than six hours on buses and trains (not counting intra-city travel), four hours of driving, two conferences, two company offices, one wedding, lots and lots of fun.

My favourite city was Nürnberg ... it has beautiful old city walls and other buildings. Amsterdam is nice too. Honestly, though, I wasn't tempted to live in any of the cities I visited in preference to Auckland. I suspect that if I get more time in London it will come to rival New York in my affections as a city to visit.

I was glad to enjoy reunions with some of my most important friends, especially as one of them got married. Unfortunately weddings always make me maudlin, especially when my wife isn't around to lean on for emotional support. I suspect it's because I'm incredibly self-centered and subconsciously begrudge the newlyweds the near-monopoly on attention they rightfully receive. Fortunately that's a minor blip and the wedding was undoubtedly the highlight of my trip.

I flew from Auckland to Europe via Asia for the first time. Now that I've flown both directions around the world, I think it's safe to say God has blessed me with unusual aptitude for air travel. With every journey, on arrival I stay up all day without napping and go to sleep in the evening, then I sleep well and wake up completely adjusted to local time. Concomitantly, I also have the ability to sit in an economy class seat uninterrupted for 14 hours without feeling uncomfortable or even needing to use the toilet. I'll sleep for four or five hours if I need to. I may be heading for deep vein thrombosis, but for now I feel extremely fortunate. Perhaps New Zealanders are evolving adaptations to long-distance air travel.

Thursday, 9 June 2005

Graphics Thoughts

At GUADEC last week I gave a demo of my now-sorta-famous rotating HTML implementation. Since it was a GNOME conference I thought I should get GTK+ themes working in cairo. Here are the results.

Google IFRAME rotated by minus 30 degrees

You can see that bilinear scaling makes the Google logo look good even when we've zoomed in, and in general things look really good now that the GTK+ theme is being rendered. Pay particular attention to the rotated, scaled GTK-themed scrollbar and buttons ... and keep in mind that they still work!

Now the bad news: this required a grotesque and very slow hack. We can't paint a GTK widget theme directly into a cairo context because GTK themes can only paint into GDK rendering contexts (fair enough). My approach is to paint the GTK theme into an offscreen pixmap and then copy that pixmap to the cairo context. That's kinda slow but the real problem is that many GTK widget themes are partially or completely transparent. For example the button theme in this example paints only the slightly darker shadow along the bottom edge of the button. So just rendering into a pixmap and copying that doesn't work because we don't know the alpha values of the pixels, and there is currently no way to reliably create X pixmaps that will capture alpha values. So we resort to our old trick of rendering into two pixmaps: one with a white background and one with a black background. Then with some algebra we can recover the alpha values of the pixels, create a new cairo image surface with the correct pixel colors and alphas, and draw that into the cairo rendering context. Naturally this is really slow, partly because we have to draw the theme of each widget twice, and partly because we have to then ship the pixel data for both pixmaps back from the X server to the client, do significant per-pixel computation, and then ship the results back to the X server. Ugh!
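The white/black algebra can be sketched per pixel like this (a toy illustration with 8-bit channels, not the actual Gecko code). Over white, cw = a*c + (1-a)*255; over black, cb = a*c; subtracting recovers the alpha directly:

```python
def recover_rgba(on_white, on_black):
    """Recover (r, g, b, a) for one pixel from its rendering over a white
    background and over a black background (all values 0-255).

    Over white:  cw = a*c + (1 - a)*255
    Over black:  cb = a*c
    Subtract:    cw - cb = (1 - a)*255,  so  a = 255 - (cw - cb)
    """
    alphas = [255 - (w - b) for w, b in zip(on_white, on_black)]
    a = sum(alphas) // 3  # the three channels should agree; average any noise
    if a == 0:
        return (0, 0, 0, 0)  # fully transparent; colour is undefined
    # Un-premultiply: c = cb * 255 / a
    r, g, b = (min(255, ch * 255 // a) for ch in on_black)
    return (r, g, b, a)

# Fully opaque red looks the same over either background:
print(recover_rgba((255, 0, 0), (255, 0, 0)))    # → (255, 0, 0, 255)
# ~50% translucent black: grey over white, black over black:
print(recover_rgba((128, 128, 128), (0, 0, 0)))  # → (0, 0, 0, 127)
```

Cheap arithmetic per pixel; the expense in practice is the double rendering and the round trips to and from the X server, as described above.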

I hope that things change in GTK or Xgl so that we can avoid all this, but I suspect that similar issues will bite us on other platforms. In the meantime, I'm going to look at caching the RGBA rendering of widgets so that when we render the same widget type in the same state with the same size as one we previously rendered, we can just pull the RGBA data out of the cache and blast it to the screen.

Monday, 6 June 2005

An Unexpected Journey

My visits to Stuttgart and Nuremberg went according to plan. I particularly enjoyed Nuremberg --- a wonderful city. But then things took a turn...

Saturday, June 4

I showed up at Nuremberg airport with plenty of time to spare. Everything was fine until the Lufthansa checkin agent said the dreaded words "do you have a visa?" I said I didn't need one ... She said (drum roll) "your passport is not machine readable, you do need one". Then I found out that you can't get a visa waiver without a machine readable passport and mine (issued in 2002 in Washington DC) wasn't. Until June 25 there is a one-time exemption, but it's only for people who haven't visited the United States in the last year, and of course I have, although using an H-1 visa. The rules in this case weren't clear so the checkin agent went to her supervisor, but the verdict was final and grim: no flight to Boston for me.

I briefly had a "what on earth do I do now" moment, followed by somewhat more time praying. Then I had to sit down and think for a while. Three options came to mind: get a visa, get a new passport, or fly straight back to New Zealand. So I called Janet to get the phone number for the New Zealand embassy in Berlin, and called their emergency line for advice. A visa would take ten days so that was out of the question. She quoted me three days for a new passport from London.

At this point I thought that flying directly home would save time, money and hassle since I have no critical need to be in Boston (although I'd be very disappointed to miss my friend's wedding). The Lufthansa ticketing agent had more bad news for me: a new ticket to get back to New Zealand would cost 1700 euros, with no refund on my round-the-world ticket. They wouldn't let me continue on my round-the-world ticket because it turns out Star Alliance has no connections across the Pacific that do not go through the United States (Los Angeles or San Francisco). We looked for routes through Canada, Mexico and even Argentina to no avail. Apparently you can't even change planes in the USA without passing through immigration ... that surprises me but I've never tried it, and they were adamant.

The remaining option was to get a new passport. I called back to the Berlin embassy hotline, who told me I'd better go to London in person and gave me the emergency number there. London told me I could get it done in one hour if I showed up in person, even on Sunday morning. So I decided to give it a go and asked Lufthansa to reroute me to London today and on to Boston tomorrow evening (Sunday). New snag: Star Alliance doesn't fly London to Boston. Fine, I asked them to send me to New York where I can catch the train to Boston. Another snag: the new itinerary appears to exceed the maximum mileage for a round-the-world ticket. But after much calculation, apparently it's just under the limit. Phew!

So the current plan is to fly to London today, arriving 3:55pm local time. I can check into an airport hotel and relax a bit, hopefully get online. I'll have to cancel planned meetings with friends in Boston on Saturday night and in New York on Sunday, which is sad, but we'll survive it. On Sunday morning I need to find the New Zealand embassy and get my new passport issued. Then I fly out of London at 6pm, arriving New York 8:35pm ... where I need to find somewhere to stay until I catch an early train from New York City to Boston on Monday morning. If all goes according to plan the net impact will be that I have a less fun weekend than planned, arrive a couple of hours late for work on Monday, and end up a few hundred euros out of pocket... but of course nothing is guaranteed :-).

I'm pleased to report that the New Zealand embassy staff and the Lufthansa agents have been extremely helpful and professional throughout. I'm less pleased with my travel agent, who should have at least mentioned the machine readability issue. In the end, however, the fault is mine.

Update! Sunday, June 5

I made it to London last night, stayed at an exorbitant hotel overnight and took the tube to Piccadilly Circus this morning to get my new passport. It only took twenty minutes ... now I'm online at a Starbucks near Charing Cross Station. I should have no trouble making my flight to New York and I may even try to get an earlier flight. London is amazing and I wish I had more time here ... maybe a few months to visit all the museums, monuments, and buildings. I can really feel the weight of its long history, the impact of having been the capital of a great empire.

Update! Monday, June 6

I stayed the night with my friend in Tribeca (thanks mate!) and I should be on my way and in Boston by around mid-day.

Update! Monday afternoon, June 6

I finally made it to Boston, checked into my hotel and arrived at the office. Woohoo!

Tuesday, 31 May 2005


David Reveman just demoed his latest Xgl work (an X server based on Glitz). It's incredibly impressive. He has virtual desktops as the faces of a cube, so you can switch desktops by rotating the cube. Then he played a movie in a translucent window, crossing a virtual desktop boundary so it wraps around the cube. Of course it keeps playing while he rotates the cube. All very smooth and pretty. Amazing.

It looks like on Linux the preferred hardware acceleration story for rich apps like Mozilla/Firefox will be to run the Xgl X server, and have Cairo talk to Xgl via Xrender. Xgl implements the Xrender calls using GL so we get the same sort of hardware acceleration we'd get if we used Glitz directly. With only Xgl talking to the GL drivers, we avoid the problem that the vendor GL drivers aren't very good at handling multiple processes banging on them simultaneously. The only downside is that Xrender doesn't support some of the operations we need accelerated --- e.g., gradients, non-affine transformations, and SVG-style filters. We'll just have to fix that :-).

The story on Windows remains a bit less clear...

Friday, 27 May 2005


I'm currently at the XTech conference in Amsterdam. It's been lots of fun to meet up with people again ... Mozilla people, Novell people, and other people who I've interacted with online but never met face to face.

Amsterdam is an interesting city, dominated (to my mind) by bicycles and canals. I like seeing all the people on bicycles although it is hard to get used to pedestrians being distinctly third-class. I really enjoy just walking around the city ... it's about a 35 minute walk from my hotel to the conference centre, but I'm not tempted to catch a tram or taxi. I had a really good time with Martijn on Tuesday just walking around the central city area. But I get no feeling of "I'd like to live here" ... I guess Auckland has me hooked.

The conference technical talks have been good, especially given there was no real peer review selection mechanism.

A person from the BBC Creative Archive gave a good talk about the potential for users to remix and build on existing digital content, and it's exciting to see the BBC making their content available for this --- although the restriction to UK users is a bummer.

The XAML/Avalon talk was almost exactly what I expected. Rob Relyea focused on technical advantages and steered away from any direct discussion of a confrontation with the Web ... though Microsoft's desire to supplant the Web remains completely clear. One interesting point was that the Avalon team prioritizes ease of tooling over ease of hand editing. Another was that most of the demos relied to some extent on 3D effects, something that Cairo doesn't directly address. We need to work on our story there.

I gave my talk earlier today and it seemed to go down rather well. Here it is (apologies if it didn't all get through, uploading files to Movable Type is a pain!). My main theme is that browsers are now or very soon shipping technology to do rich graphics on the Web --- SVG and <canvas> in particular --- and it's time for developers to start using them. These technologies don't force you to dive down into Flash; you can incrementally extend existing Web pages with existing scripts and CSS styling. This ties into a general theme that people have been pushing at this conference: that the Web can evolve and is evolving, and there is no need to tear it down and replace it with something else.

I think the highlight was my demo of SVG foreignObject in my Cairo build. I really have to thank God for this demo; I was up all night working on it and not until the last minute did I get it working reasonably. Basically I modified our implementation of foreignObject in my Cairo-based build (which I just recently got working with SVG and canvas) to "do the right thing" ... painting the foreign content is subjected to SVG transforms, and event coordinates are translated appropriately. Interestingly, the latter was a lot harder than the former to get right. Of course this build still has many bugs, and performance is pretty bad too. But it's a nice demo. It's much more impressive live, when you can interact with the rotated/scaled content, type into the text fields, scroll, click on links, and so on.

30-degree rotated HTML

Update! Apparently there actually was peer review in most cases. Sorry Edd!

Friday, 20 May 2005

Landed Canvas drawWindow API

I just checked the drawWindow API into the trunk, so nightly builds and Deer Park will have it. Please test it out. I'm particularly interested in the results on the less popular platforms --- Mac, Linux on weird architectures, etc. I'm also interested in pages that render incorrectly via drawWindow. I know plugins won't work (although WMODE=transparent Flash should) --- we can't fix that. File bugs on problems you find and CC me.

Thursday, 19 May 2005

Socialized Medicine

Every so often I hear Americans who believe that their health-care system is the best on the grounds that certain treatments available in the USA are not available, or less common, in other countries. I also hear some New Zealanders saying the same thing. It enrages me. For example, an American friend once said, without arrogance, "my relative lived in New Zealand and got cancer and of course had to come back to the USA for treatment, because, you know, socialized medicine". It wasn't a put-down, it was just a simple statement of what was obvious to her.

In fact, I believe that the US health system is very poorly designed and implemented. Whatever advantages it has are mostly due to the fact that more money goes into it than in any other country (in most cases, very much more). And who accounts for the time and frustration invested by the health consumer? Here's what's been happening to us:

We've been getting bills from our paediatrician (actually their outsourced billing service) for services that should have been covered by our insurance company (Aetna) --- regular checkups and immunisations. Aetna's online accounts show that the billing people submitted charges twice for the same services; they show that Aetna paid once and refused to pay the other. So we called our paediatrician's clinic a few times --- took a while to get someone, timezone issues, and no-one wants to call us back (maybe they don't know we have phones here). They told us we have to talk to billing. Call billing a few times, get a machine, leave messages, not returned. Forget about it for a while. More bills. Tempted to ignore them, but don't want to provoke international diplomatic crisis. Call billing again, get a human being, who says that Aetna never paid them, Aetna had sent them a form saying they didn't know who the charges are for (despite the fact that they paid bills for the same kids, same place, many times before), so billing just resubmitted the charges again hoping they'd go through. Billing promises to sort it out with Aetna, now that we've provided "more information" (that they already knew).


  • Someone is lying about whether Aetna paid out.
  • Billing is incompetent. (Why keep sending us a bill while you know there's a problem between billing and Aetna? Why do I have to call before you try to resolve it?)
  • Everyone has lousy customer service.
  • Whoever idolizes a system which requires us to wrangle three parties who blame each other while we try to avoid the debt collectors should be thrown in a river.
  • The most compelling feature of this system is that the insurance companies and billing agencies can reap free money when the customer pays out due to ignorance or exhaustion.

This is not an isolated case. This kind of thing only happened to us once in the ~4 years that we consumed US health services (OK, there were a couple of other billing mistakes that created surprise charges for us, but they were easily sorted out), but it has happened to several people I know, some of them more than once.

Here's how the system here has worked for the regular checkups and immunisations we've had so far. We go to the nearest clinic (either the Plunket nurse's clinic or one with a doctor). The first time at the doctor's clinic, we filled out a form identifying it as our local clinic. The staff take our names and addresses. We pay a copay ($10 or so). That's it. No bills, no insurance companies, no billing agency, no hassle.

So maybe the USA gets great outcomes from vast amounts of money. Good for them. But if you want to emulate them, I suggest emulating their expenditures before emulating their structures.

BTW, you can get private health insurance here if you want it.


I'm going to be out of town for three weeks, until June 12. It's a big, long trip but if you're going to go to the other side of the world, you might as well fit as much in as you can.

On Monday I'm heading off to the XTech conference in Amsterdam (May 25-27). A lot of fun people will be there --- Mozilla people, Opera people, Web standards people, even Microsoft people. It's a chance to reconnect with people I work with on-line every day, and I will also get to meet some people I've never met in the flesh before. On Thursday I'll be presenting a talk about rich graphics on the Web and Mozilla's implementation.

After that I'm going to GUADEC (May 29-31), the big GNOME conference. The schedule looks really interesting and I'm looking forward to getting to meet more GNOME people. Some of my Ximian team will be there too.

Next I'm going to Nuremberg along with other Ximian folks to meet our SUSE colleagues for a few days. This is another chance for me to meet a bunch of people I work with but have never met face to face ... including my boss. That should be interesting.

On June 4 I fly on to Boston. I'll be at our Boston office for the following week, doing some work and getting to know my Ximian buddies better. That will be really valuable. At the end of the week a close friend is getting married near Boston, and I'm incredibly grateful to be able to be there. I fly out on June 12 and should get home on the 14th having circumnavigated the globe!

I'll be offline most of the time I'm on the road, since I'm too lazy to wrangle mobile networking and I try to spend all my time meeting people and having fun. However, I should be connected during the days I'm safely at Novell's bosom in Nuremberg and Boston.

Saturday, 14 May 2005

Rendering Web Page To Images

For a long time now, people have been asking for ways to use Gecko to render a Web page to an image. Creating thumbnails of a Web page is one common desire, but there are lots of potential uses, especially if the feature is available to scripts. I have implemented a new DOM API in 1.8/FF 1.1 that makes this possible. It builds on the canvas element that has recently been implemented in Gecko 1.8 and will be enabled by default soon. (My patch hasn't been checked in yet either, so you can't try this at home just yet.)

To demo this API I've implemented a very simple extension that displays a "thumbnail view" of the currently loaded page in your sidebar. Here's a screenshot. Below is the core source code for the extension. The extension itself is no big deal, and I'm hoping the wonderfully imaginative extension developer community will take this and run with it.



<?xml version="1.0"?>

<?xml-stylesheet href="chrome://global/skin" type="text/css"?>

<!-- Root element reconstructed: the script below expects an element
     with id="win"; the original root element was lost in transit. -->
<page id="win" xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"
      xmlns:html="http://www.w3.org/1999/xhtml">
  <script type="application/x-javascript" src="chrome://thumbview/content/thumbview.js"/>
  <vbox flex="1" id="before"/>
  <html:canvas id="canvas"/>
  <vbox flex="1" id="after"/>
</page>


function update() {
  var w = content.innerWidth + content.scrollMaxX;
  var h = content.innerHeight + content.scrollMaxY;
  if (w > 10000) w = 10000;
  if (h > 10000) h = 10000;

  var container = document.getElementById("win");
  var canvasW = container.boxObject.width;
  var scale = canvasW/w;
  var canvasH = Math.round(h*scale);

  var canvas = document.getElementById("canvas");
  canvas.style.width = canvasW + "px";
  canvas.style.height = canvasH + "px";
  canvas.width = canvasW;
  canvas.height = canvasH;

  var ctx = canvas.getContext("2d");
  ctx.clearRect(0, 0, canvasW, canvasH);
  ctx.save();
  ctx.scale(canvasW/w, canvasH/h);
  ctx.drawWindow(content, 0, 0, w, h, "rgb(0,0,0)");
  ctx.restore();
}

var NavLoadObserver = {
  observe: function(aWindow) {
    update();
  }
};

function start() {
  var obs = Components.classes["@mozilla.org/observer-service;1"].
    getService(Components.interfaces.nsIObserverService);
  obs.addObserver(NavLoadObserver, "EndDocumentLoad", false);
}

window.addEventListener("load", start, false);

Currently the drawWindow function can only be used by "chrome privileged" content, because untrusted Web content could abuse it in various ways. So extension authors and XUL application developers can use it, but normal Web pages cannot.

Update! I overhauled this entry significantly since we may not be adding a method to 'window' after all. The drawWindow method will be there though.

Tuesday, 10 May 2005

Light-Weight Instrumentation from Relational Queries Over Program Traces

Good news! OOPSLA 2005 has accepted our paper on Light-Weight Instrumentation From Relational Queries Over Program Traces by Simon Goldsmith, Alex Aiken and me. This is work I did with Simon in the (northern) summer of 2003 while he was an intern at IBM.

I'm glad to see this work get published; I think it's an exciting new area of research. The idea is to express dynamic program analyses (e.g., "how many times does function A get called during an invocation of function B") as SQL-like queries over a database containing a full trace of the program execution --- but instead of actually building such a database, our compiler translates the relational query into instrumentation code that gets injected into the program to evaluate the query on the fly as the program runs.
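For a flavour of the idea, the example analysis above might look something like this (a hand-waved sketch in generic SQL; the relation and column names are my inventions, not the paper's actual query language):

```sql
-- Count calls to A that are nested inside an invocation of B,
-- using a hypothetical trace relation "calls" with per-invocation
-- entry/exit timestamps.
SELECT COUNT(*)
FROM calls a, calls b
WHERE a.callee = 'A'
  AND b.callee = 'B'
  AND b.entry < a.entry
  AND a.exit  < b.exit;
```

Instead of materialising the calls relation, the compiler turns the query's conditions into entry/exit instrumentation that maintains the count as the program runs.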

Monday, 9 May 2005

The Language Of Doom

I always wonder why most people barely care about "nuclear proliferation". Is it because the name is so mundane? What would be a suitable chilling phrase to refer to the widespread acquisition of nuclear weapons by the petty, the bitter, the paranoid and the xenophobic?

Perhaps if the possibility kept more people awake at night, our leaders would give it more serious attention. Honestly, one of the main reasons I was (and am) against the war in Iraq is that it's distracted the world, and especially the USA, from the far more serious problem of North Korea. But I can hardly blame politicians for that when most of their voters could hardly care less. It seems inevitable that nothing much will be done until we lose a major city to terrorists or an accident. I expect it will be New York or Washington. Unfortunately by then it will be very hard to turn things around.

Friday, 6 May 2005

Cairo Progress

In between checking in layout patches and fixing regressions, I've been pressing on with cairo integration. I've done a couple of major things since the last update:

  • Glitz integration. I have Mozilla builds running with Glitz. Unfortunately there are some serious bugs that would make it embarrassing to post a screenshot right now.
  • Drawing text using cairo. Up till now we've been using Xft to draw text. I updated to the latest cairo version and modified our Xft-Pango code to do Xft-cairo-Pango. This means we use Xft and Pango to do glyph selection and text measurement, and use cairo just to render the actual glyphs. This seems to work quite well. Eventually we will use cairo to measure text instead of Xft. On Windows and Mac we can't use Pango (since we want everything we distribute to be MPL-compatible), so we'll have to reuse more of our existing font code.
  • I also fixed various bugs in our cairo glue ... translucent images need premultiplied alpha, some clipping wasn't implemented, etc.

Next I need to fix some of the more glaring rendering bugs and then I'll start experimenting with SVG/HTML integration (but I'll have to get SVG working; it's currently broken in this build).

One issue which I'm putting off is the fact that native GTK2 themes don't work in cairo --- GTK themes want to draw into a GTK rendering context (naturally enough), and we don't have one. Among other things, this means that Firefox menus are transparent, because their background isn't drawn. I think we can rig something up by having nsNativeThemeGTK render to an offscreen pixmap if a GTK rendering context isn't available, then copying that pixmap to the Cairo context. (Ultimately GTK2 itself will render using cairo and hopefully then this issue will have a cleaner solution.) In any case this is not a priority right now.


Saturday, 30 April 2005


My manager in Utah recently asked me here in Auckland and another person in India for help with a browser problem. I figured out the cause with the help of a person in Austria. An engineer in Germany will apply the fix, but I also alerted some people in Mountain View to the problem.

Wednesday, 27 April 2005

Cairo Status

All my supporting patches have been checked into the trunk. It should be possible to build with cairo on Linux by running configure --enable-default-toolkit=cairo-gtk2 and get results similar to what I posted last week (better, actually, since I've fixed some image rendering bugs).

Tuesday, 26 April 2005

Star Wars

Viewing the episode 3 trailer, I'm struck by the way Palpatine refers to the "Dark Side". Why don't the Sith put a more positive spin on their agenda? Surely PR is a Dark Side power. Here's a tip to get you started, guys: "Jedi for a Free Choice".

Saturday, 23 April 2005

Glimpse Of The Future

One of the big initiatives in 1.9 will be an overhaul of our graphics infrastructure. We're planning to rip out a lot of our existing graphics code and base everything on cairo. This will give us modern 2D graphics capabilities (such as filling, stroking and clipping to paths, general affine transforms, and ubiquitous support for alpha transparency) and also, via Glitz, acceleration using 3D graphics hardware. It will also mean we can use a single rendering pipeline for HTML/CSS, canvas and SVG, so that SVG effects can be applied to HTML content.

Building on work by Vladimir Vukićević and Stuart Parmenter, I've managed to get basic functionality working on cairo, to the point where the browser is semi-usable:


Obviously there are still some glitches, and right now the speed is best described as somewhere between "glacial" and "proton decay", but at least things are working well enough that we can start identifying particular bugs and fixing them.


There's been lots of speculation about which browser will get Acid2 working first. I'd put my money on Safari. The problem is that we're late in the Gecko 1.8/Firefox 1.1 release cycle, and there are a couple of bugs that would take quite a lot of work to fix and would introduce significant risk, and they're just not as important as other work that we have long planned for 1.8 and some other strategic work that I'll blog about soon. We will get to it in 1.9.

I'm sure some will seize on this as an opportunity to say "Gecko developers don't care about standards" ... they're simply wrong, as anyone can tell by looking at the huge number of standards compliance bugs we fix in every release. And keep in mind that if everyone's #1 priority was always standards compliance, Firefox would never have happened.

Monday, 18 April 2005


Last weekend our family went to Rangitoto Island on Saturday morning. It's a wonderful trip; the ferry ride to and from the island is great, the climb to the top is easy, the summit views magnificent, and the 600-year-old volcanic island itself is a unique and fascinating environment. Being mostly black lava, it does get hot on a sunny afternoon so I recommend doing as we did, taking the 9:15am ferry from downtown and the 12:45pm return ferry.

The summit outlook over Auckland and the islands of the Hauraki Gulf is incredible, and my photos can't do it justice. I offer you one photo from the track, looking back to Auckland City.


Saturday, 2 April 2005

April Curmudgeon

I loathe April Fool's Day. For an information junkie like me, it's the day when the world conspires to put sugar in your gas tank.

Tuesday, 29 March 2005

Rediscovering Auckland

Over the last month or so our lives have started to settle down and we've had time --- and great weather --- to get out, visit friends and relatives, and see Auckland again after ten years away.

Strangely, I feel a much stronger desire to really know this city than I did before. For example, there are dozens of volcanic cones in the city, most of which have been turned into parks, but I've only been to a handful of them. So yesterday we went to Mount St John for the first time.

Crater of Mount St John

It's quite thrilling to walk through this upper-class suburb in the middle of everything, walk up a strip of reserve between houses and pop out above an astonishing natural amphitheatre with amazing views over the city. I wonder how I managed to pass close by hundreds of times --- my old high school is perhaps a mile away --- being only barely aware it was there. Perhaps it's because Mount St John has larger and more famous neighbours --- Mount Hobson, Mount Eden and One Tree Hill.

Here's another shot of the view from the top. Auckland aficionados may wish to identify the four volcanic cones visible in this picture.

View from Mt St John

This Easter long weekend we also went to the Royal Easter Show, walked around One Tree Hill and Albert Park, and had lunch with friends at Grand Harbour and T-Mark. T-Mark is a rather obscure Taiwanese cafe in Newmarket --- very good. I'm exhausted and looking forward to a relaxing day at work tomorrow!

Monday, 28 March 2005

Gecko 1.8 For Web Developers: Collapsing Margins And The 'Clear' Property

Every Gecko release we spend lots of time and energy fixing bugs to make us more standards compliant and, where necessary, more Web-compliant. One area where I did some significant work last year was fixing how we handle content that combines floats, collapsing margins, and the 'clear' property.

This was particularly important because apparently IE has a bug where a block that contains a float automatically has its height extended to include the bottom of the float. In fact, the float children of a block should not normally have any direct effect on the height of the block. But what if you do want a block to extend at least as far as the bottom of its floats, either to match this IE bug, or for your own reasons? CSS 2.1 provides a way. You simply write something like this:

<div style="float:left">...</div>
<div style="clear:both"></div>

The element with 'clear' is forced below the float, and because that element is in the normal flow, it forces its container to be at least that high. The only problem is that this usually doesn't work in Gecko 1.7/FF 1.0. Basically we were treating the space induced by 'clear' as margin space and then collapsing it away. Fixing this required a major overhaul of how clearance and margin collapsing were handled, so that right now I believe we're more standards-compliant in this area than any other browser.

I could go into lots of gory details --- this is an egregiously complex area of the CSS spec --- but I won't. Suffice it to say that the above trick for vertically sizing blocks does work in Gecko 1.8.
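Filled out into a self-contained fragment (the border, sizes and text here are my own illustration), the trick looks like this:

```html
<!-- Without the clearing div, the bordered container's height would
     ignore the float; with it, the container encloses the float. -->
<div style="border: 1px solid black">
  <div style="float: left; width: 100px; height: 100px">a float</div>
  Some normal-flow text beside the float.
  <div style="clear: both"></div>
</div>
```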

Gecko 1.8 For Web Developers: Columns

One of the big things I've been working on for Gecko 1.8 is support for multicolumn layout. Our implementation follows the CSS3 draft pretty closely. Right now we support the properties -moz-column-count, -moz-column-width, -moz-column-gap and (soon) -moz-column-rule.

Basically this lets you set up multiple columns like a newspaper, so that content flows from the end of one column to the top of the next column automatically. The big win is that you can use the full width of a large screen without making lines excessively long. This very blog is using columns on most entries --- check it out with a Firefox trunk build.

The draft specifies 'balanced' columns. This means that we automatically find a minimal height for a set of columns that balances the content as evenly as possible across the columns while fitting into the available width of the page. This is very powerful and you have to see it in action to appreciate what it does for you. However, balanced columns are inherently a little slow. Also, sometimes you want the columns to be a fixed height but extend horizontally as far as necessary to fit the content. We support that by extending the draft slightly; if the 'height' property on the column set is 'auto', we balance the columns and horizontal overflow doesn't happen, otherwise we do not balance the columns and horizontal overflow can happen.
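As a quick sketch, here are both modes using the properties named above (the class names and measurements are mine, for illustration only):

```css
/* Balanced: 'height' is left at its initial value 'auto', so the
   content is balanced across three columns and there is no
   horizontal overflow. */
.balanced {
  -moz-column-count: 3;
  -moz-column-gap: 1em;
}

/* Unbalanced: a fixed height disables balancing; extra content
   flows into additional columns, which can overflow horizontally. */
.newspaper {
  -moz-column-width: 20em;
  -moz-column-gap: 1em;
  height: 30em;
}
```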

Our implementation has a few limitations that cannot be fixed for 1.8:

  • Tables don't break across column boundaries.
  • Absolutely positioned elements don't break across column boundaries, even if the absolutes' container does.
  • We don't always find the minimum height for a balanced column set; for example, when the column set contains blocks with margins, borders or padding, things can go wrong.
  • We don't support the CSS properties for page break control, so you can get widows and orphans (a paragraph breaking just after its first line or just before its last line). And you can't prevent breaks from happening without using hacks.
  • There is no way to make content in one column span multiple columns. In the future we may allow content in a column to flow around overflowing floats in previous columns, but we're not going to get that in this release.

Of course there are still some bugs but it's already in much better shape than it was a few weeks ago. In particular some huge issues involving floats breaking across columns have been fixed. Since we use the same code for page layout as for columns, this also means a lot of problems with printing (and print previewing) pages with floats have been fixed.

I'm really interested in having Web developers play with this feature, in particular to shake out any critical bugs. Some tips for using columns:

  • Watch out for content that horizontally overflows the column it's in. The next column will draw right over it.
  • Thus, it might be a good idea to put a DIV inside a columns element and give it a background color, and put the rest of your content inside that.
  • Column balancing can be slow. Unbalanced columns give faster layout.
  • People have always thought of elements as being rectangular, so we have JavaScript properties like clientX, clientY, clientWidth, clientHeight, etc. That was never true for inlines, and now it's not even true for blocks. Those properties will just give you the first of the many rectangles that might make up the block. Beware.

Please file Bugzilla bugs when you find we're doing something unexpected or not per spec. If you have other feedback you can email me personally or just blog about it and let me know.

Friday, 25 March 2005

The Great U-Turn

The Herald published Newsweek's annual story about Easter. As usual it tries to please every side ... strange considering what happened on Easter is really the most divisive issue in the universe. But it does raise one of the great questions: how did the disciples recover from Jesus' death --- the apparent crushing of all their beliefs and hopes --- to become potent advocates of his resurrection and take the world by storm? The only credible answer, in my view, is that it really happened.

Thursday, 17 March 2005

Gloomocracy III

The Herald published my letter to the editor, which I wrote before the weekend's articles spotlighted in Gloomocracy II. Here's the edited text.

Having just returned from 10 years in Pittsburgh and New York, I find that the Kiwi love of whinging has made people lose touch with reality. One correspondent complains of "constant rain, wind and cold" when in the past two months all but a few days have been gloriously sunny and warm. Other correspondents wail about soaring house prices, perhaps without realising that the large volume of returning expats is a major factor driving those prices.

From my contacts with returnees and many friends still overseas who hope to return, I'm confident that the influx will continue. Meanwhile, for all those who whine that life is much better in Australia, let them move there. Everyone will be much happier.

It reads a bit harsher than I'd intended. Oh well. Other letters expressed similar sentiments. There were also many letters like these:

If New Zealand wants to retain its best and brightest and return to the top of the OECD I suggest that we abandon our draconian student loans system and spend more money on basic infrastructure. If the costs of education (including post-secondary), healthcare and public transport were all decreased one suspects that the "brain-drain" would not be as severe.

Why would a self-respecting productive person regress to a welfare state that squeezes the fruits of his labour to redistribute to the herd? Wasn't that the reason most expats left in the first place?

I think this illustrates how people use NZ-bashing to push their (often contradictory) agendas as solutions without which NEW ZEALAND IS DOOMED.

On a slight tangent, for some reason the Herald chose to highlight this astonishing letter:

We have the highest interest rates in the developed world and this has given us one of the most overvalued currencies in the Western world.

On the other hand our standard of living has fallen from second in the world to the bottom quarter of the OECD countries.

When will our politicians and economists begin to understand cause and effect?

An interesting question from someone who apparently has no grasp of it whatsoever (assuming the Herald didn't edit out a mass of justifying exposition).

The wool boom of the 1950s briefly lifted NZ to second place in some international economic table. Ever since, the Gloomocrats have used it as an excuse to set expectations sky-high --- a benchmark against which all subsequent economic achievements are deemed failures.