Tuesday, 31 August 2010

A Night Out

I had a very nice dinner at "Wildfire Churrascaria" courtesy of Microsoft. Lots of yummy, fatty, salty meat.

I went to catch my bus and missed it by one minute; it was 10:41pm, and the next bus wasn't until 11:10pm.

Being impatient, instead of waiting for the next bus I ran home ... well, half-ran, half-walked ... 5.5km in 42 minutes. Not fast, but then I had a backpack full of Microsoft swag plus my laptop, and a belly full of meat. Faster than the bus anyway.



TechEd

I'm at Microsoft TechEd in Auckland today and tomorrow --- they reached out to invite a few open source people, so I thought I'd go and check out their messages. It's fun too. One of the highlights of today was a talk about geolocation APIs ... the presenter showed the creation of a toy HTML geolocation app, but couldn't get it working in Chrome and had to switch to Firefox :-). And apparently IE9 isn't going to support the geo APIs at all as far as anyone knows. Funny sort of demo for a Microsoft conference!

In fact so far I haven't seen anything about IE9 at all. There's a lot more about Silverlight and Windows Phone 7. I wonder if that's accidental (this is just the first day after all) ... or not.



Sunday, 29 August 2010

More Dell Fail (Or Maybe NVidia)

I bought a Dell ST2410 monitor for my new home computer (a Dell XPS 8100, shipped with an NVidia GTX260 graphics card, which has two DVI ports). Dell shipped a VGA cable and a DVI-VGA converter, so I thought I'd go out and buy a digital connector. Dell also shipped me a DVI-HDMI converter dongle (made by NVidia, apparently); the Dell monitor has HDMI and my TV has HDMI, so I thought I'd get an HDMI cable and this would be easy. Wrong. The monitor completely fails to detect any signal from the computer. I tried everything I could think of. Even my old Macbook Pro can drive the monitor through its DVI port, the DVI-HDMI dongle and the HDMI cable. So apart from the cable, which obviously works, we have here three parts all shipped by Dell that don't work together. Sigh. I guess I'll try a DVI cable next...



Sunshine Rises Again

As previously reported, the wonderful "Sunshine" Chinese restaurant in Market Place near the Viaduct Harbour suffered a tragic demise. And also as previously reported, it has been reincarnated. Today our family visited the new incarnation, "Crystal Harbour". I am very pleased to report that the new version is very similar indeed to the old "Sunshine". The decor is the same, the layout is the same, the lack of queues is the same, and most importantly the food is very much as it was. Where Sunshine excelled --- the unique barbeque pork buns, the seaweed plate, the ice cream dumplings --- so does Crystal Harbour. Crystal Harbour's promotional material claims there's a new chef, but clearly (and fortunately) a lot has been preserved. One change is that there were a lot more people there today than I ever saw at Sunshine. It could be the novelty factor, but I hope Crystal Harbour does well. I certainly plan to contribute as often as I can!



Thursday, 26 August 2010

-moz-element Landed

Markus Stange picked up the work I did in 2008 on the "-moz-element()" CSS extension (which was later extended by Ryo Kawaguchi), made some major improvements and got it reviewed and landed. Check out his blog post. -moz-element lets you render the contents of an element as the background image for any other element. This is a very powerful tool that can be used in very interesting ways; check out Markus' amazing demos. This feature is on trunk now and will be in Firefox 4. We will also propose this to the CSS WG.
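
As a quick illustration, here's a minimal sketch of the idea from script (the element ids "source" and "mirror" are made up for this example):

// Minimal sketch: render the live contents of #source as the background
// of #mirror. The element ids are made up for illustration.
var mirror = document.getElementById("mirror");
mirror.style.background = "-moz-element(#source)";
// From now on, whatever renders inside #source also appears as #mirror's
// background image, and stays up to date as #source changes.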

Note for browser UI and extension authors: eventually -moz-element will be the preferred way to render "live" copies of Web page contents (instead of using MozAfterPaint/drawWindow). Right now, -moz-element can be used to render the contents of a <browser> element elsewhere, although it's less well-tested and less tweakable for performance. Post-FF4, we can tie -moz-element into the layers framework so that in many cases --- such as tab thumbnails --- rendering -moz-element just recomposites a layer subtree, fully GPU-accelerated.



Tuesday, 24 August 2010

Vinge

The first Vernor Vinge books I read were A Fire Upon The Deep and A Deepness In The Sky ... not surprising, since they're the most famous, and also the best. I was a bit disappointed by Rainbows End. Just recently I read two of his earlier novels (written in the 80s), The Peace War and Marooned In Realtime --- the latter being a sequel to the former --- and I think they're excellent. Perhaps not as good as Fire and Deepness, but I found Marooned in particular more intriguing and even quite moving.

Warning: if you haven't read these books, go out now and read them before you come back to the rest of this post, because spoilers are ahead...

Marooned and Rainbows End present two rather different visions of human development, and Rainbows End is far closer to my own thinking even though I like the book less. I'm ignoring the bobbles here --- they're a wonderful plot device, but I think the real themes of Marooned are the technological Singularity and a yearning for anarcho-capitalism. At heart I think Marooned is fundamentally an optimistic view of human progress to the Singularity. Rainbows End, on the other hand, seems to me to be a much darker view, a view of humanity lurching from one potential planet-killing catastrophe to the next at decreasing intervals, with no Singularity-salvation in sight. Now, Vinge may make a liar of me yet, since he's said he'd like to do a sequel to Rainbows End, but based on what he's written so far I guess in the twenty years between the books he's become more pessimistic. Although curiously, he may have become a lot more optimistic about governments --- in Marooned he hates governments, in Rainbows End we see a benign totalitarian state.

Personally I think Rainbows End is too optimistic :-). I wrote about this a while ago and I stand by it: the technology that could eventually lead to some kind of Singularity (very eventually; this stuff is way, way harder than most techno-futurists imagine) leads much sooner to either the total elimination of cognitive freedom or the destruction of all intelligent life. It's just not realistic to imagine we can walk the ever-thinning razor's edge for long. Man is fallen, but he still has a long way to fall. God is going to have to save us from ourselves, again.

Time to stop. Excessive futurist navel-gazing is definitely a sin :-).



Saturday, 21 August 2010

CSS Units Changes Landed

The CSS units changes that I blogged about in January have landed and will be in the next Firefox milestone. With these changes, 1in = 96px always. Likewise 3pt = 4px, 25.4mm = 96px, etc.
This matches the behaviour of Internet Explorer, Safari and Chrome.

By default, when printing, 1in is rendered as one physical inch. For other output media, all these units are scaled in a medium-dependent and platform-dependent way by default. One goal of this scaling is to give results consistent with user expectations and other applications on the system. For example, standard form controls such as checkboxes should look the same in Web pages as in other applications, by default. Another goal is to choose default scaling so that a document designed to print well on normal-sized paper will be readable on the output device, e.g., a phone. So, the advice for authors using CSS physical units is to set lengths so the document looks good when printed without scaling; the browser will then scale those lengths to display the document suitably on different kinds of screens.

There are some rare cases where it makes sense to include true physical measurements in a Web document --- for example, "life size" diagrams, or elements in a touch interface. For these cases we have introduced a new experimental unit, "mozmm". For media such as screens that can be touched, 1mozmm is rendered as one physical millimetre (or as close as we can get based on what we know about the medium). For other media, such as contact lens displays, brain-implanted electrodes, or lasers projecting into the sky, we reserve the right to treat 'mozmm' similarly to 'mm'. Authors should only use mozmm for elements which really need the same physical size on, for example, a 4" phone screen and a 24" monitor. This is hardly ever going to be what you want.
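
To make the distinction concrete, here's a small sketch from script (the element ids and the particular lengths are made up for illustration):

// Sketch: the regular physical units are now fixed ratios of CSS pixels
// (1in = 96px, 3pt = 4px), scaled per-medium by the browser, while mozmm
// aims for true physical size where the medium supports it (e.g. touch screens).
// The element ids below are made up for illustration.
document.getElementById("printedDiagram").style.width = "4in";  // always 384 CSS px
document.getElementById("touchButton").style.height = "7mozmm"; // ~7 real millimetres on a touch screen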

Internally, our DPI code has been overhauled. Everything is now controlled by two per-window parameters: the number of device pixels per inch (returned by nsIWidget::GetDPI), and the default scale (returned by nsIWidget::GetDefaultScale). The 'layout.css.dpi' about:config pref overrides the result of nsIWidget::GetDPI, if present. nsIWidget::GetDPI only affects the interpretation of mozmm (unlike before, where on some platforms, some DPI values would trigger automatic scaling). We set CSS 1px to one device pixel times GetDefaultScale times the current zoom factor. Currently GetDefaultScale always returns 1.0 on all platforms, although on Mac we should set it to the system "default UI scale" (and change some other code to compensate), and on Windows we should set it based on the "system font DPI", which is essentially a user preference that controls scaling of all applications on the system. It's important that the default scale be based on a system-wide setting; that will keep Firefox consistent with the rest of the system, and ensure that the user doesn't get a surprise.
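
In rough pseudocode --- not the actual Gecko implementation, just the arithmetic implied by the description above:

// Rough pseudocode of the relationships described above; not real Gecko code.
// One CSS px covers this many device pixels:
function devicePixelsPerCSSPixel(defaultScale, zoomFactor) {
  return defaultScale * zoomFactor;
}
// mozmm is derived from the widget's reported DPI (overridable via layout.css.dpi):
function cssPixelsPerMozmm(widgetDPI, defaultScale, zoomFactor) {
  var devicePixelsPerMillimetre = widgetDPI / 25.4;
  return devicePixelsPerMillimetre / devicePixelsPerCSSPixel(defaultScale, zoomFactor);
}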



Tuesday, 17 August 2010

The mozRequestAnimationFrame Frame Rate Limit

A few people have been playing with mozRequestAnimationFrame and noticed that they can't get more than 50 frames per second. This is intentional, and it's a good feature.

On modern systems an application usually cannot get more than 50-60 frames per second onto the screen. There are multiple reasons for this. Some of them are hardware limitations: CRTs have a fixed refresh rate, and LCDs are also limited in the rate at which they can update the screen due to bandwidth limitations in the DVI connector and other reasons. Another big reason is that modern operating systems tend to use "compositing window managers" which redraw the entire desktop at a fixed rate. So even if an application updates its window 100 times a second, the user won't be able to see more than about half of those updates. (Some applications on some platforms, typically games, can go full-screen, bypass the window manager and get updates onto the screen as fast as the hardware allows, but obviously desktop browsers aren't usually going to do that.)

So, firing a MozBeforePaint event more than about 50 times a second is going to achieve nothing other than wasting CPU (i.e., power). So we don't. Apart from saving power, reducing animation CPU usage helps overall performance because we can use the free time to perform garbage collection or other house-cleaning tasks, reducing the incidence or length of frame skips.

We need to do some followup work to make sure that on each platform we use the optimal rate; modern platforms have APIs to tell us the window manager's composition rate. But 50Hz is almost always pretty close.

This all means that measuring FPS is a bad way to measure performance, once you're up to 50 or more. At that point you need to increase the difficulty of your workload.
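
As a rough illustration, you can watch the cap in action by counting MozBeforePaint frames over a second at a time; with the limit described above, the count should come out around 50 no matter how trivial the per-frame work is:

// Rough sketch: count MozBeforePaint frames per second. With the cap
// described above this should report roughly 50, even though the
// per-frame work here is trivial.
var frames = 0;
var startTime = Date.now();
function countFrame() {
  frames++;
  if (Date.now() - startTime >= 1000) {
    if (window.console) {
      console.log("approximate frames per second: " + frames);
    }
    frames = 0;
    startTime = Date.now();
  }
  window.mozRequestAnimationFrame();
}
window.addEventListener("MozBeforePaint", countFrame, false);
window.mozRequestAnimationFrame();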



Monday, 16 August 2010

Auckland Food

For cheap tasty food, BBQ King on Wyndham Street West is hard to beat. Today I went there with family and based on past experience, we ordered only two dishes for four people, "BBQ pork and crispy pork stirred noodle" for $13.50 and "seafood fried rice" for $12. After eating the free soup and then dividing the first dish among four of us, we were pretty much satisfied, so boxed the second dish and brought it home. Crazy!

Tragically, my favourite Chinese restaurant in the city --- Sunshine --- closed several months ago. I can understand when a bad restaurant goes under, but not a good one; the imperfections of a market economy! But I have heard rumours that a new Chinese restaurant has taken its place. This needs investigation.

Daikoku Teppanyaki on Quay St is now open for lunch seven days a week. The $13.50 lunch special is still great value.

Around Newmarket: Happy Valley, the Chinese cafe, has closed down. That's sad, since they'd been serving pretty good food since the early 90s.

The Organic Pizza Co.'s $10 lunch specials are pretty good. Their pizzas are about as good as Archie's, but you get a free drink and the place is far less crowded.

Selera, Night Spice, Crazy Noodle and Dee Jai are frequent targets of visits from the Mozilla office, as is the food court under the Rialto carpark. The actual Rialto food court next to the cinema seems to be declining --- two outlets have closed recently --- but they still have the only Subway in the area. We'd go to Hansan more but we're too lazy to walk there except for special occasions.

Pearl Garden is still the best yum cha in Newmarket, followed by Sun World and Sunnytown. There's a new place, whose name escapes me, over near Davis Crescent; not bad, but not great. I need to try it again.



Firefox Sync

I just tried using Firefox Sync to synchronize data between my main Firefox profile and my newly-installed home computer's Firefox profile. It was easy to set up and worked perfectly --- and it was fast too! I have to confess my expectations were not high for a feature that just got turned on for beta 4 :-). Well done everyone! This is definitely going to make my life a little easier.



Sunday, 15 August 2010

mozRequestAnimationFrame

In Firefox 4 we've added support for two major standards for declarative animation --- SVG Animation (aka SMIL) and CSS Transitions. However, I also feel strongly that the Web needs better support for JS-based animations. No matter how rich we make declarative animations, sometimes you'll still need to write JS code to compute ("sample") the state of each animation frame. Furthermore there's a lot of JS animation code already on the Web, and it would be nice to improve its performance and smoothness without requiring authors to rewrite it into a declarative form.

Obviously you can implement animations in JS today using setTimeout/setInterval to trigger animation samples and calling Date.now() to track animation progress. There are two big problems with that approach. The biggest problem is that there is no "right" timeout value to use. Ideally, the animation would be sampled exactly as often as the browser is able to repaint the screen, up to some maximum limit (e.g., the screen refresh rate). But the author has no idea what that frame rate is going to be, and of course it can even vary from moment to moment. Under some conditions (e.g. the animation is not visible), the animation should stop sampling altogether. A secondary problem is that when there are multiple animations running --- some in JS, and some declarative animations --- it's hard to keep them synchronized. For example you'd like a script to be able to start a CSS transition and a JS animation with the same duration and have agreement on the exact moment in time when the animations are deemed to have started. At each paint you'd also like to have them sampled using the same "current time".
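
For reference, that traditional pattern typically looks something like the sketch below (the 10ms interval and the element id "d" are arbitrary choices for illustration):

// The traditional approach: guess a timeout and track time with Date.now().
// The 10ms interval and the element id "d" are arbitrary choices.
var d = document.getElementById("d");
var start = Date.now();
var timer = setInterval(function() {
  var progress = Date.now() - start;
  d.style.left = Math.min(progress/10, 200) + "px";
  if (progress >= 2000) {
    clearInterval(timer);
  }
}, 10);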

These problems have come up from time to time on mailing lists, for example on public-webapps. A while ago I worked out an API proposal, and Boris Zbarsky just implemented it; it's in Firefox 4 beta 4. Here's the API --- it's really simple:


  • window.mozRequestAnimationFrame(): Signals that an animation is in progress, requests that the browser schedule a repaint of the window for the next animation frame, and requests that a MozBeforePaint event be fired before that repaint.
  • The browser fires a MozBeforePaint event at the window before we repaint it. The timeStamp attribute of the event is the time, in milliseconds since the epoch, deemed to be the "current time" for all animations for this repaint.
  • There is also a window.mozAnimationStartTime attribute, likewise in milliseconds since the epoch. When a script starts an animation, this attribute indicates when that animation should be deemed to have started. This is different from Date.now() because we ensure that between any two repaints of the window, the value of window.mozAnimationStartTime is constant, so all animations started during the same frame get the same start time. CSS transitions and SMIL animations triggered during that interval also use that start time. (In beta 4 there's a bug that means we don't quite achieve that, but we'll fix it.)

That's it! Here's an example; the relevant sample code is below:

var d = document.getElementById("d"); // assuming the demo's animated element has id "d"
var start = window.mozAnimationStartTime;
function step(event) {
  // event.timeStamp is the shared "current time" for this repaint.
  var progress = event.timeStamp - start;
  d.style.left = Math.min(progress/10, 200) + "px";
  if (progress < 2000) {
    // Still animating: ask for another frame.
    window.mozRequestAnimationFrame();
  } else {
    window.removeEventListener("MozBeforePaint", step, false);
  }
}
window.addEventListener("MozBeforePaint", step, false);
window.mozRequestAnimationFrame();

It's not very different from the usual setTimeout/Date.now() implementation. We use window.mozAnimationStartTime and event.timeStamp instead of calling Date.now(). We call window.mozRequestAnimationFrame() instead of setTimeout(). Converting existing code should usually be easy. You could even abstract over the differences with a wrapper that calls setTimeout/Date.now if mozAnimationStartTime/mozRequestAnimationFrame are not available. Of course, we want this to become a standard so eventually such wrappers will not be necessary!
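
A minimal sketch of such a wrapper might look like this (the helper names requestFrame and animationStartNow are made up, and the 20ms fallback interval is just a guess at a typical refresh period):

// A minimal compatibility wrapper sketch. The helper names and the 20ms
// fallback interval are illustrative choices, not part of any API.
function animationStartNow() {
  // Start time for an animation begun right now.
  return window.mozAnimationStartTime !== undefined ?
         window.mozAnimationStartTime : Date.now();
}
function requestFrame(callback) {
  if (window.mozRequestAnimationFrame) {
    var listener = function(event) {
      window.removeEventListener("MozBeforePaint", listener, false);
      callback(event.timeStamp);
    };
    window.addEventListener("MozBeforePaint", listener, false);
    window.mozRequestAnimationFrame();
  } else {
    // Fallback: guess a timeout close to a typical display refresh interval.
    setTimeout(function() { callback(Date.now()); }, 20);
  }
}

An animation loop would then call requestFrame(step) again from inside step while it still has work to do, using the timestamp passed to the callback in place of event.timeStamp.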

Using this API has a few advantages, even in this simple case. The author doesn't have to guess a timeout value. If the browser is overloaded the animation will degrade gracefully instead of uselessly running the step script more times than necessary. If the page is in a hidden tab, we'll be able to throttle the frame rate down to a very low value (e.g. one frame per second), saving CPU load. (This feature has not landed yet though.)

One important feature of this API is that mozRequestAnimationFrame is "one-shot". You have to call it again from your event handler if your animation is still running. An alternative would be to have a "beginAnimation"/"endAnimation" API, but that seems more complex and slightly more likely to leave animations running forever (wasting CPU time) in error situations.

This API is compatible with browser implementations that offload some declarative animations to a dedicated "compositing thread" so they can be animated even while the main thread is blocked. (Safari does this, and we're building something like it too.) If the main thread is blocked on a single event for a long time (e.g. if a MozBeforePaint handler takes a very long time to run) it's obviously impossible for JS animations to stay in sync with animations offloaded to a compositing thread. But if the main thread stays responsive, so MozBeforePaint events can be dispatched and serviced between each compositing step performed by the compositing thread, I think we can keep JS animations in sync with the offloaded animations. We need to carefully choose the animation timestamps returned by mozAnimationStartTime and event.timeStamp and dispatch MozBeforePaint events "early enough".

mozRequestAnimationFrame is an experimental API. We do not guarantee to support it forever, and I wouldn't evangelize sites to depend on it. We've implemented it so that people can experiment with it and we can collect feedback. At the same time we'll propose it as a standard (minus the moz prefix, obviously), and author feedback on our implementation will help us make a better standard.



Saturday, 14 August 2010

Google vs Oracle

I don't know much about the Google/Oracle dispute so I'll limit my remarks, but here they are:

I don't understand why Oracle is doing this. They may wish Google was using Java ME, but I would have thought having more developers using Java was good for Java overall. Probably there are important background discussions we are not privy to.

Dalvik is open source, but it's very much a Google project that Google happens to release under an open source license, rather than a community project. So I think of this as two big companies scrapping rather than Oracle launching an attack on the open source community.

However, this extends a disturbing trend of large mainstream companies using software patents to attack competitors, especially prominent in the mobile space. Observers of software patents, including myself --- and even Bill Gates in his infamous 1991 memo --- have always seen that volumes of easily obtained software patents on straightforward ideas could be a powerful weapon for crushing competition; software development is so inventive that most programmers daily write code that someone, somewhere, has patented. Fortunately, for a long time, serious industry players --- other than "patent trolls" --- declined to use that power. But now that grace has departed, and I fear patent armageddon is upon us. In the end the open source community is likely to be particularly hard-hit, since it's easy to detect infringement, and open source communities have limited funds for defense. People have argued that open source communities are less of a target because they have less money to extract, but the most dangerous suits are about shutting down competition, not about extracting licensing fees --- like this Google/Oracle suit, apparently.

Overall I'm extremely gloomy about the situation. A world where each programmer has to be shepherded by a dozen lawyers through patent minefields is not one I will enjoy working in, and it will be disastrous for the progress of software. I call on employees of Oracle, Apple and other litigating companies to protest to their management in the strongest possible terms, including resignation. Google and Mozilla are hiring :-).

It's little consolation that some enlightened countries --- like New Zealand, apparently --- will hopefully remain free of software patents. A software company --- or an open source project --- that can't do business or get distribution in the USA or many other countries (including most of Europe, given the 'method patent' regime) is somewhat crippled.



Thursday, 12 August 2010

Dell Fail

Separate from the laptop discussion, I just bought a new home machine. I just wanted a generic PC, high-ish end for longevity and in case I (or someone else) wants to hack on it. This machine will definitely run Linux, but I'm going to keep the Windows 7 install in a partition in case we ever need it. So I'm going through Dell's Windows 7 first-run experience, and it's not great.

The initial Microsoft setup screens are pretty good, although it all seems to take longer than it should. Then you get to a Dell screen asking you to opt into some Dell stuff, which for some unfathomable reason is rendered in the Windows 95/2000 "Classic" theme, gray box scrollbars and all. It's ugly, jarring and totally mystifying.

Soon you're offered the chance to burn system recovery DVDs. I don't understand why they ask users to obtain blank DVDs and burn them instead of just shipping those DVDs; shipping them with every system would add a few dollars to the system cost, but probably save more in support calls and give a much better user experience.

The application that burns the recovery DVDs has one crazy screen that shows you some information and asks you to click "Next". But there is no "Next" button visible. But there's a vertical scrollbar! Scrolling down, you can get to a "Next" button. Of course, the window is not resizable, and it contains lots of blank vertical space so there is no possible reason why the "Next" button should not be visible.

Microsoft's initial Windows network setup asks you whether you're on a "Home", "Work" or "Public" network, which I bet is often hard for people to answer. I wonder how Windows uses that information. But right after choosing that option, the (preinstalled) McAfee antivirus software pops up an ugly little box in which you have to choose those same options again.

Of course I still have to analyze the system for the paid-to-be-there crapware (including McAfee) and uninstall most of it.

I'm genuinely curious about what motivates system vendors like Dell to sully what could have been a better experience. It's not apathy, since they obviously paid people to develop many of these "extras". Whatever it is, it's no surprise platform vendors want to sell directly to the customer instead of working through partners like Dell.



Wednesday, 11 August 2010

Choosing Sides

My Macbook Pro is 3.5 years old and still works pretty well, apart from the disk being full, frequent spontaneous wake-ups in my backpack which heat everything up alarmingly, and a flaky wireless connection. Plus the turn of Moore's Law means I can now get a quad-core laptop with a lot more RAM and disk. So it's finally time to upgrade. I'm opting for non-Apple hardware; Apple have gone beyond the pale in pursuing patent warfare and platform lockdown, and I can no longer live with buying their products. A Lenovo W510 is probably in my future.

Now I'm faced with a somewhat difficult decision: Linux vs Windows. There are good reasons on both sides. The best argument for developing on Windows is that it's good to have developers on the platform most of our users use. VMWare record and replay is also very attractive. The Mozilla build and tools situation on Windows used to be terrible --- very slow builds, horrible profiling --- but it's gotten a lot better thanks to pymake and xperf. But Microsoft, while not as dangerous as Apple at the moment, still aspires to be, and I won't embrace them gladly.

Linux, of course, has the virtue of freedom, and a chance to regain Miguel's love. On Linux you get valgrind. But not many of our users use Linux, and VMWare's record and replay doesn't really work there. I'd also have to use X11, which I loathe with a passion.

Tough call. Another thing to consider is that whichever way I go, I'll end up using the other in a VM quite often. Dual-booting is also an option.