Wednesday, 28 July 2010
I seem to have picked up some sort of virus. I'm mostly offline while I wait for this to clear up. Please forgive any delays...
Thursday, 22 July 2010
Coding Style As A Failure Of Language Design
Variance in coding style is a huge problem. Reading code where the style varies all over the place is painful. Moving code from one place to another and having to restyle it is awful. Constantly adjusting the style in which you're writing code to conform to the local style of the project, module, file, function or line you're modifying is wearying.
Therefore projects adopt style rules to encourage and enforce a uniform style for the project's code. However, these rules still have to be learned, and adherence to them checked and corrected, usually by humans. This takes a lot of time and effort, and imperfect enforcement means code style consistency gradually decays over time. And even if it were not so, code moving between projects looks out of place because style rules are rarely identical between projects --- unless you reformat it all, in which case you damage the relationship with the original code.
I see this as a failure of language design. Languages already make rules about syntax that are somewhat arbitrary. When projects impose additional syntax restrictions, that indicates the language did not constrain the syntax enough; if the syntax were sufficiently constrained, projects would not feel the need to do so. Syntax would be uniform within and across projects, and developers would not need to learn multiple variants of the same language. More syntactic restrictions would be checked and enforced by the compiler, reducing the need for human (or even tool-assisted) review. IDE assistance could be more precise.
Two major counter-arguments arise. People will argue that coding style is a personal preference and therefore diversity should be allowed. This is true if you only participate in particularly small projects, but if you work in a large project then --- unless you are exceptionally fortunate --- you will have to deal with a coding style that is not your preference, no matter what. (Maciej Stachowiak once said that willingness to subjugate one's personal preferences to a project's preferences is a useful barometer of character, and I agree!)
A more interesting counter-argument is that many coding style rules aren't formal enough to be machine-checkable, and might be very difficult to formalize at all. This is true; for example, line-breaking rules or variable naming rules might be very difficult to formalize. So I relax my thesis to claim that at least those rules which can be formalized should be baked into the language.
(Figuring out exactly which rules can be formalized, and exploring alternative syntax designs that maximize automatic style checkability while still being nice syntax, sound like fun research! Programming language syntax is one of those areas that I think has been greatly under-researched, especially from the HCI point of view.)
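To make "machine-checkable" concrete, here is a toy sketch of one formalizable rule — function names must be snake_case — written as a standalone checker in Python. The rule and all names here are invented for illustration; the point is only that a rule of this shape could just as well live in a compiler's parser as in an external tool.

```python
import ast
import re

# The (hypothetical) rule: function names must be snake_case.
SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def check_function_names(source):
    """Return (line, name) pairs for function definitions whose
    names violate the snake_case naming rule."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and not SNAKE_CASE.match(node.name):
            violations.append((node.lineno, node.name))
    return violations

code = "def renderFrame():\n    pass\n\ndef paint_layer():\n    pass\n"
print(check_function_names(code))  # [(1, 'renderFrame')]
```

A compiler that applied this check at parse time would make the violating program simply not compile — no style review needed.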
Tuesday, 20 July 2010
Working on plugin bugs sucks, but sometimes it's fun to watch a video clip over and over again.
I'd better say something about the Mozilla summit at Whistler before it recedes into the past...
It was fabulous, of course. The setting, the food, the people, the talks, the work, the demos, the mission, all really good.
Whistler was lovely. It had been cold up to the week we arrived, so although it was 30C in the village at times later in the week, 20 minutes in a chairlift brought you up into the snow. And on top of Whistler Mountain the snow banks were six metres deep in places.
I wisely arrived a couple of days early and on the Monday I spent almost the entire day hiking alone up to Singing Pass. Unfortunately I didn't reach Singing Pass --- around the snow line the track was completely blocked by fallen trees (see below). I climbed over a couple of dozen of them but with no end in sight, decided that was a losing proposition and turned back. Nevertheless it was a wonderful time and some solitude was excellent fortification for the subsequent social overload.
I'm not really a party person and working large groups of people is a learned skill for me. It's helpful when I have goals to accomplish and I know a lot of the people. In some ways it wasn't all that different from networking at an academic conference, except we're all on the same side.
The main downside of the summit was that I was completely exhausted during most of it because I was simply too excited to sleep. Talking to people, going to sessions, playing games, and hacking code are irresistible.
Next time we should plan some "work week" days immediately after the summit. This would give us more time to talk in depth --- there were so many people at the summit I wanted to talk to, I couldn't give lots of time to more than a few. It would give us more time to code. We should have done it this time but I wasn't on the ball.
As usual I met many contributors for the first time. That's one of the best things about the summits. A few I really wanted to meet didn't make it --- you know who you are!
I managed to get together with some Mozilla Christians for some prayer time. That was very encouraging.
I saw bears. Woohoo!
Saturday, 17 July 2010
Retained Layers has landed and seems to have stuck.
In a previous post I talked about our layers framework. Up until now we've constructed a new layer tree on every paint. With retained layers, we update our layer tree incrementally as content changes in the window. More importantly, we are able to cache the contents of layers. So for example if you have an element with partial opacity, we can cache the rendering of that element in a layer and every time you paint (perhaps with a different opacity each time), we can paint the cached contents without rerendering the element. This provides noticeable speedups for some kinds of fade effects. When we use layers to render CSS transforms, we'll get speedups there too.
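The caching idea can be sketched as a toy model — this is not Gecko's actual layer API (the class and method names below are invented), just the shape of the win: the expensive rendering happens once, and each subsequent paint only recomposites the cached pixels with the current opacity.

```python
class CachedLayer:
    """Toy model of a retained layer: expensive content rendering is
    cached, so per-paint work is just cheap compositing."""

    def __init__(self, render_content):
        self._render_content = render_content  # expensive rendering callback
        self._cache = None
        self.render_count = 0

    def invalidate(self):
        # Content changed; the cache must be rebuilt on the next paint.
        self._cache = None

    def composite(self, opacity):
        if self._cache is None:
            self._cache = self._render_content()
            self.render_count += 1
        # Compositing step: combine cached pixels with the current opacity.
        return (self._cache, opacity)

layer = CachedLayer(lambda: "rendered-pixels")
for alpha in (1.0, 0.8, 0.6):   # a fade: opacity changes every frame
    layer.composite(alpha)
print(layer.render_count)  # 1 -- the content was rendered only once
```

Without the cache, a three-frame fade would render the element three times; with it, only an invalidation (content actually changing) forces a re-render.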
One side effect is that we were able to totally reimplement scrolling. Our current scrolling code relies on shifting pixels around in the window for speed, and that often doesn't work well, for example on pages where scrolling content overlays stationary content or vice versa. It's also prone to visible tearing on some platforms because we can't scroll and repaint in a single atomic paint operation. But now, our retained layer tree effectively contains the entire window contents pre-rendered. We can scroll by just adjusting the offsets of some layers and recompositing the layer tree. (Well, almost ... it's slightly more complicated.) This is a relatively simple approach. It eliminates tearing. It lets us aggressively accelerate scrolling even on nasty pages with complex contents, because we're able to separate the moving content from the non-moving content into different layers, and scrolling becomes a matter of simply repainting the strip of moving content that has scrolled into view and then blending the layers together. We're seeing significant improvements in scrolling many kinds of pages. Soon I hope to blog again with more about how this works, what it can do and what the current limitations are.
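The "repaint only the strip scrolled into view" idea can be sketched as follows. This is a deliberately simplified, hypothetical calculation (one moving layer, vertical scrolling, whole-pixel offsets); the real layer-based machinery also handles multiple layers, invalidation, and the cases the "it's slightly more complicated" aside alludes to.

```python
def repaint_region(viewport_top, viewport_height, scroll_delta):
    """After scrolling a layer by scroll_delta pixels, return the
    (top, height) strip of content newly scrolled into view.  The rest
    of the viewport is recomposited from the retained layer buffer by
    just adjusting the layer's offset."""
    new_top = viewport_top + scroll_delta
    overlap = viewport_height - abs(scroll_delta)
    if overlap <= 0:
        # Scrolled a full viewport or more: nothing is reusable.
        return (new_top, viewport_height)
    if scroll_delta > 0:
        # Scrolling down: the new strip appears below the old viewport.
        return (viewport_top + viewport_height, scroll_delta)
    # Scrolling up: the new strip appears above the old viewport.
    return (new_top, -scroll_delta)

print(repaint_region(0, 600, 50))  # (600, 50): repaint 50px, not 600px
```

Repainting a 50-pixel strip instead of a 600-pixel viewport is where the scrolling speedup comes from; the compositing of the already-rendered layers is cheap, especially on a GPU.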
Immediate improvements are nice, but the most important benefit of retained layers is that it lays down infrastructure we will be exploiting in all kinds of ways. Our D3D and GL layer implementations benefit from reducing browser rendering and caching more rendered content in layer buffers, since compositing buffers is very cheap with GPUs. With those backends, because scrolling is fully layer-based, it will be accelerated by the GPU! Chris Jones is working on enabling layer composition to be in a dedicated process, at which point we'll be able to scroll without synchronizing with the Web content process --- in particular we'll be able to maintain a constant frame rate of smooth scrolling no matter what the Web content is doing, i.e. super-smooth. This will especially benefit Fennec. Fennec will also benefit because its current tile-caching implementation can be replaced with something layer-based; our layer-based scrolling permits the layer manager to cache prerendered content that's not (yet) visible.
Before we ship the next Firefox release there are a few regressions to work on and some performance and quality knobs to tweak. But I'm feeling quite relieved, since this was one of the long poles for that release and the one I've been most on the hook for.
Wednesday, 7 July 2010
Mozilla And Software Patents In New Zealand
Mozilla produces the Firefox Web browser, used by more than three hundred million people around the world. Firefox is open source and is the result of a collaboration of a large group of paid developers and volunteers. In fact, Mozilla funds a team of paid developers in New Zealand working on core Firefox code; some key innovations in Firefox, such as HTML5 video, are the work of our New Zealand team. The work we do is some of the most highly skilled and high-impact software development to be found anywhere in the world. I write about software patents in my personal capacity as one of Mozilla's senior software developers, and manager of our Auckland-based development team and also our worldwide layout engine team. I also formerly worked for three years at the IBM T.J. Watson Research Center, where I participated in the filing of several software patents based on my research.
The development and distribution of Firefox, like other open source software, is threatened by the rise of software patents, because the patent system was not designed for our environment. In software, especially cutting-edge software like Firefox, every developer is an inventor; coming up with new ways of doing things is not exceptional, it's what our developers do every single day. Invention created at such a rate does not deserve or benefit from years of monopoly protection. Indeed, it will be crippled if we are forced to play the patent system "to the hilt", to acquire vast numbers of our own software patents and to navigate the minefield of other people's patents.
The patent system was designed to promote invention and especially the disclosure of "trade secrets" so that others can build on them. Research casts doubt on whether it has succeeded at those goals (see an example), but even if it did, in software development --- especially open-source software development --- it is clear that no patent incentive is needed to encourage innovation and publication of our work. Copyright has long been adequate protection for both closed-source and open-source software. (Open-source software permits copying, but relies on copyright protection to enforce terms and conditions on copying.) Indeed, the patent system restricts the dissemination of our work, because the best way to distribute knowledge about software is in the form of code, and that can make us liable for patent infringement.
Software development is uniquely able to have huge impact on the world because copies can be made available to users for free. If we had charged users for each copy of Firefox there is no doubt we would not be nearly as successful as we have been, either at changing the world or even at raising money --- Mozilla has substantial revenues from "tie-ins" such as search-related advertising. The patent system threatens this business model, because most patent licensing arrangements require the licensee to pay a per-unit fee. This is not necessarily a problem for traditional manufacturing, where there is a per-unit manufacturing cost that must be recouped anyway, but it completely rules out a large class of software business models that have been very successful.
As well as developing software, Mozilla does a lot of work to improve Web standards, and here too we have seen damage from the rise of software patents. We want to ensure that people can communicate over the Internet, especially on the Web, without being forced to "pay to play". We especially don't want any organisation to be able to control what people can do and say on the Web via their patent portfolio. We're already having problems with Web video because many popular video encoding techniques are patented, so the production, distribution and playback of Web video often requires patent licensing from organisations such as the MPEG-LA. This has slowed down the standardization and improvement of Web video and forces the use of effectively non-free software in many cases.
In summary, the patent system is not suited to software development. Software development, especially open-source software development, is harmed by patents and does not need patent protection. Development of the Internet is also hampered by patents. New Zealand stands to benefit directly and indirectly from the rise of the Internet and collaborative software development, and should protect these benefits by making a clear statement rejecting the patentability of "inventions" implemented in software.