Eyes Above The Waves

Robert O'Callahan. Christian. Repatriate Kiwi. Hacker.

Friday 22 June 2012

Computer Science In Beijing

The actual point of me being in Beijing was to give a talk at ISMM about memory management in Web browsers, to attend the PLDI and ECOOP conferences and some of the associated workshops, and to talk to CS researchers about all the things I'm interested in.

My ISMM talk was on Friday morning. I felt my delivery was a bit rough because it was all new content for me, but people seemed enthusiastic. I talked about some of the unusual constraints of browser development, such as having to optimize for perceived performance as well as real performance (unusual for academic researchers, that is). I gave an overview of how we manage memory, with particular emphasis on the cycle collector and some of the optimizations we're doing with it. And I talked quite a bit about MemShrink. I tried to give a lot of credit to Nick Nethercote and the others who did the actual work. I've uploaded the slides.

Many ISMM people were very interested in our use of reference counting. There's a paper by Rifat Shahriyar, Stephen Blackburn and Daniel Frampton at this year's ISMM on optimizing reference counting; its motif is that tracing GC has totally defeated reference counting and no-one uses reference counting in "high performance systems", but they will get reference counting "back in the ring". It's a very good paper, but the motif is a bit misleading: reference counting is in fact heavily used in large C++ applications, especially browsers, which we like to think of as high-performance systems! The disconnect is that academic research on reference counting has focused almost entirely on memory management for Java virtual machines and similar systems, so what they meant by "high performance system" was "high performance safe-language runtime".

Furthermore, I realized that there's a critical difference between reference counting in most C++ apps and reference counting for JVM heap management. In the latter case it's generally assumed you don't know which objects are shared across threads, so every refcount update must be an atomic operation, which is incredibly expensive. But in Firefox we statically know which classes can be shared across threads, and in fact we know that most objects cannot be, so most of our refcount operations don't need to be atomic and are therefore much cheaper. Thus most of the research results on reference counting do not directly apply to us.

However, it would probably still be a good idea for us to mine the literature for good ideas to try --- Richard Jones apparently has a good survey book out, and we should look at Erez Petrank and Steve Blackburn's papers. (Let's not forget David Bacon, whose work inspired our cycle collector in the first place!)
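
To make the atomic-versus-non-atomic point concrete, here's a minimal sketch of the two flavours of refcounting. The class names are made up for illustration; this is not Firefox's actual nsISupports machinery.

    #include <atomic>
    #include <cstdint>

    // For objects we statically know never leave one thread: a plain
    // integer is all we need, so AddRef/Release are very cheap.
    class NonAtomicRefCounted {
     public:
      void AddRef() { ++mRefCnt; }
      void Release() {
        if (--mRefCnt == 0) {
          delete this;
        }
      }
     protected:
      virtual ~NonAtomicRefCounted() = default;
     private:
      uint32_t mRefCnt = 0;
    };

    // For classes that may be shared across threads: every refcount
    // update must be atomic. This is the only case the JVM-oriented
    // literature considers, because there any object might be shared.
    class ThreadSafeRefCounted {
     public:
      void AddRef() { mRefCnt.fetch_add(1, std::memory_order_relaxed); }
      void Release() {
        // acq_rel ordering makes all writes to the object visible to
        // whichever thread ends up running the destructor.
        if (mRefCnt.fetch_sub(1, std::memory_order_acq_rel) == 1) {
          delete this;
        }
      }
     protected:
      virtual ~ThreadSafeRefCounted() = default;
     private:
      std::atomic<uint32_t> mRefCnt{0};
    };

The relaxed increment and acquire-release decrement are the standard pattern for atomic refcounts; the non-atomic version is just an ordinary integer operation, which is why knowing statically that an object is single-threaded buys so much.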

There's a lot of academic research on JavaScript. There was very little about compilation at these conferences, other than Brian's PLDI paper on type inference. But there are a lot of people doing static and dynamic analysis of JavaScript, making various tweaks to the language, exploring semantics, etc. That's good.

There's a real explosion in record-and-replay research. That's encouraging, although it's also a bit annoying because very few people are working on the precise problem I care about. A big theme in research these days is "PARALLELIZE ALL THE THINGS", so most of the record-and-replay research is about finding ways to record and replay shared-memory parallel programs, which Firefox isn't really yet. There are some cool ideas there but what we really need is a low-overhead easy-to-deploy record-and-replay system that doesn't depend on anything difficult such as code instrumentation or static analysis or heroic post-facto constraint solving, and we're willing to stick to a single core to get it ... but it's not clear who's working on this, except us with rr.
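
To illustrate why sticking to a single core makes the problem so much more tractable, here's a toy sketch; this is my own illustration, not how rr actually works. If execution between nondeterministic inputs is deterministic, then logging just those inputs at record time and feeding them back at replay time reproduces the original run exactly.

    #include <cstdio>
    #include <ctime>

    enum class Mode { Record, Replay };

    // Hypothetical wrapper around one source of nondeterminism (the
    // clock). A deployable system would intercept at the system-call
    // boundary rather than asking programs to call wrappers like this.
    long LoggedTime(Mode mode, FILE* log) {
      long t = 0;
      if (mode == Mode::Record) {
        t = static_cast<long>(time(nullptr));  // live nondeterministic value
        fprintf(log, "%ld\n", t);              // append it to the log
      } else {
        fscanf(log, "%ld", &t);                // feed back the logged value
      }
      return t;
    }

On a single core there are no data races to capture, so logging inputs like this (plus signal delivery points and a few other sources of nondeterminism) suffices; with shared-memory parallelism you would also have to record the interleaving of every racing memory access, which is where the overhead and complexity explode.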

A related problem is a new instance of a common pattern: in order to carve out space in an area that's getting crowded, researchers are starting to invent interesting variations on record-and-replay to study --- variations that will probably never be relevant to real-world problems. I met some researchers trying to record the activities of particular objects, which I don't think will ever make sense once we can record a whole process at low overhead.

My friends at IBM presented their paper on Race Detection For Web Applications. This paper applies happens-before race detection techniques to detect a page's incorrect dependencies on the ordering of event dispatch. It's a great start, but there's a lot more that can be done to reduce false positives and to create a deployable tool.
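
For flavour, here's a minimal sketch of the happens-before core of such a detector; this is my own simplification, not the IBM tool. Each event-handler invocation carries a vector clock, and two accesses to the same location are a potential race when neither clock is ordered before the other.

    #include <cstddef>
    #include <vector>

    using VectorClock = std::vector<unsigned>;

    // True if the access stamped 'a' happens-before the access stamped
    // 'b': every component of a is <= the corresponding component of b.
    // Assumes both clocks have the same length.
    bool HappensBefore(const VectorClock& a, const VectorClock& b) {
      for (size_t i = 0; i < a.size(); ++i) {
        if (a[i] > b[i]) {
          return false;
        }
      }
      return true;
    }

    // Unordered accesses are a potential race. (A real detector would
    // also require that at least one of the accesses is a write.)
    bool IsRace(const VectorClock& a, const VectorClock& b) {
      return !HappensBefore(a, b) && !HappensBefore(b, a);
    }

In a real tool most of the work is elsewhere: building the happens-before graph from the browser's actual event-dispatch ordering rules, and filtering out the many unordered-but-harmless pairs that cause false positives.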

One thing they and a lot of other researchers need is an instrumentation framework for JavaScript in the browser. In Java there are well-known techniques for instrumenting bytecode to do all sorts of cool dynamic analyses, and the VM also provides lower-level hooks like JVMTI. We need something like that for JS. At the JSTools workshop, Shu-yu Guo gave a nice presentation of Mozilla-based tools, including Jason et al.'s Debugger object, and it occurred to me (and not only me, I think) that it would be really great to extend Debugger with the ability to replace a script with an instrumented version. We already have a lot of instrumentation points via DOM events and XPCOM observers, so just adding Debugger-based script instrumentation might be enough to build some really amazing tools, such as the aforementioned race detector.

Amer Diwan gave an interesting keynote about methodological flaws that endanger CS research results. I took the opportunity to ask a question about the bias coming from our reluctance to publish negative results. That week I heard about a few interesting projects that could have worked but didn't, where knowledge about why they didn't would be really valuable to the community. I harp on this regularly; I don't know if my nagging is going to have any effect, but if I ever have the chance to organize a workshop, it'll be the Workshop Investigating Negative Results ("WINR!").

I spent a lot of time talking to Leo Meyerovich. His work on synthesizing parallel layout engines from high-level descriptions is excellent. I used to think it could never scale to handle real CSS, but these days I'm rather more optimistic.

Overall there was a lot of interesting work, and certainly a lot of interesting people, but nothing that really blew me away with a "how could that possibly work?!" reaction. Still, an excellent trip.

Comments

mccr8
Olli and I have read all of the papers on cycle collection we've been able to find. There are some cool small-picture ideas I think we can make some use of, but unfortunately most of the big-picture published work focuses on concurrent stuff, which we can't really use because it requires that you know what is doing the add-ref or dec-ref, in addition to what is getting the ref adjusted. There's a lot of weirdness in Firefox, compared to the standard academic setting of Java. But we do have the advantage that we only care about cycle collection for one program, which we control! I should read some of these more general ref counting papers, though.