Eyes Above The Waves

Robert O'Callahan. Christian. Repatriate Kiwi. Hacker.

Tuesday 3 November 2009

Challenges In Software Research

One of the greatest errors I see in computer science research is a tendency to view research as a spectrum with "applied" at one end and "basic" at the other, with attached assumptions that "applied" is incremental, boring, and engineering-oriented, but "basic" is crisp, radical, and intellectually satisfying. This is a false dichotomy, and I like to debunk false dichotomies.

I think the most principled way to construct such a spectrum is to classify research projects according to the degree of simplifying assumptions they make to reach a crisp problem statement. So "applied" research adopts most of the relevant "real world constraints" into its problem statements, and "basic" research throws most of them out. The error is to assume that "real world constraints" make a problem boring, intellectually unsatisfying, and non-conducive to finding real breakthroughs [1]. This may be true "on average", but when choosing a research project one is not constrained by averages. In my experience, interesting research topics are abundant, and there are problems whose solutions can be immediately applied in the world while also being intellectually satisfying and potentially breakthrough science. (The current frenzy over combined symbolic and concrete execution is a good example.)

Let me suggest several such problems in the area of software development technology that I feel aren't yet getting the attention they deserve.

Verifying Refactorings

We know that verifying real software against full specifications is prohibitively expensive in most cases. However, I hypothesize that one could cheaply verify most changes that land in mozilla-central. The secret is that most patches are refactorings --- changes that should not alter observable behaviour --- or can be split into refactorings plus a smaller change that actually alters behaviour. "Should not alter observable behaviour" is very simple to state and understand, but still very tight. It would be a huge win if we could prove that most such patches meet that specification. It's unclear how difficult this would be in general, but clearly patches that are the output of refactoring tools should be tractable to verify, and there is a lot of unexplored territory beyond that.
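To make the specification concrete, here's a tiny invented example of the kind of patch that ought to be cheap to verify: the loop is rewritten, but a verifier should be able to prove that observable behaviour is unchanged for all inputs.

    #include <cstddef>
    #include <string>
    #include <vector>

    // Before the patch: index-based iteration.
    size_t TotalLengthBefore(const std::vector<std::string>& items) {
      size_t total = 0;
      for (size_t i = 0; i < items.size(); ++i) {
        total += items[i].length();
      }
      return total;
    }

    // After the patch: iterator-based iteration. (Renamed here only so
    // both versions fit in one file.) A refactoring verifier need only
    // prove these two functions equivalent for all inputs --- much
    // cheaper than verifying either against a full specification of
    // what "total length" means.
    size_t TotalLengthAfter(const std::vector<std::string>& items) {
      size_t total = 0;
      for (std::vector<std::string>::const_iterator it = items.begin();
           it != items.end(); ++it) {
        total += it->length();
      }
      return total;
    }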

Improved Record And Replay Debugging

I don't have time to work on Chronomancer, but someone should be working out how to expose omniscience to the programmer to optimize the debugging experience.
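To sketch what "exposing omniscience" could look like (an invented interface, not Chronomancer's actual API): once the debugger holds the program's entire execution history, every query can range over time, not just over the current stopped state.

    #include <cstddef>
    #include <cstdint>
    #include <string>

    // Hypothetical omniscient-debugger interface; all names invented.
    struct TimeStamp { uint64_t instructionsRetired; };

    class OmniscientSession {
    public:
      // When was this memory range last written before time t?
      TimeStamp FindLastWrite(uintptr_t address, size_t length, TimeStamp t);
      // When is the named function next entered after time t?
      TimeStamp FindNextCall(const std::string& function, TimeStamp t);
      // Evaluate an expression as of an arbitrary point in history.
      std::string Evaluate(const std::string& expression, TimeStamp t);
    };

A query like FindLastWrite is the sort of thing that makes omniscience valuable: it can walk a corrupted field straight back to the code that wrote it, with no forward-in-time guesswork.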

Testing Fixes For Non-Reproducible Bugs

Record and replay technology is more or less here. (I'll post more about VMware's solution later, but suffice it to say it's working quite well so far!) Suppose you have a recording of a bug which isn't reproducible. Now you can replay that recording one or more times to understand the bug and write a patch that should fix it. The problem you immediately run into is: how do you check that the patch really fixes the bug? You somehow need to inject the fix into the code and replay; that's tricky, and the fix may disturb the replay and cause the bug not to occur on that run for reasons unrelated to the bug actually being fixed.
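In outline, the harness one would like is something like the following sketch (every type and function here is invented; they only name the steps). The comment in step 2 is exactly the open problem.

    // Hypothetical fix-checking harness for a recorded,
    // non-reproducible bug.
    struct Recording {};
    struct Patch {};
    struct ReplayResult { bool ExhibitsBug() const; };

    ReplayResult Replay(const Recording& r);
    ReplayResult ReplayWithPatch(const Recording& r, const Patch& fix);

    bool FixAppearsToWork(const Recording& recording, const Patch& fix) {
      // Step 1: faithful replay; the bug should occur every time.
      if (!Replay(recording).ExhibitsBug())
        return false;
      // Step 2: replay with the patched code injected. The injection
      // itself can perturb timing and memory layout, so the bug may
      // fail to occur even if the patch is wrong --- distinguishing
      // "fixed" from "disturbed" is the research problem.
      return !ReplayWithPatch(recording, fix).ExhibitsBug();
    }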

Performance Measurement And Variation

Measuring performance is hard. One huge problem is that you want to measure effects that are much smaller than the random noise in the tests. I don't really know where this noise comes from. I would love to see research that analyzes the sources of run-to-run performance variation and describes techniques for reducing the variation.
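Even before explaining the noise, it pays to measure it. Here is a minimal sketch: time the same workload repeatedly and report the spread; any claimed improvement smaller than that spread needs better methodology, not more optimism.

    #include <chrono>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Stand-in for the code under test.
    static void Workload() {
      volatile double sink = 0;
      for (int i = 0; i < 10000000; ++i) {
        sink = sink + i * 0.5;
      }
    }

    int main() {
      const int kRuns = 30;
      std::vector<double> ms;
      for (int i = 0; i < kRuns; ++i) {
        std::chrono::steady_clock::time_point start =
            std::chrono::steady_clock::now();
        Workload();
        std::chrono::steady_clock::time_point end =
            std::chrono::steady_clock::now();
        ms.push_back(
            std::chrono::duration<double, std::milli>(end - start).count());
      }
      double sum = 0;
      for (size_t i = 0; i < ms.size(); ++i) sum += ms[i];
      const double mean = sum / ms.size();
      double var = 0;
      for (size_t i = 0; i < ms.size(); ++i)
        var += (ms[i] - mean) * (ms[i] - mean);
      var /= ms.size() - 1;  // sample variance
      std::printf("mean %.2f ms, stddev %.2f ms (%.1f%% of mean)\n",
                  mean, std::sqrt(var), 100.0 * std::sqrt(var) / mean);
      return 0;
    }

A harness this crude often shows a spread of a few percent on a desktop system, which is already larger than many of the effects people try to measure.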

I believe all these problems are reasonably easy to get into, have been little investigated in the past, have lots of room for exploration by multiple groups, are intellectually stimulating, and are likely amenable to solutions that could be immediately applied to the needs of software developers.



[1] The converse error --- assuming that "basic" research must not be boring or incremental --- is also widely held, but more easily falsified. Consider the endless stream of esoteric type theories that are both boring and completely incremental, such as my POPL '99 paper!

Comments

VanillaMozilla
Actually, it would be a very substantial advance in the state of the art just to get programmers not to dereference erroneous pointers. We've figured out what causes crashes, but that knowledge hasn't made it into most of our compilers. We're well into the 21st century and programs still crash because we're still using stone-age tools. Amazing.
Anonymous
Mmm, so you worked on type theory... During my classes (preparing for the Agrégation in Mathematics, Computer Science option) I found a Robert O'Callahan (an article, book, or thesis, I don't remember) about type theory and I was wondering if it was you... Perhaps it was actually you?
FrnchFrgg
I don't remember if I put my name in my previous comment. Anyway, if you want to know what it is, the Agrégation is described at http://en.wikipedia.org/wiki/Agregation
Robert O'Callahan
FrnchFrgg: It probably was me, yes.
James Napolitano
We could add to the list some of the interesting memory management ideas you had written about in an earlier blog entry:
http://weblogs.mozillazine.org/roc/archives/2007/10/abstraction_pen.html
While you said what you proposed wasn't really supported yet in a real-world language, some Mozillians have already effectively extended the rules of C++ by adding custom macros to certain declarations. These macros expand to nothing and so have no effect on the compiled code, but they are recognized by certain Dehydra scripts and are used to ensure that certain properties hold. Perhaps one could similarly implement some macros and scripts that would enforce that one object owns another and deletes it before being deleted itself.
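For what it's worth, the technique looks roughly like this (macro names invented here; Mozilla's real annotations and the analysis hook differ in detail): the annotation costs nothing in the compiled code but gives the static analysis something concrete to match on.

    // Annotation macro that expands to nothing in normal builds; a
    // static-analysis script keys off it in the parsed declarations.
    // The name is invented for illustration.
    #define ANNOTATE_OWNING

    class Document {};

    class Window {
    public:
      Window() : mDocument(new Document()) {}
      // The analysis would check that an ANNOTATE_OWNING member is
      // deleted before (or while) its owner is destroyed.
      ~Window() { delete mDocument; }
    private:
      ANNOTATE_OWNING Document* mDocument;
    };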