Saturday 16 October 2010
Here's an interesting article about bias in medical research. Almost everything they say about medical research applies to computer science too.
So you have to be very skeptical of empirical results in computer science, especially "we tried this and it worked" results. Fortunately --- unlike biology, I suspect --- at the heart of many computer science papers is just a clever idea or observation someone came up with. You can take that idea and try it for yourself. But don't bet the company on it just because some researcher said it worked great.
Of course, the situation is still pretty bad overall. The sick research culture that only rewards positive results doesn't just create selection bias in reported results; it also deprives us of interesting data about ideas that don't work. After all, people only try things they think will work, so failures on average should be more surprising --- and therefore more interesting --- than successes.
At PLDI this year not a single paper reported a negative result. However, during the short-talks session one group from Microsoft presented a cute piece of work applying an automated reasoning tool to perform compilation; nice idea, but it didn't really work. I was so excited I ran up to the microphone just to say "thank you for presenting a negative result!" Then I added "We need more results like this", and everyone laughed...