Monday 19 April 2010
Changing The World
Last Tuesday I gave a keynote at NZCSRSC at Victoria University in Wellington. I was a bit nervous because I'd never given a keynote before. From my point of view, it went well --- I felt good while I was giving it, and afterwards several people talked to me having obviously reflected on what I'd said and how it applied to their own work. However, I never trust my own feelings too much, so I'm not 100% sure how it went down :-). I've got slides here.
I kicked off my talk with the parable of the talents. OK, that's not something you expect to see at a computer science conference, but this was a keynote so I took some extra latitude :-). I pointed out that the word "talent" originally denoted a unit of mass, particularly of precious metals, and acquired its current meaning through this parable. Thus, for as long as it has had its current meaning, it has been associated with the idea that with talent God also gives the responsibility to make the best use of it. I emphasized that this is as true for computer science talent as for other kinds, perhaps even more so, because computer science has this wonderful property that we can deploy incredible functionality at near-zero marginal cost. I pointed out that if you make a change to Firefox that saves one second per user per day, it's like saving three thousand lives. The rest of my talk was about ways to maximise one's impact on the world.
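To make that arithmetic concrete, here's a back-of-envelope sketch --- the user count is an assumption of mine for illustration, not a figure from the talk:

```python
# Back-of-envelope: how much human time does "one second per user per day" add up to?
# NOTE: the user count below is an assumed figure for illustration only.
users = 250_000_000                  # assumed daily Firefox users, circa 2010
seconds_saved_per_day = users * 1    # one second saved per user per day
seconds_in_a_day = 24 * 60 * 60      # each person has 86,400 seconds per day

person_equivalents = seconds_saved_per_day / seconds_in_a_day
print(f"Equivalent to the entire time of ~{person_equivalents:,.0f} people, continuously")
# ~2,894 on these assumptions --- roughly three thousand lifetimes' worth of time
```

The exact number depends entirely on the user count you assume; the point is that tiny per-user savings, multiplied across an enormous user base, add up to human lifetimes.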
I distinguished "research" from "development" by defining "development" as building something that will be used in practice, while "research" creates and transmits knowledge that will help others build something practical. (These are not mutually exclusive for a given project --- development projects almost always create and transmit knowledge too.) Obviously, only development has direct impact, but pure research can be useful because it lets you drop constraints, so you can understand problems more clearly and iterate faster on solutions. Characterizing research by which constraints have been dropped is a very good way to understand where your work fits in the world. Many factors influence people's choices between research and development. It's simpler to have impact with development. However, research offers crisper intellectual problems, because you can drop the ugly constraints, and it lets you publish more, because you can iterate faster.
Many people think that solving problems is the hard part of research, but in my view problem selection is the hardest part. Because of poor problem selection, most research I see won't have impact even if it's completely successful. The ideal problem is crisp and intellectually satisfying, yet a solution would have immediate impact. Such problems are rare, but the space of problems is so large that they do exist. Find them!
I brought up program slicing as a negative example. Since 1984, hundreds if not thousands of papers have been written about program slicing, but in practice it is almost never used. I believe the research in this area has been completely pointless. Probably hundreds of millions of dollars have been wasted, not to mention enormous amounts of very smart people's time. This is a criminal misuse of resources. The lesson: just because large numbers of smart people are working on a problem doesn't mean it's a good one!
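For anyone who hasn't met it: a program slice is the subset of a program's statements that can affect the value of a chosen variable at a chosen point. Here is a minimal hand-worked illustration --- the toy program and its slice are my own example, not one from the talk:

```python
# A tiny program, annotated (in the comments) with its backward slice with
# respect to the value of `total` at the final line. Statements marked
# [in slice] can affect `total`; the rest are dropped from the slice.
n = 10                 # [in slice]  `total` depends on the loop bound
total = 0              # [in slice]  initializes the variable we slice on
count = 0              #             affects only `count`, never `total`
for i in range(n):     # [in slice]  controls how often `total` is updated
    total += i         # [in slice]  directly updates `total`
    count += 1         #             not in the slice
print(total)           # slicing criterion: the value of `total` here
```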
I observed that most researchers choose problems by first having a technology or skill in mind and then looking for "applications" --- problems their technology might solve. I call this the "solution-driven" approach to problem selection. I used it throughout my career, and I believe it's a huge mistake. At the outset you restrict the problem space to problems that look like a fit for your technology; you rank those problems by how well they fit your technology instead of how much they matter to the world; and even then you run the risk that your technology is not the best solution for them.
A better approach is "problem-driven": start by identifying important problems, then find the best solutions to them. It sounds obvious when you say it that way, but it's not what most researchers do :-). For example, we know debugging is an incredibly important problem because most programmers spend most of their time doing it. So one could analyze the problem of debugging and figure out how to make it faster or cheaper, or how to avoid introducing the bugs in the first place --- without preconceived ideas about what the solution should be. Use "Wizard of Oz" techniques to evaluate solutions before building them --- e.g., use human intelligence to pretend to be a tool and see whether the results are helpful, before you build the tool. You'll be able to iterate much faster, and you won't be emotionally invested in proving that the tool is useful.
The downside of the problem-driven approach is that often the solutions will demand expertise you don't have. You may be required to learn something new. I think that's OK. You can also collaborate, or even hand the problem off to someone else and retry problem selection.
I went on to give some tips about publishing. If you want to do research with impact, publish in top conferences and journals, because the lesser ones are ignored. You often see research in top conferences that actually repeats work previously published in lesser venues, because people didn't know about the earlier work ... and the later top-conference papers end up more widely cited. Sad but true. Publishing in top conferences is not that hard if you know how to play the game and you have selected good problems. Read lots of proceedings and journals to understand what kinds of papers a conference accepts and how they should be written. Don't be discouraged by rejection --- paper reviews are very random. Try to choose problems that are amenable to compelling evaluation; for some kinds of problems it's very difficult to prove that you solved them, for others it's easy.
I ranted a bit about negative results. We're not surprised when we build something and it works --- if we hadn't expected it to work, we probably wouldn't have done it. Thus we are surprised when it doesn't work. Surprising results are more interesting, therefore negative results are more interesting, especially if the failure was for an interesting reason, rather than "it was too hard to solve" (although that can also be interesting). Unfortunately, you generally can't publish negative results; the research community is just broken this way. I would like to see a Journal Of Negative Results, but people would probably be afraid to publish there.
I ranted about corporate research. Corporate research labs are started in times of plenty as vanity projects --- "research is awesome, we're awesome, let's do it". But genuine research (by my definition above) hardly ever benefits the company doing it --- it should be thought of as corporate philanthropy --- so over time, to justify themselves, the labs do some amount of development as well. Unfortunately, artificially separating that development from the development done by "product" groups creates problems of "tech transfer". Those problems would not exist if you simply put those people into product teams. This is more or less what Google does, as I understand it. (However, there can be tactical advantages to having a separate lab; it might let you attract and keep smart people you couldn't otherwise get.)
I proceeded to the most controversial part of my talk: how to improve the quality of computer science research done in New Zealand. Top people want to work with other top people --- not only top peers in their own area, but top collaborators in other areas when needed. Thus groups of top people attract more top people: students, researchers and faculty. This is one reason the rankings of the best research universities are so stable. I therefore proposed collecting the best researchers in New Zealand into a single institution; this would be more effective at attracting top researchers to New Zealand --- and at keeping them here. It's not a zero-sum game. There would also be an education benefit: the majority of students are interested in vocational training, not computer science, and could be directed to other institutions, so this elite institution could focus on teaching computer science to the small number of students who actually want to learn it. That would be very good for the minority of employers, like me, who need graduates with hard-core computer science.
Surprisingly, no-one tried to shoot me down on stage, but I got some feedback later :-). Some good issues were raised, but I still think the proposal would be desirable. Of course it would be politically extremely difficult to make happen.
I concluded by talking more about development. The common idea that research is more intellectually challenging is false; generally, dropping constraints makes problems easier. Research simply gives you more freedom to ignore uninteresting problems. But there are development jobs that are very interesting with huge impact!
Megacorporations are horrendously inefficient. Large numbers of people work on many projects that make no sense at all. If you go to work for a megacorporation, make sure you're going to work on a specific project that you know will have impact. Otherwise, go for a small organization; they're much more efficient so your work will not be diluted. Consider contributing to open source projects; they usually have immediate impact, they let you disseminate your work widely, and you can learn a lot from them.
Other things being equal, choose projects lower on the software stack. Designing the parts has more influence and impact than merely putting parts together, and work lower in the stack typically reaches more users. Other things being equal, choose projects with more users (or potentially more users), since impact scales with the number of users.
Obviously, we should be striving to have positive impact, not negative impact! If you are regularly embarrassed by your employer, leave. Computer science people are fortunate to have many opportunities to work for employers who will not embarrass us.
Yeah, it was fun. On Wednesday I'm going down to Hamilton to give a similar talk at the University of Waikato for Kingitanga Day. I'll probably talk less about how people can have impact through research and more about how I'm having impact at Mozilla and its open source community, and how other people can have impact through open source.
Comments
Now, I don't dismiss the "solution-driven" approach entirely --- there's a lot of value in looking at new things and asking how they could be applied to existing problems. But approaching from that angle, the mindset is often wrong: trying to apply the new tool to every problem, or seeing problems for it to solve that aren't real. I think that was the case with our "pure research" team --- they solved problems that developers didn't see as problems.
I found it reminiscent of Paul Graham. Is that coincidence?
Maybe when experienced CS guys think about their field they more-or-less come to the same conclusions - have you ever heard experienced CS guys passing on insights inconsistent with what you say here?
Maybe experienced CS people do have these insights, but certainly most of them don't practice them :-).
I seem to remember Steve Jobs using the same reasoning in a pep talk to the developers of the first Apple Mac, in relation to startup time.
"research" creates and transmits knowledge that will help others build something practical
but what if "others" never materialize? That is, what if the economic environment never supports debugger developers? That is the strange world we live in: very few human tools have as much economic impact as software debuggers and yet very few developers work on debuggers.
Debugging illustrates another weakness in conventional approaches to software research: final impact depends on integrating with existing work processes and educating users. Debugging is an intense activity with a lot of user-investment in tools. To advance the state of the art one has to go beyond publishing or even developing.
Tony, people bring up serendipitous results from apparently pointless research all the time. That certainly does happen. The question is: is it a better use of resources to try something pointless and hope you get lucky, or to aim for something with impact?
Indeed, when you tell a colleague what you've been working on, you talk about all those attempts that didn't quite work, and those attempts give value to the positive result you eventually found. They prove that your result isn't just something easy to find as soon as you get interested in the subject: many of the "naive" ideas you tried (and that other researchers in the field would have tried) weren't the right way, and the way you found is genuinely different/elegant/novel.
The life of a research guy is filled with doubts, attempts, mistakes and backtracks, and that is good. But strangely, as soon as we teach what we found (be it in a course, at a conference or in a journal), everything transforms into a world of everlasting truths, with no room for the mistakes or, more generally, for the process. This is especially true in the "pure" sciences.
And I think that's a shame; teaching our errors too gives more value to our findings, and also helps people learn, because they would probably make those errors anyway as part of their own process of understanding.
To put this another way, *right from the start* he made it a point for the engineer to ask himself if he was asking the right questions before beginning to solve problems.
It's a lesson I've kept close to me ever since. It's gotten me into trouble. OTOH, I can sleep at night.
While I'm at it, what would it take for Gecko to support H.264 video in the <video> tag (such as in the YouTube beta)? Love Camino, hate to switch to Safari to try H.264 YouTube video. Thanks!