Friday 23 June 2017
When I was a young graduate student working on static analysis tools, conventional wisdom was that a static analysis tool needed to have a low false-positive rate or no-one would use it. Minimizing false positives was a large chunk of the effort in every static analysis project.
It seems that times have changed and there are now communities of developers willing to tolerate high false positive rates, at least in some domains. For example:
"It will also miss really obvious bugs apparently at random, and flag non-bugs equally randomly. The price of the tool is astronomical, but it's still worthwhile if it catches bugs that human developers miss."

Indeed, I've noticed people in various projects ploughing through reams of false positives in case there are any real issues.
I'm not sure what changed here. Perhaps people have just become more appreciative of the value of subtle bugs caught by static analysis tools. Maybe it's a good time to be a developer of such tools.