Sunday, 23 April 2017

One Does Simply Walk Into Mordor

... on the Tongariro Northern Circuit. This central North Island "Great Walk" is a loop circumnavigating Mount Ngauruhoe (aka "Mount Doom") and much of the Tongariro volcanic complex. The terrain varies but is mostly a mixture of lava fields and somewhat barren alpine plains scoured by old pyroclastic flows (most recently by the "super-colossal" Taupo eruption of ~1,800 years ago). It doesn't take much imagination, or sophisticated marketing, to see it as (a peaceful and beautiful version of) Mordor.

We completed the Circuit yesterday, after four days of hiking. It's certainly possible to do it in three or even two days, but we took it easy and had time to do all the side tracks: the Tongariro summit track on day two (Mangatepopo Hut to Oturere Hut), the Ohinepango springs near Waihohonu on day three, and the Upper Tama Lake track on day four (Waihohonu Hut to Whakapapa Village).

Thank God, we had perfect weather. We also had a great time lounging around at the huts talking to other trampers of all ages, and even playing a variety of games. Overall it was another wonderful "Great Walk" experience. The only improvement would have been if one of the volcanoes was erupting, but you can't have everything!

The new Waihohonu Hut was impressive. It's probably the best DoC hut I've yet seen: multiple outdoor decks with picnic tables, well insulated, a very spacious common area with wood-burning stove, eight gas cooking stoves, solar-powered LED lighting, and even solar water heating!

After destroying a couple of cameras by getting them wet while tramping (only cheap point-and-shoots), I bought a waterproof camera. On this trip I took some underwater photos and they turned out quite well!

Wednesday, 5 April 2017

Rust Optimizations That C++ Can't Do (Version 2)

My last post was wrong in the details, so let me show a better example. Rust code:

pub extern "C" fn foo(v: &u64, callback: fn()) -> u64 {
  let mut sum = 0;
  for _ in 0..100 {
    sum += *v;
    callback();
  }
  sum
}

C++ version here.
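For a quick sanity check of the semantics, here is a self-contained runnable version of the example (the no-op callback is my own stand-in for arbitrary external code):

```rust
// Stand-in callback: it can do anything except mutate v through
// the shared reference, which is exactly why the load can be hoisted.
fn noop() {}

pub extern "C" fn foo(v: &u64, callback: fn()) -> u64 {
    let mut sum = 0;
    for _ in 0..100 {
        sum += *v;
        callback();
    }
    sum
}

fn main() {
    let v = 7u64;
    // The shared reference guarantees *v is the same on every iteration,
    // so the result is always 100 * v.
    assert_eq!(foo(&v, noop), 700);
    println!("ok");
}
```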

The inner loop of the Rust version turns into:

.LBB0_1:
        inc     ebx
        call    r14
        add     r12, r15
        cmp     ebx, 100
        jl      .LBB0_1
sum is in r12 and the evaluation of *v has been hoisted out of the loop with the result in r15. In the compiled C++ code the value has not been hoisted:
.LBB0_1:                                # =>This Inner Loop Header: Depth=1
        add     rbx, qword ptr [r15]
        call    r14
        dec     ebp
        jne     .LBB0_1

Of course the performance difference in this case would be negligible, but if *v were replaced with a more complicated expression, the difference would increase. (No, I don't know why the Rust code counts up instead of counting down like the C++ code, which gives slightly better code; someone should fix that.)

Here it's really clear that the semantics of Rust are making this optimization possible. In C++ v could be a reference to a global variable which is modified by the callback function, in which case hoisting the load would be incorrect.

Several people pointed out that with LTO/WPO a C++ compiler might be able to determine that such cases don't happen, in this case that no function passed into callback directly or indirectly modifies v. That's what I meant in my previous post by "a C++ compiler could do some analysis to show that can't happen" ... but as I continued, "those analyses don't scale". The larger and more complicated your code gets, the more likely it is that static analysis will fail to prove the invariants you care about. Sometimes, due to dynamic linking or run-time code generation, relevant code just isn't available to be analyzed. In other cases the analysis algorithms aren't sophisticated enough to prove what you care about (often because no such algorithm is known), or adequate algorithms exist but they were too expensive to deploy. Perhaps the worst cases for sophisticated global analysis are when the property you care about isn't even true, because you accidentally violated it in some part of the program, but you'll never know, because the only effect of that accident is that performance got a little bit worse somewhere else. (E.g. with this example if you accidentally change one of the callback functions to indirectly modify v, your hypothetical WPO compiler does a slightly worse job and you won't know why.)

What we need are language-level invariants that apply globally and support aggressive optimization using only local analysis. C++ has some, but Rust's are stronger. We also need those invariants to be checked and enforced, not just "it's undefined behavior if you do this"; (safe) Rust wins again on that.

Rust Optimizations That C++ Can't Do

Update: dbaupp on Reddit pointed out that this particular example does not support the point I'm trying to make. I'll explain why below. I followed up with a new post with a better example.

Consider the Rust code

pub struct Foo {
    v1: u64,
    v2: u64,
}

pub fn foo(t: &Foo) -> u64 {
    t.v1 + t.v2
}

The stable Rust compiler, in Release mode, generates

        leaq    (%rdi,%rsi), %rax

Hang on! foo is supposed to take a reference, so shouldn't it take a pointer as its parameter and do two loads to get the values of v1 and v2? Instead it receives the values of v1 and v2 in %rdi/%rsi and adds them using leaq as a three-operand "add" instruction. The Rust compiler has decided (correctly) that even though you wrote a reference, in this case it's more efficient to just pass the parameter by value. This is good to know as a Rust developer; I can use references freely without worrying about whether it would be more efficient to pass the value directly.

C++ compilers can't do this optimization, for a couple of reasons. One reason is that C++ has a standard ABI that dictates how parameters are passed; it requires that references are always passed as pointers. A more interesting reason is that in general this optimization is not safe in C++. An immutable Rust reference promises that the referenced value does not change (except for some special cases involving UnsafeCell which are easily detected by the compiler), but not so for C++ const references; a "const" value can be modified through other mutable references, so copies induced by pass-by-value could change program behavior. For a simple case like this a C++ compiler could do some analysis to show that can't happen, but those analyses don't scale (e.g. if foo called, and was called by, unknown code, all bets are off).
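The aliasing contrast can be made concrete in Rust (a sketch of my own; Cell stands in here for the interior-mutability escape hatches the compiler detects). With a plain shared reference the value cannot change between reads, so copying or hoisting it is safe; with Cell, the possibility of mutation is visible in the type:

```rust
use std::cell::Cell;

// Both reads of *v must yield the same value: nothing can mutate
// a u64 through a shared &u64 reference.
fn sum_plain(v: &u64) -> u64 {
    *v + *v
}

// With Cell, mutation through a shared reference is allowed, and the
// compiler can see it in the type, so it won't assume the value is stable.
fn bump_and_sum(v: &Cell<u64>) -> u64 {
    let a = v.get();
    v.set(a + 1); // visible mutation through a shared reference
    a + v.get()   // the two reads legitimately differ
}

fn main() {
    assert_eq!(sum_plain(&10), 20);
    let c = Cell::new(10);
    assert_eq!(bump_and_sum(&c), 21);
    println!("ok");
}
```

In C++, by contrast, every const reference is potentially a `Cell`: the mutation possibility is invisible to the type system.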

Here Rust's strong control over aliasing is enabling improved optimization, not just improved safety. Update: as dbaupp points out, this optimization example is not specific to Rust. LLVM is doing it via local analysis (the sort I mentioned above), and clang++ produces the same results for the equivalent C++ code. Moreover, Rust in general would need to preserve references, because you can convert a reference to a raw pointer and compare raw pointers for equality.

It's easy to find other examples, such as improved code motion and common subexpression elimination. I think with more study of large Rust programs, more investment in the Rust compiler, and tighter control over unsafe Rust, we will probably discover that Rust enables many more interesting optimizations.

One of the promises of hair-shirt functional languages was that referential transparency and other strong semantic guarantees would enable extremely aggressive optimizations, which would recover the performance lost by moving away from the stateful semantics of hardware. That didn't really work out, but with Rust we might get the aggressive optimizations on top of semantics that are already very efficient.

Sunday, 2 April 2017

Pararaha Valley

This weekend my son and I went out to camp overnight at Pararaha Valley in the Waitakere Ranges west of Auckland. We got dropped off on Piha Road at the north end of the Upper Huia Dam track, near the centre of the Ranges, and I planned to walk from there for 5-6 hours to reach the campsite near the west coast. After one and a half hours we reached the dam, and unfortunately discovered that the next section, Nugget Track, was closed to protect against kauri die-back. We had no choice but to retrace our steps for another hour and a half. The Upper Huia track is pretty rough, so at 2pm we were back where we started, having used up a lot of energy :-(. (The trailhead has a map showing track closures but Nugget Track was not clearly marked as closed, so I'll contact the council about that.)

I wasn't sure what to do but decided we could probably reach the campsite before dark by following roads instead. So we walked west along Piha Road, south along Lone Kauri Rd towards Karekare, then crossed over into Pararaha Valley via the Buck Taylor Track. We got to the campsite around 6pm, having walked for about 7.5 hours at a fairly quick pace, so I was pretty tired. My son was fine :-).

The Pararaha Valley campsite is a lovely spot. I thought the swamp, er I mean wetland, at the bottom of the valley would mean a lot of mosquitoes, but that wasn't a problem. The site documentation says the site has only stream water that needs to be treated, but there's a cooking shelter with roof runoff into a tank, and we drank the tank water untreated with no problems. (One of the other people at the campsite told me he'd drunk it two weeks ago while running the Hillary Trail (67km!).)

This morning we had a short two hour walk from the campsite up the coast along the beach to Karekare to get picked up. We got back to the city in time for church :-).

Let's Make NZ More Expensive For Tourists

I agree with Labour leader Andrew Little on this. Tourist numbers are growing, and making NZ a more expensive destination is a perfectly fair and rational alternative to fast-growing tourist numbers. NZ is never going to be a budget destination; better to be a high-price luxury destination with money reinvested to maintain and improve our facilities and environment, than a low-price destination with an environment degraded by too many tourists. I don't think we're anywhere close to being seen as a "rip-off".

FWIW many backpackers I meet while tramping are here for months; for these people the impact of a border-crossing tax would be amortized. Fewer people making longer trips is also more climate-friendly.

Monday, 27 March 2017

The Parable Of The Workers In The Vineyard Really Is About Grace

A small matter perhaps, but a "professor of New Testament and Jewish Studies at Vanderbilt University Divinity School" has claimed that Matthew 20:1-16 is about economic justice:

This parable tells the story of a series of workers who come in at different points of the day, but the owner pays them all the same amount. The parable is sometimes read with an anti-Jewish lens, so that the first-hired are the "Jews" who resent the gentiles or the sinners entering into God's vineyard. Nonsense again.

"Jesus' first listeners heard not a parable about salvation in the afterlife but about economics in present. They heard a lesson about how the employed must speak on behalf of those who lack a daily wage."

This interpretation must be popular in some circles, because I once heard it preached in a sermon by a guest speaker. I was frustrated and mystified then, and now; I just don't see grounds for rejecting the traditional eschatological application, in which the landowner is God and the "denarius" is salvation. The parable's introduction says "The kingdom of heaven is like ..." and its conclusion says "So the last will be first, and the first will be last". The passage immediately leading up to the parable is even more clearly eschatological and ends with the same formula:

Everyone who has left houses or brothers or sisters or father or mother or wife or children or fields for my sake will receive a hundred times as much and will inherit eternal life. But many who are first will be last, and many who are last will be first.

Even allowing for the likelihood that the text was arranged thematically rather than chronologically, clearly the writer thought the passages belonged together.

It's true that "the kingdom of heaven" sometimes refers to God's will being done on earth, but the context here is strongly against that. Furthermore, the traditional interpretation fits the text perfectly and aligns with Jesus' other teachings. God is absurdly generous while still being just; those thought to be most religious are on shaky ground; lost sheep are welcomed in. The traditional interpretation is not inherently anti-Jewish since it applies just as well to the Jewish undesirables ("tax collectors and sinners") beloved by Jesus as to Gentile converts.

Many Biblical passages extol economic justice. This isn't one of them.

Wednesday, 22 March 2017

Blogging Vs Academic Publishing

Adrienne Felt asked on Twitter:

academic publishing is too onerous & slow. i'm thinking about starting a blog to share chrome research instead. thoughts?

It depends on one's goals. I think if one's primary goal is to disseminate quality research to a broad audience, then write papers, publish them to arXiv, and update them in response to comments.

I've thought about this question a fair bit myself, as someone who's interacted with the publication system over two decades as a writer and reviewer but who won't perish if I don't publish. (Obviously if you're an academic you must publish and you can stop reading now...) Academic publishing has many problems which I won't try to enumerate here, but my main issues with it are the perverse incentives it can create and the erratic nature of the review process. It does have virtues...

Peer review is the most commonly cited virtue. In computer science at least, I think it's overrated. A typical paper might be reviewed by one or two experts and a few other reviewers who are capable, but unfamiliar with the technical details. Errors are easy to miss, fraud even easier. Discussions about a paper after it has been published are generally much more interesting than the reviews, because you have a wider audience who collectively bring more expertise and alternative viewpoints. Online publishing and discussion could be a good substitute if it's not too fragmented (I dislike the way Hackernews, Reddit etc fragment commentary), and if substantive online comments are used to improve the publication. Personally I try to update my blog posts when I get particularly important comments; that has the problem that fewer people read the updated versions, but at least they're there for later reference. It would be good if we had a way to archive comments like we do for papers.

Academic publishing could help identify important work. I don't think it's a particularly good tool for that. We have many other ways now to spread the word about important work, and I don't think cracking open the latest PLDI proceedings for a read-through has ever been an efficient use of time. Important work is best recognized months or years after publication.

The publishing system creates an incentive for people to take the time to systematically explain and evaluate their work and creates community standards for doing so. That's good, and it might be that if academic publishing wasn't a gatekeeper then over time those benefits would be lost. Or, perhaps the evaluation procedures within universities would maintain them.

Academic publishing used to be important for archival but that is no longer an issue.

Personally, the main incentives for me to publish papers now are to get the attention of academic communities I would otherwise struggle to reach, and to have fun hanging out with them at conferences. The academic community remains important to me because it's full of brilliant people (students and faculty) many of whom are, or will be, influential in areas I care about.

Thoughts On "Java and Scala’s Type Systems are Unsound" And Fuzz Testing

I just belatedly discovered Java and Scala’s Type Systems are Unsound from OOPSLA last year. It's a lovely paper. I would summarize it as "some type system soundness arguments depend on 'nonsense types' (e.g. generic types with contradictory bounds on type parameters) having no instances; if 'null' values can inhabit those types, those arguments are invalid". Note that the unsoundness does not lead to memory safety issues in the JVM.

The paper is a good illustration of how unexpected feature interactions can create problems for type systems even when a feature doesn't seem all that important at the type level.

The paper also suggests (implicitly) that Java's type system has fallen into a deep hole. Even without null, the interactions of subtyping, generics and wildcards are immensely complicated. Rust's rejection of subtyping (other than for lifetimes, which are tightly restricted) causes friction for developers coming from languages where subtyping is ubiquitous, but seems very wise for the long run.

I think the general issue shown in this paper could arise in other contexts which don't have 'null'. For example, in a lazy language you can create a value of any type by calling a function that diverges. In a language with an explicit Option type, if T is a nonsense type then Option<T> is presumably also a nonsense type, but the value None inhabits it.
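The Option point can be illustrated in Rust (a sketch with a made-up uninhabited type; I'm using an empty enum, Rust's idiomatic way to express "no values exist"):

```rust
// An uninhabited type: no value of Void can ever be constructed,
// so Void plays the role of a "nonsense type" with no instances.
enum Void {}

// And yet Option<Void> has a perfectly good inhabitant: None.
// Any soundness argument resting on "nonsense types are empty"
// has to account for values like this.
fn make_none() -> Option<Void> {
    None
}

fn main() {
    assert!(make_none().is_none());
    println!("ok");
}
```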

The paper discusses some methodological improvements that might detect this sort of mistake earlier. One approach it doesn't mention is fuzz testing. It seems to me that the examples in the paper are small enough to be found by fuzz testing techniques searching for programs which typecheck but contain obviously unsound constructs (e.g. a terminating function which can cast its parameter value to any type). Checking soundness via fuzz testing has been done to small extent with Rust (see paper) but I think more work in that direction would be fruitful.

Tuesday, 21 March 2017

Deterministic Hardware Performance Counters And Information Leaks

Summary: Deterministic hardware performance counters cannot leak information between tasks or, more importantly, between virtualized guests.

rr relies on hardware performance counters to help measure application progress, to determine when to inject asynchronous events such as signal delivery and context switches. rr can only use counters that are deterministic, i.e., executing a particular sequence of application instructions always increases the counter value by the same amount. For example rr uses the "retired conditional branches" (RCB) counter, which always returns exactly the number of conditional branches actually retired.

rr currently doesn't work in environments such as Amazon's cloud, where hardware performance counters are not available to virtualized guests. Virtualizing hardware counters is technically possible (e.g. rr works well in Digital Ocean's KVM guests), but for some counters there is a risk of leaking information about other guests, and that's probably one reason other providers haven't enabled them.

However, if a counter's value can be influenced by the behavior of other guests, then by definition it is not deterministic in the sense above, and therefore it is useless to rr! In particular, because the RCB counter is deterministic ("proven" by a lot of testing), we know it does not leak information between guests.

I wish Intel would identify a set of counters that are deterministic, or at least free of cross-guest information leaks, and Amazon and other cloud providers would enable virtualization of them.

Thursday, 16 March 2017

Using rr To Debug Go Programs

rr seems to work well with Go programs, modulo some limitations of using gdb with Go.

The DWARF debuginfo produced by the standard Go compiler is unsatisfactory, because it does not include local variables held in registers. Instead, build with gccgo.

Some Go configurations generate statically linked executables. These work under rr, but they're slow to record and replay because our syscall-interception library is not loaded. When using rr with Go, be sure to generate dynamically linked executables.

Running Go tests using go test by default builds your project and then runs the tests. Recording the build step is wasteful, especially when the compiler is statically linked (see above). So build the test executable using go test -compiler gccgo -c and then run the test executable directly under rr.

Saturday, 25 February 2017

Against Online Voting

There's a push on by some to promote online voting in New Zealand. I'm against it.

Government has called for local government to take the lead in online voting, with the need to guarantee the integrity of the system - most notably safety from hacking - the top priority.

But Hulse questioned some politicians' interest in a digital switchover.


Hulse said the postal system isn't bullet-proof either.

"Government seems to want absolute watertight guarantees that it is not hackable or open to any messing with," she said.

"I don't know why people seem to feel somehow the postal vote has this magic power of being completely safe. It can be, and has been, hacked in its own way by people just going round at night and picking up ballot papers."

I think the fears around last year's US election emphasized more than ever the importance of safeguarding the electoral process. Hacking is a greater threat than, say, people stealing postal ballot papers for a few reasons.

Stealing postal ballots is necessarily risky. A thief must be physically present and there is a risk of being caught. In contrast, many hacks can be done from anywhere and it's relatively easy to conceal the source of the attack.

The risk and effort required to steal ballots is proportional to the number of ballots stolen. In contrast, successful hacks are often easy to scale up to achieve any desired effect with no extra work or risk.

Some fraction of people are likely to notice their postal ballot went missing and notify the authorities, who can then investigate, check for other missing ballots, and invalidate postal ballots en masse if necessary. Hacks against online voting systems can be much more stealthy.

People systematically overestimate the security of electronic systems, partly because a relatively small group has the background to fully understand how they work and the risks involved --- and even that group is systematically overconfident.

A spokeswoman for the Mayor's office said Goff was not available for comment due to other commitments this week, but said the Mayor endorsed the push for online voting.

Deputy Mayor, Bill Cashmore, also said while there is no clear timeline on when a trial might run, online voting is also something he would like to see introduced - pointing out it would probably be well-received by young voters and a good way of maximising the reach of the democratic process.

That's all very well, but confidence in the integrity of the voting process is also an important driver for participation.

The fundamental problems with online voting are well-understood by the research community. The NZ Government should continue holding the line on this, at least for Parliamentary elections. For local body elections (which the Mt Albert by-election referenced above is not), voter participation is so low that you can make the argument that reducing the integrity of the election is worthwhile if online voting significantly increases the participation rate. However, there is evidence it may not.

Friday, 24 February 2017

306 Points In "Lords Of Waterdeep"

A new record for our household.

It was a three-player "long" game. Nine Plot Quests is probably too many but they were good value and created a lot of synergies. The most important quest was Shelter Zhentarim Agents, which I completed first; I probably drew about 15 corruption during the game, so had a constant supply of Intrigue cards, and with only two other players I was able to play two or three Intrigues per round (a couple of buildings with play-Intrigue helped). I also completed Seize Citadel Of The Bloody Hand early and Recruit Lieutenant in the fourth round.

I discovered a nice trick after completing Fence Goods For Duke Of Darkness and building Hall Of The Three Lords: arrange to have three or four turns in a row (the Lieutenant usually plays last after the other players have placed their agents, then you can reassign agents from Waterdeep Harbor), then place an agent on Hall Of The Three Lords and place three rogues on three vacant action spaces for ten victory points. In your following turns, place your agents on those action spaces and scoop your rogues back up as well as taking the benefit of the action; Fence Goods For Duke Of Darkness gives you two gold for each one, so you get a bonus six gold. If one or two of those action spaces are The Librarium or other buildings that let you take resources and place additional resources elsewhere for your next agent to pick up, so much the better.

Sunday, 19 February 2017

"New Scientist" And The Meaning Of Life

A recent issue of New Scientist has a headline article on "The Meaning Of Life". It describes research showing correlations between a sense of purpose and various positive attributes such as health and happiness. It then touches on a few ways people can try to enhance their sense of purpose.

Unfortunately the second paragraph gives the game away:

As human beings, it is hard for us to shake the idea that our existence must have significance beyond the here and now. Life begins and ends, yes, but surely there is a greater meaning. The trouble is, these stories we tell ourselves do nothing to soften the harsh reality: as far as the universe is concerned, we are nothing but fleeting and randomly assembled collections of energy and matter. One day, we will all be dust.

Quite so --- assuming a secular worldview. In that context, the question "what is my purpose?" has no answer. The best you can hope for is to grab hold of some idea, make it your purpose and refrain from asking whether you have chosen correctly.

To me, even before I became a Christian, that approach seemed unsatisfactory --- too much like intentional self-delusion. To conclude "there is no greater meaning" and carry on as if there is didn't seem honest.

Later I came to believe that Christianity is objectively true. That means God has created us for a specific (yet broad) purpose --- "to glorify God and enjoy him forever", in the words of the Westminster Shorter Catechism. The question has an answer. We'll spend eternity unpacking it, though.

Monday, 6 February 2017

What Rust Can Do That Other Languages Can't, In Six Short Lines

struct X {
  y: Y
}

impl X {
  fn y(&self) -> &Y { &self.y }
}

This defines an aggregate type containing a field y of type Y directly (not as a separate heap object). Then we define a getter method that returns the field by reference. Crucially, the Rust compiler will verify that all callers of the getter prevent the returned reference from outliving the X object. It compiles down to what you'd expect a C compiler to produce (pointer addition) with no space or time overhead.
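A minimal compilable sketch of the same pattern (the concrete Y here is my own choice), including the kind of caller the borrow checker rejects:

```rust
#[derive(Debug, PartialEq)]
struct Y(u32);

struct X {
    y: Y,
}

impl X {
    // Returns an interior reference; the borrow checker ties its
    // lifetime to the borrow of self, with no heap indirection.
    fn y(&self) -> &Y {
        &self.y
    }
}

fn main() {
    let x = X { y: Y(42) };
    let r = x.y(); // r borrows x for as long as r lives
    assert_eq!(*r, Y(42));

    // This caller would be rejected at compile time, because the
    // returned reference would outlive the X it points into:
    // let dangling = { let x2 = X { y: Y(1) }; x2.y() };
    // error[E0597]: `x2` does not live long enough
}
```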

As far as I know, no other remotely mainstream language can express this, for various reasons. C/C++ can't do it because the compiler does not perform the lifetime checks. (The C++ Core Guidelines lifetime checking proposal proposes such checks, but it's incomplete and hasn't received any updates for over a year.) Most other languages simply prevent you from giving away an interior reference, or require y to refer to a distinct heap object from the X.

This is my go-to example every time someone suggests that modern C++ is just as suitable for safe systems programming as Rust.

Update: As I half-expected, people did turn up a couple of non-toy languages that can handle this example. D has some special-casing for this case, though its existing safety checks are limited and easily circumvented, accidentally or deliberately. (Those checks are in the process of being expanded, though it's hard to say where D will end up.) Go can also do this, because its GC supports interior pointers. That's nice, though you're still buying into the fundamental tradeoffs of GC: some combination of increased memory usage, reduced throughput, and pauses. Plus, relying on GC means it's nearly impossible to handle the "replace a C library" use case.

Saturday, 4 February 2017

rr 4.5.0 Released

We just released rr 4.5.0. Massive thanks to Keno Fischer who probably contributed more to this release than anyone else.

This is another incremental release. rr is already pretty good at what it does --- single-core recording, replaying, and debugging with gdb and reverse execution. The incremental nature of recent releases reflects that. It's still a significant amount of work to keep updating rr to fix bugs and cope with new kernel and hardware features as they start getting used by applications. Notably, this release adds support for Intel Knights Landing and Kaby Lake CPUs, contains a workaround for a regression caused by a Linux kernel security fix, and adds support for recording software using Hardware Lock Elision (and includes detection of a KVM bug that breaks our HLE support). This release also contains a fix that enables recording of Go programs.

We have some encouraging reports of rr adoption and rr users pretty much ceasing use of standalone gdb. However, rr definitely needs more users. When hardware and OS issues crop up, the bigger the user-base, the easier it will be to get the attention of the people whose cooperation we need. So, if you're a happy rr user and you want to help make sure rr doesn't stop working one day, please spread the word :-).

Let me say a tiny little bit about future plans. I plan to keep maintaining rr --- with the help of Keno and Kyle and other people in the community. I don't plan to add major new user-facing functionality to rr (i.e. going beyond gdb), but rather build such things as separate projects on top of rr. rr has a stable-in-practice C++ API that's amenable to supporting alternative debugger implementations, and I'd be happy to help people build on it. I've built but not yet released a framework that uses parts of DynamoRio to enable low-overhead binary instrumentation of rr replays; I plan to keep that closed at least until we have money coming in. The one major improvement that we expect we'll add to rr some time this year is reworking trace storage to enable easy trace portability and fix some existing issues.

I Really Admire Jehovah's Witnesses

... and other door-to-door evangelists.

I just had a couple of JWs visit me. I always cut through the script by asking door-to-door evangelists who they're with. Anyone who won't tell me straight-up I'd shoo off immediately, but people generally are open about it. On hearing they were JWs, I confirmed that they don't believe Jesus is God and asked them what they make of John chapter 1. We fenced amicably about those issues a little bit and I think they decided I wasn't worth their time.

Of course I think JWs are totally wrong about this key doctrine. However, I really admire the zeal and fortitude of anyone who does honest door-to-door evangelism, because I've done just enough of other kinds of evangelism to know how hard it would be. I believe that these people are at least partly motivated by genuine love for the lost. As Penn Jillette observed, if you think you've come across truth that's life-changing for everyone, only a coward or a criminal would fail to evangelize --- though door-to-door may not be the most effective approach.

Regardless, bravo!

Tuesday, 31 January 2017

A Followup About AV Test Reports

Well, my post certainly got a lot of attention. I probably would have put a bit more thought into it had I known it was going to go viral (150K views and counting). I did update it a bit but I see no reason to change its main thrust.

A respondent asked me to comment on the results of some AV product comparison tests that show Microsoft's product, the one I recommended, in a poor light. I had a look at one of the reports they recommended and I think my comments are worth reproducing here.

In that test, MS got 97% (1570 out of 1619) ... lower than most of the other products, but the actual difference is very small.

The major problem with tests like this is that they are designed to fit the strengths of AV products and avoid their weaknesses. The report doesn't say how they acquire their malware samples, but I guess they get them from the same sources AV vendors do. (They're not writing their own since they say they independently verify that their malware samples "work" on an unprotected machine.) So what they're testing here is the ability of AV software to recognize malware that's been in the wild long enough to be recognized and added to someone's database. That's exactly the malware that AV systems detect. But in the real world users will often encounter malware that hasn't had time to be classified yet, and possibly (e.g. in spear-phishing cases) will never be classified. A more realistic test would include malware like that. Testers should cover that by generating a whole lot of their own malware that's not in any database, and see how AV products perform on that. My guess is that that detection rate would be around 0% if they do it realistically, partly because in the real world, malware authors can iterate on their malware until they're sure the major AV products won't detect it. (And no, it doesn't really matter if the AV classifier uses heuristics or neural nets or whatever, except that using neural nets makes it devilishly hard to understand false positives and negatives.)

So for the sake of argument let's suppose 20% of attacks in the real world use new unclassified malware and 80% use old malware and none of the 20% are detected by AV products. In this report, that would be 405 additional malware samples not detected by any product. Now Microsoft scores 77.6% and the best (F-Secure in this case) scores 79.9%. That difference doesn't look as important now.
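The arithmetic above can be sketched in a few lines of Rust. The 20%/80% split is the hypothetical from the previous paragraph, the 1570/1619 figure is Microsoft's score from the report, and F-Secure's raw detection count (1617) is an assumption back-computed from its 79.9% adjusted figure:

```rust
// Adjusted AV detection rates, assuming some fraction of real-world
// attacks use fresh, unclassified malware that no product detects.
fn adjusted_rate(detected: u32, old_samples: u32, new_fraction: f64) -> f64 {
    // If the report's `old_samples` represent (1 - new_fraction) of attacks,
    // the undetected "new" samples number old_samples * new_fraction / (1 - new_fraction).
    let new_samples = (old_samples as f64) * new_fraction / (1.0 - new_fraction);
    (detected as f64) / (old_samples as f64 + new_samples) * 100.0
}

fn main() {
    // Microsoft: 1570 of 1619 samples detected in the report.
    println!("MS: {:.1}%", adjusted_rate(1570, 1619, 0.2)); // ~77.6%
    // F-Secure: assumed 1617 of 1619, the report's best scorer.
    println!("F-Secure: {:.1}%", adjusted_rate(1617, 1619, 0.2)); // ~79.9%
}
```

With 20% of attacks undetectable by everyone, a 2.9-point gap in the raw scores shrinks to 2.3 points, and both products miss more than a fifth of attacks.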

The other major issue with this whole approach is that it takes no account of the downsides of AV products. If a product slows down your system massively (even more than other products), that doesn't show up here. If a product blocks all kinds of valid content, that doesn't show up here. If a product introduces huge security vulnerabilities --- even if they're broadly known --- that doesn't show up here. If a product spams the user incessantly with annoying messages (that teach them to ignore security warnings altogether, poisoning the human ecosystem), that doesn't show up here.

This limited approach to testing probably does more harm than good, because to the extent AV vendors care about these test results, they'll optimize for them at the expense of those other important factors that aren't being measured.

There are also some issues with this particular test report. For example they say how important it is to test up-to-date software, and then test "Microsoft Windows 7 Home Premium SP1 64-Bit, with updates as of 1st July 2016" and Firefox version 43.0.4. But on 1st July 2016, the latest versions were Windows 10 and Firefox 47.0.1. Another issue I have with this particular report's product comparisons is that I suspect all it's really measuring is how closely their malware sample import pipeline matches the pipelines of other vendors. Maybe F-Secure won that benchmark because they happen to get their malware samples from exactly the same sources as AV-Comparatives and the other products use slightly different sources. The source of malware samples is critical here and I can't find anywhere they say what it is.

Of course there may be research that's better than this report. This just happens to be one recommended to me by an AV sympathizer.

Update: Bruce points out that page 6 of the report, second paragraph, does describe a bit more about how they acquired their samples: they say they scan the Internet themselves for malware samples. I don't know how I missed that! But critical information is still missing, like the lead time between a sample being scanned and then tested. I think the issues I wrote about above still apply.

Monday, 30 January 2017

Tripling Down Against USA Conference Hosting

Well, that escalated quickly!

I wrote "Really, Please Stop Booking International Conferences In The USA" and then the very next day chaos erupted as Trump's executive order on the "seven Muslim countries" appeared and went into effect.

I'm less open-borders than a lot of the people currently freaking out --- but regardless, the order is cruel and capricious, especially how it shut out people with valid visas and even green cards without warning. And I think this reinforces my point about conferences: how can you book a conference with international attendees in a country where this happens?

Administration staff are already talking about other disturbing changes:

Miller also noted on Saturday that Trump administration officials are discussing the possibility of asking foreign visitors to disclose all websites and social media sites they visit, and to share the contacts in their cell phones. If the foreign visitor declines to share such information, he or she could be denied entry. Sources told CNN that the idea is just in the preliminary discussion level.

No thanks :-(.

It has been pointed out that Trump's EO might actually force some organizations to hold more conferences in the USA because some US residents may not be able to return home if they go abroad. That makes some sense but it feels wrong.

On another note, around now the top US universities start their annual drive to recruit foreign grad students. I wonder how that's going to go. The professoriat not being exactly Trump country, they'll have to reconcile their fears with a message that foreign students are going to be OK. Careers depend on it.

Friday, 27 January 2017

Rustbelt Is Hiring

Rustbelt is looking for postdocs.

Most academic CS research projects strike me as somewhere between "could be useful" and "hahahaha no". Rustbelt is one of the very few in the category of "oh please hurry up we need those results yesterday". That's exactly the sort of project researchers should be flocking to.

The long-term goal here is to specify formal semantics for Rust's safe and unsafe code and have a practical, tool-supported methodology for verifying that modules containing unsafe code uphold Rust's safety guarantees. This would lead to an ecosystem where most developers mostly write safe code and that's relatively easy to do, but some developers sometimes write unsafe code and that's a lot harder to do because you have to formally verify that it's OK. In exchange, you know that your unsafe code isn't undermining the benefits of Rust; it's "actually safe". I think this would be a great situation to have. We need the results as soon as possible because they will suggest changes to Rust and/or what is considered "valid unsafe code", and the sooner they happen, the easier that would be.
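To illustrate the kind of module Rustbelt targets, here's a toy sketch (entirely my own example, not from the Rustbelt project): a safe API whose implementation uses unsafe code. Callers can only reach the buffer through the safe methods, so the module itself must uphold Rust's guarantees — exactly the obligation you'd want a tool-supported methodology to let you discharge formally.

```rust
// A toy safe abstraction over unsafe code: a fixed-capacity stack
// using raw, unchecked indexing internally. The safety argument lives
// entirely inside this module -- the thing Rustbelt wants to verify.
pub struct TinyStack {
    buf: [u64; 8],
    len: usize,
}

impl TinyStack {
    pub fn new() -> TinyStack {
        TinyStack { buf: [0; 8], len: 0 }
    }

    // Returns false if the stack is full; the capacity check is what
    // makes the unchecked write below sound.
    pub fn push(&mut self, v: u64) -> bool {
        if self.len == self.buf.len() {
            return false;
        }
        unsafe {
            // Sound because we just checked len < buf.len().
            *self.buf.get_unchecked_mut(self.len) = v;
        }
        self.len += 1;
        true
    }

    pub fn pop(&mut self) -> Option<u64> {
        if self.len == 0 {
            return None;
        }
        self.len -= 1;
        // Sound because len was > 0, so the decremented len is in bounds.
        Some(unsafe { *self.buf.get_unchecked(self.len) })
    }
}

fn main() {
    let mut s = TinyStack::new();
    s.push(1);
    s.push(2);
    assert_eq!(s.pop(), Some(2));
}
```

The informal comments ("sound because...") are exactly the reasoning a Rustbelt-style semantics would turn into machine-checked proof obligations.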

rr Talk At Auckland C++ Meetup, February 21

I've signed up to give a talk about rr at the Auckland C++ Meetup on February 21, at 6pm. Should be fun!

Really, Please Stop Booking International Conferences In The USA

Last year I opined that international organizations should stop booking conferences in the USA in case Trump became President and followed through on his promise to ban Muslims from entering the country.

That particular risk soared when he was unexpectedly elected, then decreased given he seems to have backtracked on a blanket Muslim ban. Still, it feels like almost anything could happen, and because conference locations have to be booked pretty far ahead, IMHO prudence still means preferring non-USA venues until the future becomes more clear. (The pre-Trump ESTA changes asking for social media account data are also quite troubling. It makes me wonder whether I'm making trouble just posting this ... and I shouldn't have to.)

I especially suggest that if you're one of those people who sees Trump as a clear and present danger to American democracy and social order, it doesn't really make sense to be comfortable with organizing international conferences there.

(To be clear: I'm not calling for some kind of boycott to punish Americans for making "the wrong choice". It's simply about making sure international conferences are as inclusive and congenial as possible.)

Thursday, 26 January 2017

Disable Your Antivirus Software (Except Microsoft's)

I was just reading some Tweets and an associated Hackernews thread, and it reminded me that, now that I've left Mozilla for a while, it's safe for me to say: antivirus software vendors are terrible; don't buy antivirus software, and uninstall it if you already have it (except, on Windows, for Microsoft's).

Update (Perhaps it should go without saying --- but you also need your OS to be up-to-date. If you're on Windows 7 or, God forbid, Windows XP, third party AV software might make you slightly less doomed.)

At best, there is negligible evidence that major non-MS AV products give a net improvement in security. More likely, they hurt security significantly; for example, see bugs in AV products listed in Google's Project Zero. These bugs indicate that not only do these products open many attack vectors, but in general their developers do not follow standard security practices. (Microsoft, on the other hand, is generally competent.)

Furthermore, as Justin Schuh pointed out in that Twitter thread, AV products poison the software ecosystem because their invasive and poorly-implemented code makes it difficult for browser vendors and other developers to improve their own security. For example, back when we first made sure ASLR was working for Firefox on Windows, many AV vendors broke it by injecting their own ASLR-disabled DLLs into our processes. Several times AV software blocked Firefox updates, making it impossible for users to receive important security fixes. Major amounts of developer time are soaked up dealing with AV-induced breakage, time that could be spent making actual improvements in security (recent-ish example).

What's really insidious is that it's hard for software vendors to speak out about these problems because they need cooperation from the AV vendors (except for Google, lately, maybe). Users have been fooled into associating AV vendors with security and you don't want AV vendors bad-mouthing your product. AV software is broadly installed and when it breaks your product, you need the cooperation of AV vendors to fix it. (You can't tell users to turn off AV software because if anything bad were to happen that the AV software might have prevented, you'll catch the blame.) When your product crashes on startup due to AV interference, users blame your product, not AV. Worse still, if they make your product incredibly slow and bloated, users just think that's how your product is.

If a rogue developer does speak out, the PR hammer comes down (even though they were probably right to speak!). But now I'm free! Bwahahaha!

Monday, 16 January 2017

Browser Vendors And Business Interests

On Twitter it has been said that "browser-vendors support web-standards when and only when those standards align with their own business interests". That's not always true, and even if it were, "business interests" are broad enough to make surprising results possible in a competitive browser market.

Mozilla, as a nonprofit, isn't entirely driven by "business interests". Mozilla often acts for "the good of the Web" even when that costs them money. (Example: pressing on with their own browser engine instead of switching to Chromium.)

Other vendors perceive (rightly or wrongly) that being seen to "do the right thing" has some business value. There is a PR and marketing effect, but also a recruiting effect; being seen as an evil empire makes it harder to recruit talented staff when other good options are available. To some extent Mozilla's existence has encouraged other vendors to compete on virtue.

Competitive markets can force vendors to implement standards they otherwise might not want to. For example, Apple needs an iOS browser that can render the modern Web or they'll leak market share to Android, so they're forced to implement Web platform improvements that you might think are not in the interests of their App Store business.

Decisions about Web standards and implementations are often made by individuals keen to "do the right thing" even if it might clash with corporate priorities. Everyone's good at rationalizing their decisions to themselves and others.

Saturday, 14 January 2017

Browser Vendors Are Responsible For The State Of Web Standards

The W3C and WHATWG are mainly just forums. They have some policies that may help or hurt standards work a little bit, but most responsibility for the state of Web standards rests on the participants in standards groups, especially browser vendors, who drive the development of most Web standards and are responsible for implementing most of them too. Therefore, most of the blame for problems in Web standards should be assigned to the specific browser vendors who generated and implemented those standards, and influenced them in various directions.

For example, the reason media element playback, Web Audio and MediaStreams are all quite different APIs is because when I proposed a MediaStream-based alternative to Web Audio, no other browser vendors were interested. Google already had separate teams working on Web Audio and MediaStreams and was already shipping behind a prefix, Apple was barely engaged at all in the Web Audio working group, and Microsoft was completely disengaged. It's not because of anything specifically wrong with the W3C or WHATWG. (FWIW I'm not saying my proposal was necessarily better than what we got; there are technical reasons why it's hard to unify these APIs.)

In fact, even assigning responsibility to individual browser vendors glosses over important distinctions. For example, even within the Chrome team you've got teams who care a lot about Web standards and teams who care a bit less.

One way to make positive change for Web standards is to single out specific undesirable behavior by specific vendors and call them out on it. I know (from both giving and receiving :-) ) that that has an impact. Assigning blame more generally is less likely to have impact.

Sunday, 8 January 2017

Parenting Notes

  • My oldest son just got a new phone. By mutual agreement, it's a dumbphone. He's trying to activate it and feels obliged to read the terms and conditions before agreeing to them. Somehow he's even more of a stickler for rules than I am; I've been trying to teach him how you sometimes need to break rules, but it's tricky to do that without wounding his conscience. He's struggling through the obtuse legalese. It's a real coming-of-age moment.
  • My children don't watch much TV and their Internet usage is restricted, but they read a lot of books and I'm supposed to keep track of them. I try hard to get them to read books I've already read and liked, but I still end up having to read a lot of popular child-oriented fiction, much of which is trash. Why aren't more of the ultra-popular teen series better written? There are some good writers, like J.K. Rowling, but so many others just churn out formula with mediocre writing and people lap it up. Right now I'm in Michael Grant's Gone series, which has very average writing but at least combines some good ideas in interesting ways. Could be worse; there's Rick Riordan, who writes similarly but without the ideas. The same rubbish exists in adult fiction, but I don't have to read that.
  • My kids don't like movies. I struggle to get them to go and see any movie, Star Wars, whatever, or even watch them at home. I don't understand it.
  • On the flip side, they love modern board games. That's excellent because my wife and I do too. It's a bit tough trying to work during the school holidays while they're constantly playing great games like Dominion, Lords of Waterdeep, Settlers, etc.

Saturday, 7 January 2017

Cheltenham Beach

Today we drove to Devonport to walk around North Head and Cheltenham Beach. It was a lovely summer day and Cheltenham Beach looked amazing, especially for a beach that's more or less in the heart of Auckland.

Wednesday, 4 January 2017

How China Can Pressure North Korea

This CNN article claims that there's very little even China can do to influence North Korea. I wonder why that is, because (though I'm relatively uninformed) it seems to me China could have a big impact on North Korea by reforming their policy for handling North Korean escapees.

Currently Chinese policy is that any escapees are returned to North Korea. This is inhumane because those returnees are jailed and often tortured, not to mention the hardships endured by escapees trying to reach South Korea traveling in secret. I understand that China doesn't want to host a flood of North Korean refugees, but I don't know why they couldn't coordinate with South Korea to efficiently ship any and all North Korean escapees to South Korea. (Maybe South Korea doesn't want this, but they could be made to accept it.) This would probably lead to people pouring out of North Korea, which would put significant pressure on the regime. Not only would this give political leverage, but it's also a far more humane policy.

Monday, 2 January 2017

Is CMS Software Generally Really Bad?

I'm helping overhaul our church Website. The old site was built in Joomla 1.5, and the new site is in Joomla 3.6 (I didn't choose the tech). Joomla's supposed to be a pretty popular system so I'm amazed at how much I dislike it and wondering whether there are better options available. There are a lot of small annoyances, some major design issues, and some really bad bugs.

One fundamental issue is that it's very difficult to work backwards from viewing a page on the site to being able to change the content on that page, until you've studied Joomla and the site internals to figure out how the pages are assembled. I had to read a lot of not-particularly-well-organised documentation to grasp the concepts of menus, articles, components, modules and templates, then do hours of spelunking with Firefox devtools, browsing the administrator interface and making educated guesses to figure out which page parts were generated by which Joomla entities. It seems to me that this could be a lot easier, either by offering WYSIWYG tools that show you visually how a page is assembled (and let you edit those parts in-place!), or at least by leaving consistent notes in the generated DOM to indicate where pieces came from. Simplifying the assembly model would also help a lot.

Another fundamental issue is that there's no version control or change preview. As far as I know, the only way to figure out what impact a change is going to have is to make it on the live site. It might be possible to set up a staging server, but that looks difficult, and probably impractical for a small-ish project like ours. If you don't like the effects of a change, you have to reverse it manually, which is horrible and error-prone. Without version control there's no way to view changes made by others, make experimental branches, etc. Those latter two problems would not be solved by a staging server anyway.

Those issues seem so fundamental to me I'm surprised a major CMS fails on them.

Then there are the bugs. My least favourite bug right now is that often, when I edit HTML using the WYSIWYG HTML editor, upon saving the article the targets of all links are replaced with the target of the last link (destroying my work in a non-undoable way, see above). I assume such a terrible bug must be specific to our installation somehow, but the system is complex and opaque enough that debugging it seems impractical.

Our church's needs are not complicated. The site is mostly static and there isn't a huge amount of content. Nevertheless the pages load quite slowly and the generated HTML/CSS/JS is bloated. It's tempting to start over with a completely different approach. Then again, I know there's a plethora of CMSes and Web frameworks and Joomla is very popular (at least in absolute terms) so I feel like there must be something I'm missing.