Monday, 8 June 2020

Cape Brett 2020

This weekend I walked to Cape Brett Hut for the third time, with some friends and family. It's a tough hike but once again we had great weather, especially for a winter tramp.

Getting to and from the start at Rawhiti was quite exciting, but once we were on the track the walk was straightforward, though arduous. There is no especially difficult terrain, just endless up-and-down steep hills. The bush is beautiful; there are consistently spectacular views of the Bay of Islands to the north and the Tutukaka coast to the south, with steep cliffs, rocks, and sheltered bays; and the ocean looks appropriately vast. We took our time and got to the hut after about eight hours, at 4pm. One of our group went down to the rocks to fish and caught a snapper, which we had for dinner along with sausages and bread. There were a dozen seals on the rocks across the channel that were fun to watch.

The DoC website has a permanent warning that the hut water is salty. Last year it seemed fine, but this time it was certainly not fit for human consumption, so we had to conserve water carefully.

The next day we watched the sun rise over the ocean and had a good bacon-and-egg breakfast. The return walk was again both arduous and glorious. We ran a little late and finished right as the sun was setting over the Bay of Islands, which was very beautiful. Everyone was pretty tired and some were quite sore, but over time I think the good memories will last!

My Google Maps Disaster

Last weekend I did the Cape Brett Track with some friends and family (more about that in a separate post). On Saturday morning I left Whangarei at 6am, planning to take Russell Road and reach the trailhead at Rawhiti around 7:30am, so we could start the walk just after dawn.

The first thing that went wrong was that we missed the turnoff to Russell Road from State Highway One. Google Maps (and my road atlas) shows the turnoff at Whakapara, but no road signs in that area mention Whakapara, so it's easy to miss in the dark.

We carried on, looking for a place to turn around. The next opportunity was at the intersection with Puhipuhi Road. Google Maps shows Puhipuhi Road connecting with Russell Road via Teatree Flat Road and Peach Orchard Road, so I thought we might as well just take it. (Narrator: big mistake!)

Upon reaching Teatree Flat Road, we discovered the street sign was marked "no exit". Worrying, but Google Maps clearly shows it connecting to Peach Orchard Road, so we might as well have a look, right? (Narrator: no.)

At the end of Teatree Flat Road a sign said "Council Maintenance Ends", but a road continued where Peach Orchard Road should have been, and it didn't look too bad. Backtracking might have forced our friends to wait for quite a while ... so we gave it a go. (Narrator shakes head.)

Peach Orchard Road deteriorated rapidly. Before we knew it, our people-mover was bouncing down a track that seemed to be mostly rocks and tree roots. It was clear that I had made a dreadful mistake and should not have been on that road in that car. I was genuinely scared and praying fervently. There was nowhere to turn around, and even if there had been, I'm not sure our car could have gotten back up the hill, so I gritted my teeth and kept going, just hoping the road wouldn't get worse and our car wouldn't fall apart. Thank God, the road started to improve again, joined a proper gravel road, and then we were on Russell Road. We caught up to our friends and were back on schedule.

But there is a sequel! After we finished the walk on Sunday night, we started driving out and immediately discovered we had a flat tire! I assume this was a slow leak from damage inflicted by Peach Orchard Road :-(. Still, I'm grateful it didn't stop us from getting to Rawhiti. We changed the tire and were able to get home after several hours of cautious driving.

All in all it was a cheap lesson: don't trust Google Maps for the connectivity or condition of rural roads! At least check whether there is Street View coverage for the roads you want to use. In fact, the drivers of the Google Maps car were wise enough to skip the scary part of Peach Orchard Road :-).

Monday, 11 May 2020

Why Forking HTML Into A Static Language Doesn't Make Sense

Quite often someone proposes something like this:

HTML should never have grown into the mutated application runtime it is today. The presentational concerns for documents are different from application rendering. The javascript stack should have been something entirely separate.

I strongly feel we should create a lightweight HTML fork that is again document-centric and doesn't allow for all of this javascript nonsense. Something that doesn't allow for stupid custom UI or behavioural tracking. Just text, images, videos, and links. The painting algorithm would be dead simple, documents would load lightning fast, and we'd be confident there would be no ad malware.

Rather than reply directly, I decided to write this blog post so I can refer to it again next time this comes up.

There are incorrect factual assumptions here. "The presentational concerns for documents are different from application rendering" is incorrect; there is a lot of overlap, and there is a continuum between static documents and interactive applications, with use-cases all the way along that continuum. "Something that doesn't allow for stupid custom UI or behavioural tracking. Just text, images, videos, and links" is self-contradictory; embedded images, videos and links allow for lots of tracking and custom UI. (A 1x1 image whose URL encodes a user identifier is the classic tracking pixel.)

A bigger issue is that defining a static fork of HTML is the easy part; the real problem is persuading Web developers to use it, which is immensely difficult, and no-one has any good ideas for how to do it. And if you do find a way to persuade Web developers to avoid the features you don't like, there isn't much value in defining a specific "static subset" of HTML anyway.

By the way, I partially lied just now: defining a static fork of HTML is only technically easy. I'm sure everyone who thinks the modern Web has gone off the rails has a slightly different list of features that should be allowed, so getting your entire constituency to agree on a specific list of allowed features is going to be very difficult. (E.g., are forms in that subset? contenteditable? Video? Animated GIFs? CSS :hover? CSS media queries? Different people will give different answers to these questions.)

Some people believe that defining a static subset of HTML would benefit Web developers and users because we could build a much more efficient browser (or browser mode) for that subset. As a former Mozilla distinguished engineer, I'm skeptical. Even something as fundamental as window size changes requires the ability to do incremental relayout. Any kind of editing or forms requires support for DOM changes, as does rendering of incrementally loaded documents. You could take a few shortcuts but you won't easily beat the performance of existing browsers on the same content, especially keeping in mind the incredibly impressive optimizations those browsers have already implemented.

The only possible benefit I can see of a defined static HTML subset is that for documents opting into that subset, browsers might be able to put all those documents into a single sandboxed child process instead of splitting them into a different sandboxed process per site, on the assumption that these documents are a negligible security risk to each other. I don't think this is at the forefront of the minds of advocates for a static HTML subset.

Friday, 8 May 2020

Omniscient JS Debugging In Pernosco

Until recently Pernosco was limited to debugging statically-compiled languages with DWARF debuginfo. Many of our potential customers would like to debug other languages as well, so we're doing that, starting with support for debugging Javascript running in V8. Here's a demo:

You can also open the Pernosco session in the demo yourself and follow along!

This is an rr recording of Chrome, processed by Pernosco. We haven't modified the Chrome/V8 source code. Instead Pernosco observes the operation of V8 — using our omniscient infrastructure — and infers what the JS code is doing. We get an omniscient debugging experience for Javascript — the first tool in this space, as far as I know. Not only that, you get C++ and Javascript debugging tightly integrated together, which is invaluable in some scenarios.

Currently we only understand the V8 interpreter, so we disable the JIT (--js-flags=--jitless). Handling the V8 baseline compiler would be more work, but probably straightforward. What we already have should be enough to debug many Node applications, Electron apps, and even Web apps.

Pernosco is quite a different sort of experience to the "browser devtools" debugging most Web developers are used to. We can certainly implement more devtools features, but for some applications a tool like WebReplay will be more appropriate.

As usual we're only scratching the surface of what can be done here. We've built infrastructure to make it easier to support more languages, and we're eager to add whatever our customers might need. If you're interested in using Pernosco for your debugging needs, get in touch with your requirements and we may be able to set you up with a free demo.

Friday, 17 April 2020

Have Some Humility, Mike Hosking

Today Mike Hosking is saying that our lockdown was an overreaction.

On March 23 he was saying "If we want to beat Covid-19, shut New Zealand down". "Michael Baker is right, we need to be shutting doors." No concern expressed about overreacting.

March 24, after the lockdown was announced, he wrote: "Thank God we got there in the end". "My only real fear at this point is us." "Being stuck will be fun for some for a while, then it'll be a pain. That's a very small price to pay for, at last, the urgency of this starting to drive decision making."

I do not see anything from Hosking, from then until recently, saying our lockdown was an overreaction.

So it's easy now to look at results in Australia and say maybe what they did was enough. But who could be confident saying that three weeks ago? Not Mike Hosking apparently, and I think not anyone.

Have some humility, Mike Hosking.

Friday, 27 March 2020

What If C++ Abandoned Backward Compatibility?

Some C++ luminaries have submitted an intriguing paper to the C++ standards committee. The paper presents an ambitious vision to evolve C++ in the direction of safety and simplicity. To achieve this, the authors believe it is worthwhile to give up backwards source and binary compatibility, and focus on reducing the cost of migration (e.g. by investing in tool support), while accepting that the cost of each migration will be nonzero. They're also willing to give up the standard linking model and require whole-toolchain upgrades for each new version of C++.

I think this paper reveals a split in the C++ community. The proposal makes very good sense for organizations like Google with large legacy C++ codebases that they intend to keep investing in heavily for a long time. (I would include Mozilla in that set of organizations.) For them, the long-term gains from improving C++ incompatibly will eventually outweigh the ongoing migration costs, especially because such organizations are already adept at large-scale systematic changes to their code (e.g. thanks to gargantuan monorepos, massive-scale static and dynamic checking, and risk-mitigating deployment systems). Lots of existing C++ software really needs those safety improvements.

I think it also makes sense for C++ developers whose projects are relatively short-lived, e.g. some games. They don't need to worry about migration costs and will reap the benefits of C++ improvement.

For mature, long-lived projects that are poorly resourced, such as rr, it doesn't feel like a good fit. I don't foresee adding a lot of new code to rr, so we won't reap much benefit from improvements in C++. On the other hand it would hurt to pay an ongoing migration tax. (Of course rr already needs ongoing maintenance due to kernel changes etc, but every extra bit hurts.)

I wonder what compatibility properties toolchains would have if this proposal carries the day. I suspect the intent is that the latest version of a compiler would implement only the latest version of C++, but it's not clear. An aggressive policy like that would increase the pain for projects like rr (and, I guess, every other C++ project packaged by Linux distros), because we'd be dragged along more relentlessly.

It'll be interesting to see how this goes. I would not be completely surprised if it ends with a fork in the language.

Wednesday, 11 March 2020

Debugging Gdb Using rr: Ptrace Emulation

Someone tried using rr to debug gdb and reported an rr issue because it didn't work. With some effort I was able to fix a couple of bugs and get it working for simple cases. Using improved debuggers to improve debuggers feels good!

The main problem when running gdb under rr is the need to emulate ptrace. We had the same problem when we wanted to debug rr replay under rr. In Linux a process can only have a single ptracer. rr needs to ptrace all the processes it's recording — in this case gdb and the process(es) it's debugging. Gdb needs to ptrace the process(es) it's debugging, but they can't be ptraced by both gdb and rr. rr circumvents the problem by emulating ptrace: gdb doesn't really ptrace its debuggees, as far as the kernel is concerned, but instead rr emulates gdb's ptrace calls. (I think in principle any ptrace user, e.g. gdb or strace, could support nested ptracing in this way, although it's a lot of work so I'm not surprised they don't.)
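
To make the single-ptracer rule concrete, here is a minimal standalone sketch of my own (not rr or gdb code): once a process has a tracer, any further PTRACE_ATTACH is refused with EPERM.

/* Sketch: the kernel allows at most one tracer per process.
   A second PTRACE_ATTACH to an already-traced tracee fails with EPERM. */
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t child = fork();
    if (child == 0) {
        pause();                     /* tracee: just sit here */
        _exit(0);
    }

    /* First attach succeeds; we are now the child's one and only tracer. */
    if (ptrace(PTRACE_ATTACH, child, NULL, NULL) == -1) {
        perror("PTRACE_ATTACH");
        return 1;
    }
    waitpid(child, NULL, 0);         /* consume the attach-stop */

    /* A second attach to the same tracee is rejected. */
    if (ptrace(PTRACE_ATTACH, child, NULL, NULL) == -1)
        printf("second attach failed: %s\n", strerror(errno)); /* EPERM */

    ptrace(PTRACE_DETACH, child, NULL, NULL);
    kill(child, SIGKILL);
    return 0;
}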

Most of the ptrace machinery that gdb needs already worked in rr, and we have quite a few ptrace tests to prove it. All I had to do to get gdb working for simple cases was fix a couple of corner-case bugs. rr has to synthesize the SIGCHLD signals sent to the emulated ptracer, and these signals weren't interacting properly with sigsuspend. Also, for some reason gdb spawns a ptraced process, then kills it with SIGKILL and waits for it to exit. That wait has to be emulated by rr, because in Linux the regular "wait" syscalls can only wait for a non-child process if the waiter is ptracing the target process, and under rr gdb is not really the ptracer, so the native wait doesn't work. We already had logic for this, but it didn't handle process exits triggered by signals, so I had to rework it, which was actually pretty hard (see the rr issue for the horrible details).
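
That wait restriction can also be illustrated with a small sketch of my own (again, not rr code): waiting for a process that isn't your child normally fails with ECHILD, but succeeds once you are ptracing it. (On kernels with YAMA restrictions you may need kernel.yama.ptrace_scope=0 or CAP_SYS_PTRACE for the attach to a non-descendant to succeed.)

/* Sketch: waitpid() refuses non-children (ECHILD) unless the waiter is
   ptracing the target, in which case the kernel treats the tracee like a
   child for wait purposes. This is the behaviour rr must emulate for gdb. */
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds)) return 1;

    pid_t child = fork();
    if (child == 0) {
        /* Create a grandchild, report its pid, then exit, so the grandchild
           is no longer a descendant of the original process. */
        pid_t grandchild = fork();
        if (grandchild == 0) {
            pause();
            _exit(0);
        }
        write(fds[1], &grandchild, sizeof(grandchild));
        _exit(0);
    }

    pid_t target;
    read(fds[0], &target, sizeof(target));
    waitpid(child, NULL, 0);         /* reap the intermediate child */

    /* Not our child and not traced by us: the wait is refused. */
    if (waitpid(target, NULL, 0) == -1 && errno == ECHILD)
        printf("waitpid without ptrace: ECHILD\n");

    /* Once we attach (may require relaxed YAMA ptrace_scope, since the
       target is not our descendant), we can wait for its attach-stop. */
    if (ptrace(PTRACE_ATTACH, target, NULL, NULL) == 0) {
        int status;
        if (waitpid(target, &status, 0) == target && WIFSTOPPED(status))
            printf("waitpid after PTRACE_ATTACH succeeded\n");
        ptrace(PTRACE_DETACH, target, NULL, NULL);
    }

    kill(target, SIGKILL);
    return 0;
}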

After I got gdb working I discovered it loads symbols very slowly under rr. Every time gdb demangles a symbol it installs (and later removes) a SIGSEGV handler to catch crashes in the demangler. This is very sad and does not inspire trust in the quality of the demangling code, especially if some of those crashes involve potentially corrupting memory writes. It is slow under rr because rr's default syscall handling path makes cheap syscalls like rt_sigaction a lot more expensive. We have the "syscall buffering" fast path for the most frequent syscalls, but supporting rt_sigaction on that path would be rather complicated, and I don't think it's worth doing at the moment, given that you can work around the problem with maint set catch-demangler-crashes off. I suspect that (especially with KPTI) doing two or three syscalls per symbol demangled (sigprocmask is also called) hurts gdb performance even without rr, so ideally someone would fix that: either fix the demangling code (possibly by rewriting it in a safe language like Rust), batch symbol demangling so the signal handler isn't installed and removed thousands of times, or move demangling to a child process and talk to it asynchronously over IPC — which would be safer too!
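
For what it's worth, here is a rough sketch of the batching idea. This is my own illustration, not gdb's code, and demangle_one() is a hypothetical stand-in for the real demangler: the point is that the crash-catching SIGSEGV handler is installed once per batch of symbols rather than once per symbol.

/* Rough sketch of batched demangling: one SIGSEGV handler install/restore
   per batch rather than per symbol. demangle_one() is a hypothetical
   stand-in for the real demangler. Jumping out of a SIGSEGV handler like
   this is the same dubious-but-common trick the per-symbol version uses. */
#include <setjmp.h>
#include <signal.h>
#include <stddef.h>
#include <string.h>

extern char *demangle_one(const char *mangled);   /* hypothetical demangler */

static sigjmp_buf demangle_env;

static void segv_handler(int sig) {
    (void)sig;
    siglongjmp(demangle_env, 1);                  /* abandon the current symbol */
}

void demangle_batch(const char *const *mangled, char **out, size_t n) {
    struct sigaction sa, old;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = segv_handler;
    sa.sa_flags = SA_NODEFER;     /* leave the signal mask alone: no per-symbol sigprocmask */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, &old);                /* one install for the whole batch */

    for (size_t i = 0; i < n; i++) {
        if (sigsetjmp(demangle_env, 0) == 0) {
            out[i] = demangle_one(mangled[i]);
        } else {
            out[i] = NULL;                        /* demangler crashed on this symbol */
        }
    }

    sigaction(SIGSEGV, &old, NULL);               /* one restore, not one per symbol */
}

Something along these lines would turn thousands of rt_sigaction round trips into two per batch, which would help a lot under rr and probably a little without it too.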