Thursday, 29 March 2018

Speeding Up `dwarfdump` With Rust

Writing a debugger for C++ on Linux, you spend a lot of time examining pretty-printed DWARF debug information using tools like readelf, objdump or dwarfdump. Unfortunately this can be quite slow. For Firefox's libxul.so, dwarfdump's pretty-printed output of just the main .debug_info is 25GiB. The standard objdump and readelf tools take about three minutes to print it to /dev/null.

The Rust gimli crate includes a Rust-implemented version of dwarfdump. Initially it took about eight minutes to dump libxul, although the comparison is unfair because dwarfdump dumps more data than readelf. I decided to try to speed dwarfdump up. TL;DR: I reduced the dump time from 506s to 26s by fixing some simple issues and taking advantage of Rust "fearless parallelism". I think there are interesting opportunities for speeding up many kinds of command-line tools using Rust and parallelism.

Initially gimli dwarfdump used println! to print every line of output. Every println! call temporarily locks the Rust stdout stream and performs a write system call, even when output is redirected to a file. This is extremely inefficient; it's much more efficient to buffer the output so that many lines are written with each system call. Also, since dwarfdump used only one thread, it could take the stdout lock once at the start and hold it for the entire run. These changes reduced the run time to 215s.
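A minimal sketch of the pattern (my own toy loop, not dwarfdump's actual code): lock stdout once and wrap it in a BufWriter so each write system call carries many lines.

    use std::io::{self, BufWriter, Write};

    fn main() -> io::Result<()> {
        let stdout = io::stdout();
        // Take the stdout lock once for the whole run instead of once per
        // println!, and buffer so each write(2) carries many lines.
        let mut out = BufWriter::new(stdout.lock());
        for i in 0..100_000 {
            writeln!(out, "line {}", i)?;
        }
        out.flush()
    }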

I did some profiling to look for hotspots amenable to micro-optimization. A lot of CPU time was going into Rust's formatted-string padding code: dwarfdump right-padded an empty string ("{:indent$}") to insert a given number of spaces at the start of each line. This is quite slow because the padding implementation writes the padded value to a temporary buffer, measures the length of the resulting string, and then writes out the buffered value and the padding bytes. I changed dwarfdump to indent by printing slices of a large string of spaces, reducing the run time to 142s. I also found that where some non-empty string tokens were being right-padded, we could emit the padding as a string slice too, taking advantage of the fact that these are static strings whose lengths we know. That reduced the run time to 99s.
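The technique, in a hedged sketch (the names are mine, not dwarfdump's): slice a preallocated run of spaces instead of going through the formatter's measure-and-pad machinery.

    use std::io::{self, Write};

    // A preallocated run of spaces; wider indents loop.
    const SPACES: &[u8] = &[b' '; 128];

    // Write `n` spaces without the formatter's temporary buffer and
    // length measurement.
    fn write_indent<W: Write>(w: &mut W, mut n: usize) -> io::Result<()> {
        while n > 0 {
            let k = n.min(SPACES.len());
            w.write_all(&SPACES[..k])?;
            n -= k;
        }
        Ok(())
    }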

There are probably opportunities to improve the performance of Rust formatted text output more generally, by exploiting static knowledge of the format string and of the parameters' types and other details.

I then turned to parallelism. dwarfdump prints the contents of .debug_info by looping over each compilation unit, and this is easy to do in parallel — in Rust, at least. In Rust, a method that takes a shared &self reference can safely be called on the same object from multiple threads as long as the type is Sync, which types without interior mutability are by default; so it's evident from a library's API types whether and how it can be used with multiple threads, and the compiler checks your usage. Better still, since immutability is the default, in practice Rust libraries tend to work well with multithreading, and gimli is no exception.
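As a toy illustration of that guarantee (my own types, not gimli's): scoped threads can all call a &self method on one shared object, and this compiles only because the type is Sync.

    use std::thread;

    struct Units {
        lengths: Vec<usize>,
    }

    impl Units {
        // Takes &self only: no mutation, so concurrent calls are safe.
        fn unit_len(&self, i: usize) -> usize {
            self.lengths[i]
        }
    }

    fn main() {
        let units = Units { lengths: vec![10, 20, 30, 40] };
        // Units has no interior mutability, so it is Sync and &units may
        // be shared across threads; the compiler verifies this.
        thread::scope(|s| {
            for i in 0..units.lengths.len() {
                let u = &units;
                s.spawn(move || println!("unit {} has length {}", i, u.unit_len(i)));
            }
        });
    }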

One important detail is that we want to output the results for compilation units in the correct order: the output for a compilation unit has to be buffered and printed to stdout after the output for all previous compilation units. You don't want to buffer the output of too many compilation units at once; each of those buffers can be tens of megabytes. I created a parallel-output utility function to handle that. It assigns compilation units to a fixed number of worker threads (N) in a strict round-robin order and ensures that a worker thread doesn't start working on a new compilation unit until the results for its previous compilation unit have been printed. Thus at most N output buffers are required. With eight worker threads (for my quad-core, eight-hardware-threads Skylake laptop) this reduced the run time to 26s.
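Here's a sketch of that scheme under stated assumptions (dump_unit stands in for the real per-unit pretty-printing; the actual utility function may differ): rendezvous channels make each worker wait until its previous buffer has been printed, which bounds the number of live buffers.

    use std::io::{self, Write};
    use std::sync::mpsc::sync_channel;
    use std::thread;

    // Stand-in for pretty-printing one compilation unit into a buffer.
    fn dump_unit(unit: usize) -> Vec<u8> {
        format!("unit {} ...\n", unit).into_bytes()
    }

    fn dump_all(num_units: usize, num_workers: usize) -> io::Result<()> {
        let mut receivers = Vec::new();
        let mut workers = Vec::new();
        for w in 0..num_workers {
            // Capacity-0 (rendezvous) channel: a worker's send blocks until
            // the printer receives, so it can't run ahead of its last buffer.
            let (tx, rx) = sync_channel::<Vec<u8>>(0);
            receivers.push(rx);
            workers.push(thread::spawn(move || {
                // Round-robin assignment: worker w gets units w, w+N, w+2N, ...
                let mut unit = w;
                while unit < num_units {
                    tx.send(dump_unit(unit)).unwrap();
                    unit += num_workers;
                }
            }));
        }
        let stdout = io::stdout();
        let mut out = stdout.lock();
        // Print in unit order, cycling through the workers' channels.
        for unit in 0..num_units {
            out.write_all(&receivers[unit % num_workers].recv().unwrap())?;
        }
        for w in workers {
            w.join().unwrap();
        }
        Ok(())
    }

    fn main() -> io::Result<()> {
        dump_all(20, 8)
    }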

Even with a single worker thread, run time dropped to 77s; some of that improvement actually came from switching the output writes from a BufWriter to a Vec<u8>, perhaps because of io::Result error propagation and checking. Two threads give 47s, and four threads give 30s; using more threads than physical cores yields diminishing returns.

At peak performance gimli dwarfdump produces 1GB of output per second. It's interesting that even for relatively simple pretty-printing and such high data volume, serialized large writes to stdout are not the bottleneck. This suggests that even simple Unixy stdio-pipeline tools might benefit from internal parallelism.

Achieving this performance by improving existing C tools would have been a lot more difficult than with gimli and Rust. There's no way to be sure that the relevant DWARF processing code, e.g. binutils/dwarf.c, is safe for use on multiple threads. (Given the presence of many unadorned static variables, it probably isn't.) Efficiently switching output backends would have been more difficult than in Rust, where it's idiomatic to parameterize output code on the static type of the output backend, e.g. fn dump_info<W: Writer>(w: &mut W, ...).
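For example, using std::io::Write as a stand-in (gimli's actual Writer trait differs; this is just a sketch of the idiom):

    use std::io::{self, BufWriter, Write};

    // Generic over the output backend: the compiler monomorphizes this
    // for each W, so the writes are statically dispatched.
    fn dump_info<W: Write>(w: &mut W, units: &[&str]) -> io::Result<()> {
        for unit in units {
            writeln!(w, "{}", unit)?;
        }
        Ok(())
    }

    fn main() -> io::Result<()> {
        // The same code drives an in-memory buffer or buffered stdout.
        let mut buf: Vec<u8> = Vec::new();
        dump_info(&mut buf, &["unit 0", "unit 1"])?;
        let stdout = io::stdout();
        let mut out = BufWriter::new(stdout.lock());
        out.write_all(&buf)
    }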

Tuesday, 20 March 2018

Too Many DWARF Packaging Options

There are too many ways in which DWARF debuginfo can be packaged, and it makes building DWARF-consuming tools a nightmare.

Debuginfo can be packaged into the executable file, or into a separate external debuginfo file. That debuginfo file can be referenced via a build id stored in the .note.gnu.build-id section, or via a filename stored in the .gnu_debuglink section.
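As an illustration of the second mechanism, here's a hedged sketch of decoding a .gnu_debuglink payload; the layout is a NUL-terminated filename, zero padding to a 4-byte boundary, then a CRC32 of the debug file (stored in the ELF file's byte order; little-endian is assumed below).

    // Sketch: decode the contents of a .gnu_debuglink section.
    fn parse_debuglink(section: &[u8]) -> Option<(&str, u32)> {
        let nul = section.iter().position(|&b| b == 0)?;
        let name = std::str::from_utf8(&section[..nul]).ok()?;
        let crc_offset = (nul + 1 + 3) & !3; // round up to 4-byte alignment
        let crc: [u8; 4] = section.get(crc_offset..crc_offset + 4)?.try_into().ok()?;
        Some((name, u32::from_le_bytes(crc)))
    }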

Either of those kinds of debuginfo file can use "split DWARF". With split DWARF, most of the debuginfo is not present in the file itself; instead it is left in the object files that were input to the linker, and the debuginfo binary references those files. Unfortunately there are two flavours of this scheme: the standard DWARF5 flavour and the earlier GNU "DWO" flavour, and they are different. You can merge the split debuginfo from multiple object files into a single package file, although I don't think that adds to the complexity of debuginfo-consuming tools.

On top of that, there is also "multi-file DWZ". This lets you extract DWARF data that's common to multiple binaries into a single "alternative debuginfo" file referenced by multiple binaries via a .gnu_debugaltlink section. This has been standardized as "supplementary object files". Fedora is using the GNU variant in its debuginfo packages already.

You can optionally zlib-compress individual ELF sections in any of those files.

These techniques could, in theory, be combined. Fedora uses external debuginfo files with multi-file DWZ. I think you could have a binary that uses both "supplementary object files" and "split DWARF". I think the DWARF spec rules out a "supplementary object file" itself using "split DWARF", or vice versa, but it's a bit vague.

It's unfortunate that "supplementary object files" and "split DWARF" are different.

It's also very unfortunate that many new DWARF features exist in a GNU flavour and a standard flavour. In practice DWARF-consuming tools need to support both, which is extra work. The DWARF community could learn from the painfully-won wisdom of the Web standards community.

I apologise if I got any of the above wrong. It's complicated and confusing, and documentation is scattered or in some cases nonexistent.

Update: It turns out that ELF per-section compression is more complex than I knew. Any section may be marked SHF_COMPRESSED, in which case it starts with an Elf32_External_Chdr or Elf64_External_Chdr header. Some Fedora packages use this. However, you can also have compressed sections with a different sort of header: the ASCII bytes "ZLIB" followed by an 8-byte big-endian uncompressed size.
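A hedged sketch of detecting that second, older header form (the detection logic is mine):

    // Sketch: detect the legacy compression header used by .zdebug-style
    // sections: the ASCII bytes "ZLIB", then the uncompressed size as an
    // 8-byte big-endian integer, then the raw zlib stream.
    fn parse_legacy_zlib(section: &[u8]) -> Option<(u64, &[u8])> {
        if section.len() >= 12 && section.starts_with(b"ZLIB") {
            let size = u64::from_be_bytes(section[4..12].try_into().ok()?);
            Some((size, &section[12..]))
        } else {
            None
        }
    }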

Also, it appears the GNU tools will accept a .zdebug variant of every .debug section. The "z" is supposed to indicate compression, but nothing seems to require that a .zdebug section actually be compressed, or that a .debug section be uncompressed.

Tuesday, 6 March 2018

"Zach": AI Fraud In Christchurch

David Farrier's story is amazing. Go and read it.

I have no doubt whatsoever that this is all a ridiculous fraud. Apart from the implausibility of it all, the purported technical details make no sense. Serious AI outfits use racks of cheap servers, not expensive supercomputers like the Cray XC50. The XC50 is cooled with water, not liquid nitrogen. Training an AI by sending it emails in English would only work if it had already achieved human-level intelligence, in which case (a) impressive effort by Albi Whale, the purported "boy genius"; (b) why bother taking up the time of medical professionals to train it, when it could learn much faster from Wikipedia and other resources on the Internet; and (c) why apply the pinnacle of mankind's technological achievement to transcribing medical records in Christchurch?

David Whale's emails to Farrier are full of bluster: someone who doesn't know much about computers trying to impress someone he thinks also doesn't know much. He's not telling the truth. Robert Seddon-Smith and John Pickering are either active fraudsters or victims.

Most concerning is that it sounds like government health boards are either on the verge of funneling funds to this fraud, or are already doing so. That needs to be stopped.

Friday, 2 March 2018

Tongariro Northern Circuit #2

A couple of weeks ago I went with a friend to do the Tongariro Northern Circuit for the second time in less than a year. The weather wasn't quite as good as last time but we had another great trip.

We drove down to the mountains, stopping at Orakei Korako on the way — a pretty good thermal area. We spent Friday night in Taupo so we could arrive at the first hut, Mangatepopo, in good time on Saturday. We had a swim in Lake Taupo — surprisingly warm. On Saturday morning we drove to Whakapapa with a stop for a two-hour walk around Lake Rotopounamu — lovely and peaceful.

We knew Cyclone Gita was scheduled to hit New Zealand on Tuesday, the planned fourth day of our walk, so I checked at the DoC office at the start of our walk in Whakapapa. They advised us to just go ahead, that if necessary on the fourth day we could walk out to the Desert Road in 1.5 hours instead of returning to Whakapapa over the exposed saddle between Ruapehu and Tongariro.

The first day to Mangatepopo Hut was a bit rainy and the track is in poor condition ... perhaps the worst one-day section of track in the entire Great Walks system. Nevertheless the area is full of pretty wildflowers this time of year, and we got to Mangatepopo in good time, just after 3pm. Having just two in our group — instead of ten on my last tramp! — meant we talked to a lot more people, most of whom seemed to be Americans for some reason. We had a great time, made popcorn and shared it out, taught San Juan to a few people, and got to know some trampers we'd see a lot of for the rest of the walk.

Sunday's walk across Tongariro to Oturere Hut was busy as expected, as we shared the track with hundreds of people doing the Tongariro Crossing that day. Nevertheless the landscape never ceases to amaze. Oturere is quite cramped (we heard it's scheduled for an upgrade in 2019) but we had another great time: more San Juan, more popcorn, and finally a clear view of Ngauruhoe.

On Monday morning at Oturere I overheard one man propounding "religious people think they have all the answers ... must be comforting (to be so ignorant)". That's totally contrary to my experience. Who contemplates the mystery of the Trinity, or their own sin, for example, and comes away thinking they have all the answers? I resisted interjecting.

The forecast for Tuesday was that with Cyclone Gita approaching we'd have 100+km/hour winds and heavy rain. That would be a bad combination for a four-hour walk in very exposed terrain between Ruapehu and Ngauruhoe. We decided to accelerate our plans and complete Tuesday's leg on Monday. So, we had a two-and-a-half-hour walk to Waihohonu hut — still the best hut in New Zealand AFAIK — and after a half-hour break carried on all the way back to Whakapapa (five and a half hours) with short side trips to Lower Tama Lake and Taranaki Falls. Rain was forecast for the afternoon but we only had a light sprinkle before arriving at Whakapapa around 5pm. We drove all the way back to Auckland that night without difficulty. All in all, another excellent tramp.