Tuesday, 12 November 2019

The Power Of Collaborative Debugging

An under-appreciated problem with existing debuggers is that they lack first-class support for collaboration. In large projects a debugging session will often cross module boundaries into code the developer doesn't understand, making collaboration extremely valuable, but developers can only collaborate using generic methods such as screen-sharing, physical co-location, copy-and-paste of terminal transcripts, etc. We recognized the importance of collaboration, and Pernosco's cloud architecture makes such features relatively easy to implement, so we built some in.

The most important feature is just that any user can start a debugging session for any recorded execution, given the correct URL (modulo authorization). We increase the power of URL sharing by encoding in the URL the current moment and stack frame, so you can copy your current URL, paste it into chat or email, and whoever clicks on it will jump directly to the moment you were looking at.

The Pernosco notebook takes collaboration to another level. Whenever you take a navigation action in the Pernosco UI, we tentatively record the destination moment in the notebook with a snippet describing how you got there, which you can persist just by clicking on it. You can annotate these snippets with arbitrary text, and clicking on a snippet will return to that moment. Many developers already record their progress by taking notes during debugging sessions (I remember using Borland Sidekick for this when I was a kid!); the Pernosco notebook makes this much more convenient. Our users find that the notebook is great for mitigating the "help, I'm lost in a vast information space" problem that affects Pernosco users as well as users of traditional debuggers (sometimes more so in Pernosco, because it enables higher velocity through that space). Of course the notebook persists indefinitely and is shared between all users and sessions for the same recording, so you have a permanent record of what you discovered that your colleagues can also explore and add to.

Our users are discovering that these features unlock new workflows. A developer can explore a bug, recording what they've learned in the code they understand, then upon reaching unknown code forward the debugging session to a more knowledgeable developer for further investigation — or perhaps just to quickly confirm a hypothesis. We find that, perhaps unexpectedly, Pernosco can be most effective at saving the time of your most senior developers because it's so much easier to leverage the debugging work already done by other developers.

Collaboration via Pernosco is beneficial not just to developers but to anyone who can reproduce bugs and upload them to Pernosco. Our users are discovering that if you want a developer to look into a bug you care about, submitting it to Pernosco yourself and sending them a link makes it much more likely they will oblige — if it only takes a minute or two to start poking around, why not?

Extending this idea, Pernosco makes it convenient to separate reproducing a bug from debugging a bug. It's no problem to have QA staff reproduce bugs and submit them to Pernosco, then hand Pernosco URLs to developers for diagnosis. Developers can stop wasting their time trying to replicate the "steps to reproduce" (and often failing!) and staff can focus on what they're good at. I think this could be transformative for many organizations.

Thursday, 7 November 2019

Omniscient Printf Debugging In Pernosco

Pernosco supports querying for the execution of specific functions and the execution of specific source lines. These resemble setting breakpoints on functions or source lines in a traditional debugger. Traditional debuggers usually let you filter breakpoints using condition expressions, and it's natural and useful to extend that to Pernosco's execution views, so we did. In traditional debuggers you can get the debugger to print the values of specified expressions when a breakpoint is hit, and that would also be useful in Pernosco, so we added that too.
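
For comparison, here's roughly what that looks like in gdb (a sketch; the function and expression names are made up): a breakpoint filtered by a condition expression, with commands to print a value at each hit.

(gdb) break process_frame if frame_count > 10
(gdb) commands
> print frame->size
> continue
> end

Pernosco's execution views are the omniscient analogue: the same kind of condition and print-expressions, but evaluated across every execution at once rather than stopping at each hit.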

These features strongly benefit from Pernosco's omniscient database, because we can evaluate expressions at different points in time — potentially in parallel — by consulting the database instead of having to reexecute the program.

These features are relatively new and we don't have much user experience with them yet, but I'm excited about them because while they're simple and easily understood, they open the door to "query-based debugging" strategies and endless possibilities for enhancing the debugger with richer query features.

Another reason I'm excited is that together they let you apply "printf-debugging" strategies in Pernosco: click on a source line, and add some print-expressions and optionally a condition-expression to the "line executions" view. I believe that in most cases where people are used to using printf-debugging, Pernosco enables much more direct approaches and ultimately people should let go of those old habits. However, in some situations some quick logging may still be the fastest way to figure out what's going on, and people will take time to learn new strategies, so Pernosco is great for printf-debugging: no rebuilding, and not even any reexecution, just instant(ish) results.

Monday, 4 November 2019

The BBC's "War Of The Worlds"

Very light spoilers ahead.

I had hopes for this show. I liked the book (OK, when I read it >30 years ago). I love sci-fi. I like historical fiction. I was hoping for Downton Abbey meets Independence Day. Unfortunately I think this show was, as we say in NZ, "a bit average".

I really liked the characters reacting semi-realistically to terror and horror. It always bothers me that in fiction normal people plunge into traumatic circumstances, scream a bit, then get over it in time for the next scene. This War Of The Worlds takes time to show characters freaking out, resting and consoling one another, but not quite getting it all back together. Overall I thought the acting was well done.

I think the pacing and editing were poor. Some parts were slow, but other parts (especially in the first half) lurch from scene to scene so quickly it feels like important scenes were cut. It was hard to work out what was going on geographically.

Some aspects seemed pointlessly complicated or confusing, e.g. the spinning ball weapon.

Call me old-fashioned, but when a man abandons his wife I am not, by default, sympathetic to him, so I spent most of the show thinking our male protagonist is kind of a bad guy, when I'm clearly supposed to be siding with him against closed-minded society. I even felt a bit vindicated when towards the end his lover Amy wonders if they did the right thing. At least for a change the Christian-esque character was only a fool, not a psychopath, so thank God for small mercies.

I guess I'm still waiting for the perfect period War Of The Worlds adaptation.

Saturday, 2 November 2019

Explaining Dataflow In Pernosco

Tracing dataflow backwards in time is an rr superpower. rr users find it incredibly useful to set hardware data watchpoints on memory locations of interest and reverse-continue to find where those values were changed. Pernosco takes this superpower up a level with its dataflow pane (see the demo there).
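
In gdb-on-rr terms, that workflow is just (a sketch; the watched expression is made up):

(gdb) watch -l frame->size
(gdb) reverse-continue

Execution runs backwards and stops at the most recent write to that memory location.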

From the user's point of view, it's pretty simple: you click on a value and Pernosco shows you where it came from. However, there is more going on here than meets the eye. Often you find that the last modification to memory is not what you're looking for; that the value was computed somewhere and then copied, perhaps many times, until it reached the memory location you're inspecting. This is especially true in move-heavy Rust and C++ code. Pernosco detects copying through memory and registers and follows dataflow backwards through them, producing an explanation comprising multiple steps, any of which the user can inspect just by clicking on them. Thanks to omniscience, this is all very fast. (Jeff Muizelaar implemented something similar by scripting gdb and rr, which partially inspired us. Our infrastructure is a lot more powerful than what he had to work with.)
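
To make that concrete, here's a contrived Rust sketch (all names invented) of the kind of copy chain involved:

fn compute() -> u64 {
    6 * 7 // the value originates here, in arithmetic
}

fn main() {
    let a = compute(); // the return value is copied into `a`
    let b = a;         // copied again into `b`
    let v = vec![b];   // and again, into heap memory
    println!("{}", v[0]);
}

Clicking on the printed value shouldn't just show the store into the Vec's heap storage; the explanation walks back through b and a to the arithmetic in compute(), and you can inspect any step along the way.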

Pernosco explanations terminate when you reach a point where a value was derived from something other than a CPU copy: e.g. an immediate value, I/O, or arithmetic. There's no particular reason why we need to stop there! For example, there is obviously scope to extend these explanations through arithmetic, to explore more general dataflow DAGs, though intelligible visualization would become more difficult.

Pernosco's dataflow explanations differ from what you get with gdb and rr in an interesting way: gdb deliberately ignores idempotent writes, i.e. writes to memory that do not actually change the value. We thought hard about this and decided that Pernosco should not ignore them. Consider a trivial example:

x = 0;
y = 0;
x = y;

If you set a watchpoint on x at the end and reverse-continue, gdb+rr will break on x = 0. We think this is generally not what you want, so a Pernosco explanation for x at the end will show x = y and y = 0. I don't know why gdb behaves this way, but I suspect it's because gdb watchpoints are sometimes implemented by evaluating the watched expression over time and noting when the value changes; since that can't detect idempotent writes, perhaps hardware watchpoints were made to ignore idempotent writes for consistency.

An interesting observation about our dataflow explanations is that although the semantics are actually quite subtle, even potentially confusing once you dig into them (there are additional subtleties I haven't gone into here!), users don't seem to complain about that. I'm optimistic that the abstraction we provide matches user intuitions closely enough that they skate over the complexity — which I think would be a pretty good result.

(One of my favourite moments with rr was when a Mozilla developer called a Firefox function during an rr replay and it printed what they expected it to print. They were about to move on, but then did a double-take, saying "Errrr ... what just happened?" Features that users take for granted but are actually mind-boggling are the best features.)

Thursday, 31 October 2019

Improving Debugging Workflow With Pernosco

One of the key challenges for debuggers is that the traditional interactive debugging workflow — running your program interactively and starting it under the debugger or connecting to it once it's running, and pausing it to inspect its state — doesn't work well for a lot of people anymore. That workflow isn't convenient when the application normally doesn't run locally — e.g. because testing more often happens in CI, or on a phone, or the code you care about runs as part of a big distributed system. It also falls down when pausing the debuggee breaks the system. As software has increasingly moved to the cloud and mobile platforms, this has become a bigger deal and it's no wonder use of interactive debugging has waned. "Remote debugging" helps a bit, but it tends to be painful and although it can bridge gaps between machines, it doesn't bridge gaps in time.

We've published a couple of documents on how Pernosco tackles this, in particular how Pernosco integrates with CI and how Pernosco supports uploads from developers and QA (manual and automatic). A big part of the solution is just record-and-replay (with rr in our case). Being able to record execution on one machine, without stopping the application, and replay execution on another machine at another time, enables a lot of new workflows that mitigate the above problems. However Pernosco goes further in some important ways.

One issue is that just being able to replay execution isn't enough; we also want a good debugging experience during the replay. This means we need to capture compiled debuginfo, source code and other relevant information that aren't strictly necessary for the replay. In many cases that data isn't even available at the recording site, but it might be available somewhere (e.g. a symbol server or build artifact archive) for us to get later. So our debugging infrastructure has to support collecting information at the recording site, harvesting it from various sources later, and actually using it during the debugging session. This is not at all trivial, and Pernosco has a lot of code to handle this sort of thing, some of which needs to be customized for specific customers. For example, Pernosco identifies Firefox binaries built by Mozilla CI and knows how to locate the relevant symbols and sources from Mozilla's archives. For developer and QA-submitted recordings, Pernosco examines the trace to locate relevant debuginfo and source code and upload them. For source code hosted in well-known public repositories (e.g. mozilla-central or Github), we minimize overhead by uploading only local changes and having our debugger client fetch the public changes from the public repository at debugging time.

Note that rr on its own provides trace portability, but debugging ported traces is tricky. With rr pack and rr record --disable-cpuid-features, it is generally possible to create rr recordings that can be replayed on other machines. However, when you replay with gdb, locating symbols and source files is problematic when the replay machine's filesystem does not exactly match the recording machine's. For example, when gdb sees the shared-library loader load /home/roc/libfoo.so, that file might not be present at that location on the replay machine (or worse, it might be a different version), so gdb won't load the right symbols. You can try to work around this by populating a "sysroot" directory with the relevant files, copied and renamed from the trace, but figuring out which trace files need to go where is hard (because e.g. it depends on the symlinks present on the recording machine, which rr doesn't capture in the recording, and it's not even clear how it could).
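
For reference, producing and replaying a portable trace looks something like this (the trace path is rr's default location; the CPUID feature mask is elided):

# On the recording machine:
rr record --disable-cpuid-features=... ./myapp
rr pack ~/.local/share/rr/myapp-0   # copy the files the trace depends on into the trace itself
# Copy the trace directory to the replay machine, then:
rr replay ~/.local/share/rr/myapp-0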

Another important feature for enabling new workflows is just having a cloud-based Web client. We want to minimize the barrier to getting into a debugging session, and it's hard to think of an easier way than publishing a link which the user clicks on to enter a specific debugging session — no installation, no configuration. Those links can be published wherever you already notify users about test failures.

One thing I'm really excited about is that Pernosco enables splitting failure reproduction from debugging. Traditionally, developers had to reproduce a bug locally when they wanted to use an interactive debugger to debug it. Pernosco lets you delegate the reproduction step to other people (or automation). For example, when QA staff find a bug, instead of writing down the steps to reproduce to send to a developer (and inevitably having a back-and-forth discussion about exactly what's required to reproduce the bug, etc), QA can upload a recording to Pernosco and pass the link to the developer. This saves time and money — especially when QA staff are cheaper and/or more scalable than your developer team.

Friday, 25 October 2019

Auckland Half Marathon 2019

I ran the Auckland Half Marathon last Sunday. My time was 1:46:51, a little slower than last year. I didn't quite have the mental endurance I guess. As always I hope I'll do better next year, though I am getting older...

As usual, I ran barefoot. This year I had climbing tape again, which definitely makes it a bit easier on my feet at this speed.

People ask why I run barefoot. I didn't start running until I was about 35, and I started running on a beach where I'm usually barefoot. After I got used to that, it never felt right to run in shoes ... they're just weight on my feet. To be honest, I also like being a bit eccentric. I've never had any serious issues with injuries so at this point, I don't feel like changing anything. I did try wearing Vibram 5-Finger foot-gloves for my first half-marathon, but my feet got all sweaty and were still sore at the end so they don't seem to help me much. Sticking small patches of climbing tape to the hardest-wearing parts of my feet (the balls, and my second toes) is plenty of protection when running fast. When running slower (20K training runs around 2:10) I don't even need that.

Pernosco Demo Video

Over the last few years we have kept our work on the Pernosco debugger mostly under wraps, but finally it's time to show the world what we've been working on! So, without further ado, here's an introductory demo video showing Pernosco debugging a real-life bug:

This demo is based on a great gdb tutorial created by Brendan Gregg. If you read his blog post, you'll get more background and be able to compare Pernosco to the gdb experience.

Pernosco makes developers more productive by providing scalable omniscient debugging — accelerating existing debugging strategies and enabling entirely new strategies and features — and by integrating that experience into cloud-based workflows. The latter includes capturing test failures occurring in CI so developers can jump into a debugging session with one click on a URL, separating failure reproduction from debugging so QA staff can record test failures and send debugger URLs to developers, and letting developers collaborate on debugging sessions.

Over the next few weeks we plan to say a lot more about Pernosco and how it benefits software developers, including a detailed breakdown of its approach and features. To see those updates, follow @_pernosco_ or me on Twitter. We're opening up now because we feel ready to serve more customers and we're keen to talk to people who think they might benefit from Pernosco; if that's you, get in touch. (Full disclosure: Pernosco uses rr so for now we're limited to x86-64 Linux, and statically compiled languages like C/C++/Rust.)

Monday, 7 October 2019

Food In Auckland 2019

Some places I like these days that are (mostly) pretty close to my house:

  • Jade Town (Dominion Road): Uighur food. A bit Chinese, a bit Indian, but not much like either.
  • Cypress (Dominion Road): New-ish yum cha (dim sum) place. Some interesting dishes I'd never seen before.
  • Viet Kitchen (Dominion Road): More authentic, bit more expensive Vietnamese.
  • Hot And Spicy Pot (Dominion Road, city): tasty pay-per-weight stir-fry.
  • Barilla (Dominion Road): Decent cheap dumplings and other Chinese food.
  • Tombo (Newmarket): Quality Korean-Japanese BBQ/hotpot buffet.
  • Hansan (Newmarket, city): Still a favourite cheap Vietnamese-Chinese place.
  • Master Dumpling (Newmarket): good dumplings, great sweet potato in melted sugar.
  • Faro (Newmarket): Nice Korean lunch combo.
  • Momotea (Newmarket): Taiwanese-style cafe with strong frozen drink selection.
  • Selera (Newmarket): Malaysian cafe food.
  • Sun World (Newmarket): Reliably good yum cha.
  • Kimchi Project (city): Kimchi fusion-esque place. Kimchi carbonara is great.
  • BBQ Duck Cafe (city): Good value tasty Hong Kong-style cafe.
  • Nol Bu Ne (city): Great value Korean food.
  • Uncle Man's (city): Malaysian food — best roti around IMHO.
  • Kiin (Mt Eden): Great cheap Thai food.
  • Altar (Mt Eden): Good value sandwich+fries lunch special.
  • Corner Burger (Mt Eden): Great burger+fries+shake combo — the shakes are especially good.
  • Gangnam Style (Takapuna): Korean BBQ buffet. The worst thing about this place is the name.
  • Petra Schwarma (Kingsland): Really nice Jordanian food.
  • Mama Rich (Greenlane): Good Malaysian cafe food, a bit cheaper than Selera.
  • Chocolate Boutique (Parnell): Not exactly a restaurant but for dessert, a great option.

Thursday, 3 October 2019

Pouakai Circuit

Last weekend I went with a couple of people down to Mt Taranaki to tramp the Pouakai Circuit, planning to do it leisurely over three days. We drove down on Saturday morning (about 5 hours from Auckland), had a fine lunch at the Windsor Cafe in Inglewood, and hiked from a carpark up to Holly Hut via the Kokowai Track. (Normally you'd take the Holly Track from the Egmont Visitor's Centre but that track is currently closed due to a slip, though it should be open soon.) The posted time was 4.5 hours but we did it in 3 hours; my companions are very fast, and I'm not too bad myself though the steep uphill with steps made me struggle a bit! The weather was great and we had some excellent views of Mt Taranaki along the way. It's a beautiful mountain with snow cover on a clear winter's day.

Holly Hut is a fine hut. Some generous donors installed LED lighting, which is great during the winter when the days are short. It was the first Saturday of the school holidays but there were just two other people there. Late Saturday afternoon we did the ~1 hour return side trip to Bell Falls, which were beautiful. The weather had gotten cloudy, drizzly and misty and the daylight was fading, but that added to the effect!

On Sunday morning we had a bit of a late start because we thought we'd spend Sunday night at Pouakai Hut, which is nominally 2.5 hours walk away. The track crosses Ahukawakawa Swamp, in a basin between the old Pouakai volcano and the slopes of Mt Taranaki — most picturesque, especially in drizzly foggy weather. We had been hoping to do the side track to the Pouakai summit on the way to the hut, but bursts of rain and the likelihood of seeing nothing but cloud dissuaded us. We got to Pouakai Hut in about 1.5 hours and had an early lunch, then had to think hard: with phone reception at the hut, we got a new weather forecast for Monday, which promised heavy rain, wind, lower temperatures and possible thunderstorms — even less promising than the rain currently beating against the hut. We decided to avoid that weather by finishing the circuit on Sunday and driving back to Auckland late. That plan worked out pretty well; it didn't rain much during the rest of Sunday and although our fast pace left us all feeling a bit tired when we got back to the carpark, we had fun and got back to Auckland before midnight. All in all, a short trip with lots of driving and lots of fast walking, but still a great chance to appreciate God's creation.

I'm looking forward to doing this trip again when the weather's better and the Holly Track is open! It's quite accessible from Auckland, and we could do it with less fit people if we take more time. We could also save some energy by starting at the Egmont Visitor's Centre at the top end of the road, then taking the Holly Track to Holly Hut, then returning via Pouakai Hut to the lower carpark and sending some fitter people up the road to fetch the car.

Wednesday, 2 October 2019

Is Richard Dawkins A Moral Realist?

An interview with Richard Dawkins in the New Scientist (21 September 2019) contains this exchange:

Graham Lawton: Another chapter in your book looks at progress in moral issues such as gender and racial equality, and you present a very upbeat picture. Do you worry that progress has gone into reverse?
Richard Dawkins: No. It's important to take the long view. I think there's absolutely no doubt that we're getting better as the centuries go by. The moral standards of a 21st century person are significantly different from those of a 20th century person.

Dawkins here seems to assume there are objective moral standards against which human moral opinions can be measured, i.e. moral realism. This surprised me because Dawkins is such a strong advocate for naturalism and it has always seemed obvious to me (including before I became a Christian) that naturalism is incompatible with moral realism — the famous is-ought gap. Sean Carroll, for example, has written about this much better than I could. In fact, Dawkins has apparently written (in River Out of Eden, quoted here):

The universe we observe has precisely the properties we should expect if there is at the bottom, no design, no purpose, no evil and no good.
so unless he's changed his mind about that, it seems Dawkins at least professes to be a moral anti-realist.

This confusion illustrates why I've always found moral anti-realism so deeply unsatisfactory in real life. One can argue that there are no objective moral facts, and that moral claims simply express opinions shaped by evolution and culture etc, and even try to believe those things — but the temptation to think, speak and act as moral realists seems practically irresistible ... so much so that even the most prominent moral anti-realists consistently yield to it, and hardly anyone even notices.

Addendum Arguably the New Scientist quote could be interpreted in other ways, e.g. that by "getting better" Dawkins meant more internally consistent, or more in accordance with his personal subjective moral opinions. However I think it's obvious most people would interpret it as "objectively morally better" and thus if Dawkins meant something else, he needs to work a lot harder at eliminating such misleading language.

Thursday, 12 September 2019

Dissatisfied With Docker

I am not satisfied with Docker.

Untrusted users should be able to run their own container instances. Running a basic container instance means forking a process, putting it in the right kernel namespaces, and setting up mounts and virtual network interfaces, and those can all be done without privileges. Unfortunately, in Docker, access to the Docker daemon is tantamount to being root. Fixing this would improve both convenience and security.
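
For example, util-linux's unshare can already set up the core namespaces as an unprivileged user (a minimal sketch):

# New user, PID, mount and network namespaces, no root required;
# --map-root-user maps the invoking user to root inside the namespace.
unshare --user --map-root-user --fork --pid --mount-proc --net sh -c 'id; ls /proc'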

In fact, a global system daemon should not be needed. Either users should be able to run their own daemons or container management should avoid having a daemon at all, by storing container state in a shared database.

Docker container builds are too slow. Installing a container image requires piping massive amounts of image data to the daemon over a socket, which is ridiculous. This could be avoided by passing a file descriptor to the container manager ... or you might even get rid of container image management and start containers by pointing directly to the files comprising the image.

Docker container instances start too slowly. It shouldn't take seconds to start or stop a simple small container. The system calls required to start a simple container run in milliseconds.

No doubt these observations are old news, so I assume there are better container implementations out there. Which one do I want?

Wednesday, 10 July 2019

Cape Brett 2019

It's the school holidays so I took one of my children and one of my friends (a young adult) for a tramping trip to Cape Brett Hut on Monday and Tuesday. It's nominally an eight-hour walk each way. The hut used to be a lighthouse-keeper's house and is in a spectacular setting right at the end of the Cape, on a grassy slope with the ocean on three sides. The walking track is through lovely bush with great views north to the Bay of Islands and south along the Tutukaka coast. It's an excellent trip, just a few hours drive from Auckland and because it's up north and coastal, it's good to do during the winter. The hut is almost never fully booked, perhaps because the walk is quite long and arduous (compared to other walks in the area).

I went there eight years ago with a group. That time we took a water taxi to the hut and a water taxi carried our packs out while we walked back along the track. This time I wanted to do it "properly": walking with our supplies both ways. The nature of the group was also quite different; eight years ago the group was much larger and with a greater variation of fitness, while this time there was just the three of us and we're all pretty fit (but I'm eight years older than last time!).

We stayed at Whangaruru Beachfront Camp on Sunday night so we could get to the trailhead early on Monday and start walking around 7:45am, not long after sunrise at 7:30am. The days are short and I wanted to reach the hut with plenty of daylight to enjoy the destination. We ended up being pretty fast and got to the hut in about six hours, just before 2pm! It's a tough track, with lots of steep uphills and downhills, and we all felt a fair bit of soreness in our legs. Nevertheless I was pretty pleased with our speed and my ability to keep up with the younger people. The weather was great both days — mild, mostly cloudy, and a light breeze in places — and the scenery was brilliant. We had time to rest in the sun and explore the end of the Cape before it got dark, then we cooked a tasty meal.

Around 8pm, when it had been fully dark for a while, a couple arrived at the hut. They told us they'd walked to the hut in just five hours, most of that in the dark. That deflated my pride a bit!

During the night the skies cleared and the moon set, giving an excellent view of the stars through the windows next to my bunk.

On Tuesday we again got up pretty early, around 7am. The sun hadn't risen but we'd already been in our bunks for ten hours. We got to see a lovely sunrise over the ocean. We left the hut at 8:20am and this time finished the walk out in just five hours and twenty minutes. Perhaps surprisingly, I felt a lot better on the second day than the first, and so did the others. Our packs were a little lighter, but I think the previous day's workout had made us all a bit fitter. I found it exhilarating grinding up steep hills without pausing and then stretching the legs for a fast walk along the flat or slightly downhill, and I also felt more agile on the steep downhill sections.

This was a great trip and I really feel thankful to God for the privilege of being able to do it. I look forward to doing it again with other people; the walk isn't for everyone, but there's always the water taxi option.

Monday, 1 July 2019

Auckland Rust Meetup: "Building An Omniscient Debugger In Rust"

I gave this month's Auckland Rust Meetup talk: a very high-level overview of Pernosco's architecture and then a dive into some superficial metrics about the project, comments on the third-party crates we use, and some thoughts about the pros and cons of Rust for this project. I apologise for the slides being thrown together in a hurry, and they're probably a bit hard to follow without my commentary.

Thursday, 20 June 2019

Stack Write Traffic In Firefox Binaries

For people who like this sort of thing...

I became interested in how much CPU memory write traffic corresponds to "stack writes". For x86-64 this roughly corresponds to writes that use RSP or RBP as a base register (including implicitly via PUSH/CALL). I thought I had pretty good intuitions about x86 machine code, but the results surprised me.
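
To make "stack writes" concrete, here's roughly what gcc -O0 emits for a toy C function (illustrative output, not taken from the Firefox build); every store in it goes through RSP or RBP:

int add(int a, int b) { int sum = a + b; return sum; }

add:
    push   %rbp                # implicit write through RSP
    mov    %rsp,%rbp
    mov    %edi,-0x14(%rbp)    # spill a: RBP-based write
    mov    %esi,-0x18(%rbp)    # spill b: RBP-based write
    mov    -0x14(%rbp),%edx
    mov    -0x18(%rbp),%eax
    add    %edx,%eax
    mov    %eax,-0x4(%rbp)     # store sum: RBP-based write
    mov    -0x4(%rbp),%eax
    pop    %rbp
    ret

Any call to add also implicitly pushes the return address through RSP.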

In a Firefox debug build running a (non-media) DOM test (including browser startup/rendering/shutdown), Linux x86-64, non-optimized (in an rr recording, though that shouldn't matter):

Base register      Fraction of written bytes
RAX                 0.40%
RCX                 0.32%
RDX                 0.31%
RBX                 0.01%
RSP                53.48%
RBP                44.12%
RSI                 0.50%
RDI                 0.58%
R8                  0.01%
R9                  0.00%
R10                 0.00%
R11                 0.00%
R12                 0.00%
R13                 0.00%
R14                 0.00%
R15                 0.00%
RIP                 0.00%
RDI (MOVS/STOS)     0.25%
Other               0.00%
RSP/RBP            97.59%

Ooof! I expected stack writes to dominate, since non-opt Firefox builds have lots of trivial function calls and local variables live on the stack, but 97.6% is a lot more dominant than I expected.

You would expect optimized builds to be much less stack-dominated because trivial functions have been inlined and local variables should mostly be in registers. So here's a Firefox optimized build:

Base register      Fraction of written bytes
RAX                 1.23%
RCX                 0.78%
RDX                 0.36%
RBX                 2.75%
RSP                75.30%
RBP                 8.34%
RSI                 0.98%
RDI                 4.07%
R8                  0.19%
R9                  0.06%
R10                 0.04%
R11                 0.03%
R12                 0.40%
R13                 0.30%
R14                 1.13%
R15                 0.36%
RIP                 0.14%
RDI (MOVS/STOS)     3.51%
Other               0.03%
RSP/RBP            83.64%

Definitely less stack-dominated than for non-opt builds — but still very stack-dominated! And of course this is not counting indirect writes to the stack, e.g. to out-parameters via pointers held in general-purpose registers. (Note that opt builds could use RBP for non-stack purposes, but Firefox builds with -fno-omit-frame-pointer so only in leaf functions, and even then, probably not.)

It would be interesting to compare the absolute number of written bytes between opt and non-opt builds but I don't have traces running the same test immediately at hand. Non-opt builds certainly do a lot more writes.

Tuesday, 4 June 2019

Winter Tramp: Waihohonu Hut To Tama Lakes

New Zealand's Tongariro National Park is one of my favourite places. We had a three-day weekend so I drove three friends and family down for a two-night stay at Waihohonu Hut, surely the grandest public hut in New Zealand, and we enjoyed the park in a wintry setting ... an interesting change from our previous visits.

We left Auckland around 7am on Saturday to avoid traffic — often hordes of people leave Auckland for long weekends — but there was no congestion. After stopping for lunch in Turangi we reached the trailhead on the Desert Road shortly before 1pm. The wind was cold and there was thick low-lying cloud, but it wasn't snowing ... yet. From there the walk to Waihohonu Hut is easy in less than two hours, on a good quality track with a very gentle upward slope. Much of the route is very exposed but the wind wasn't as high as forecast and we were well equipped. Towards the end it started snowing gently, but that was fun and we got to the hut in high spirits before 3pm. The hut is well insulated and other trampers had arrived before us and got the fire going, and the LED lighting was on, so it was cosy. We talked to them, made popcorn, watched the snow fall, played some card games and enjoyed the rest of the afternoon and evening as more trampers trickled in.

I had wondered how full the hut would get. There are 28 bunks, but it's off-season so they can't be booked, and given the public holiday potentially a lot of people could have turned up. As it happened about 35 people ended up there on Saturday night — many people tramping in from the Desert Road just to visit Waihohonu, like us, but also quite a few doing round trips from Whakapapa or even doing the Tongariro Northern Circuit (which requires alpine skills at this time of year). People grabbed bunks as they arrived, and the rest slept on spare mattresses in the common room, which was actually a fine option. The only problem with sleeping in the common room is people staying up late and (probably other) people coming in early for breakfast. Even though it was technically overfull, Waihohonu Hut's common areas are so spacious that at no time did it ever feel crowded.

On Sunday morning there was a bit more snow on the ground, some low cloud and light snow falling. I was hoping to walk west from the hut to the Tama Saddle, which separates Mt Ruapehu to the south from Mts Tongariro and Ngauruhoe to the north, and visit the Lower Tama Lake just north of the saddle. It was unclear what conditions were going to be like but the forecast had predicted snow would stop falling in the morning, and we were well equipped, so we decided to give it a go. The expected walking time was about six and a half hours and we left before 9am so we had plenty of time. In the end it worked out very well. The cloud lifted quickly, except for some tufts around Ruapehu, and the snow did stop falling, so we had stunning views of the mountains the whole day. We were the first walkers heading west that day so we walked through fresh snow, broke the ice of frozen puddles and streams, and saw the footprints of rabbits and other animals, and relished the pristine wintry environment. It's the first time I've done a long-ish walk in the snow in the wilderness like this, and it was magnificent! I'm so grateful we had the chance to be there and that the weather turned out well.

As we got close to the saddle the snow was thicker, up to our knees in a few places, and the wind got stronger, and at the Lower Tama Lake it was quite cold indeed and blowing steadily from the east. I was a bit worried about having to walk back into that wind, and there was still the possibility of a change in the weather, so even though we were ahead of schedule I decided after lunch above the lake we should head back to Waihohonu rather than carrying on up to Upper Tama Lake (where no doubt the views would have been even better, but the wind would have been even colder!). Interestingly though, we were far from alone; many people, mostly foreign tourists, had walked to the lakes from Whakapapa (on the western side of Ruapehu), a shorter walk, and even headed up the ridge to the upper lake. As it turned out, our walk back was pretty easy. The wind mostly died away and the sun even came out.

We got back to Waihohonu about 3:30pm and once again relaxed at the hut for the rest of the afternoon, catching up with the trampers who were staying both nights and meeting new arrivals. That night the hut was again overfull but only by a couple of people, and again that wasn't a problem.

This morning (Monday) the sky was completely clear, giving magnificent views of snow-covered Ngauruhoe and Ruapehu through the hut's huge picture windows. A thick frost on the ground combined with the snow to form a delightfully crunchy surface for our walk back to the car park. I for one kept turning around to take in the incredible views. It was a very pleasant drive back in the sun through the heart of the North Island, but I can't wait to go tramping again!

Wednesday, 29 May 2019

A Few Comments On "Sparse Record And Replay With Controlled Scheduling"

This upcoming PLDI paper is cool. One thing I like about it is that it does a detailed comparison against rr, and a fair comparison too. The problem of reproducing race bugs using randomized scheduling in a record-and-replay setting is important, and the paper has interesting quantitative results.

It's unfortunate that the paper doesn't mention rr's chaos mode, which is our attempt to tackle roughly the same problem. It would be very interesting to compare chaos mode to the approach in this paper on the same or similar benchmarks.

I'm quite surprised that the PLDI reviewers accepted this paper. I don't mean that the paper is poor, because I think it's actually quite good. We submitted papers about rr to several conferences including PLDI (until USENIX ATC accepted it), and we consistently got quite strong negative review comments that it wasn't clear enough which programs rr would record and replay successfully, and what properties of the execution were guaranteed to be preserved during the replay. We described many steps we had to take to get applications to record efficiently in rr in practice, and many reviewers seemed to perceive rr as just a collection of hacks and thus not publishable. Yet it seems to me this "sparse replay" approach is considerably more vague than rr about what it can handle and what gets preserved during replay. I do not see any principled reason why the critical reviewers of our rr paper would not have criticised this paper even harder. I wonder what led to a different outcome.

Perhaps making the idea of "sparse replay" (i.e., record only some subset of behaviour that's necessary and sufficient for a particular application) a focus of the paper effectively lampshaded the problem, or just sufficiently reduced expectations by not claiming to be a general-purpose tool.

I also suspect it's partly just "luck of the draw" in reviewer assignment. It is an unfortunate fact that paper review outcomes can be pretty random. As both a submitter and reviewer, I've seen that scores from different reviewers often differ wildly — it's not uncommon for a paper to get both A and D reviews on an A-to-D scale. When a paper gets both A and D, it typically gets a lot more scrutiny from the review committee to reach a decision, but one should also expect that there are many (un)lucky papers that just happen to avoid a D reviewer or fail to connect with an A reviewer. Given how important publications are to many people (fortunately, not to me), it's not a great system. Though, like democracy, maybe it's better than the others.

Saturday, 25 May 2019

Microsoft's Azure Time-Travel Debugging

This looks pretty cool. The video is also instructive.

It's not totally clear to me how this works under the hood, but apparently they have hooked up the Nirvana TTD to the .NET runtime so that it will enable TTD recording of invocations of particular methods. That means you can inspect the control flow, the state of registers (i.e. local variables), and any memory read by the method or its callees, at any point during the method invocation. It's not clear what happens if you inspect memory outside the scope of the method (e.g. global variables) or if you inspect memory that was modified concurrently by other threads. Plus there are performance and other issues listed in the blog post.

This seems like a good idea but somewhat more limited than a full-fledged record-everything debugger like rr or WinDbg-TTD. I suspect they're pushing this limited-scope debugging as a way to reduce run-time overhead. Various people have told me that WinDbg-TTD has almost unusably high overhead for Firefox ... though other people have told me they found it tolerable for their work on Chrome, so data is mixed.

One interesting issue here is that if I was designing a Nirvana-style multithread-capable recorder for .NET applications — i.e., one that records all memory reads in some fashion via code instrumentation — I would try building it into the .NET VM itself, like Chronon for Java. That way you avoid recording stuff like GC (noted as a problem for this Azure debugger), and the JIT compiler can optimize your instrumentation. I guess Microsoft people were looking for a way to deploy TTD more widely and decided this was the best option. That would be reasonable, but it would be a "solution-driven" approach to the problem, which I have strong feelings about.

Monday, 20 May 2019

Don't Call Socially Conservative Politicial Parties "Christian"

There is talk about starting a new "Christian" (or "Christian values") political party in New Zealand. The party might be a good idea, but if it's really a "social conservative" party, don't call it "Christian".

Audrey Young writes:

The issues that would galvanise the party are the three big social issues before Parliament at present and likely to be so in election year as well: making abortions easier to get, legalising euthanasia, and legalising recreational cannabis.

None of those issues are specifically Christian. None of them are mentioned directly in the New Testament. I even think Christians can be for some version of all of them (though it makes sense to me that most Christians would oppose the first two at least). Therefore "social conservative" is a much more accurate label than "Christian" for a party focused on opposing those changes.

A truly Christian party's key issues would include reminding the voting public that we are all sinners against God, in need of repentance and forgiveness that comes through Jesus. The party would proclaim to voters "how hard it is for the rich to enter the kingdom of God" and warn against storing up treasures on earth instead of heaven. It would insist on policies that support "the least of these". It would find a way to denounce universally popular sins such as greed, gluttony and heterosexual extra-marital sex, and advocate policies that reduce their harm, while visibly observing Paul's dictum "What business is it of mine to judge those outside the church? Are you not to judge those inside?" A Christian party would follow Jesus' warning against "those who for a show make lengthy prayers" and downplay their own piety. It would put extraordinary emphasis on honouring the name of Christ by avoiding any sort of lies, corruption or scandal. Its members would show love for their enemies and not retaliate when attacked. If they fail in public, they would confess and repent in public.

That sounds pretty difficult, but it's what Jesus deserves from any party that claims his name.

I'm all for Christians being involved in politics and applying their Christian worldview to politics, if they can succeed without making moral compromises. But it's incredibly important that any Christian who publicly connects Christ with politics takes into account how that will shape unbelievers' view of Christianity. If they lead people to believe that Christianity is about being socially conservative and avoiding certain hot-button sins, with the gospel nowhere in sight, then they point people towards Hell and betray Jesus and his message.

Monday, 6 May 2019

Debugging Talk At Auckland Rust Meetup

I gave a talk about "debugging techniques for Rust" at tonight's Auckland Rust Meetup. There were many good questions and I had a good time. It wasn't recorded. Thanks to the organiser and sponsors!

I'm also going to give a talk at the next meetup in June!

Monday, 29 April 2019

Goodbye Mozilla IRC

I've been connected to Mozilla IRC for about 20 years. When I first started hanging out on Mozilla IRC I was a grad student at CMU. It's how I got to know a lot of Mozilla people. I was never an IRC op or power user, but when #mozilla was getting overwhelmed with browser user chat I was the one who created #developers. RIP.

I'll be sad to see it go, but I understand the decision. Technologies have best-before dates. I hope that Mozilla chooses a replacement that sucks less. I hope they don't choose Slack. Slack deliberately treats non-Chrome browsers as second-class — in particular, Slack Calls don't work in Firefox. That's obviously a problem for Mozilla users, and it would send a bad message if Mozilla says that sort of attitude is fine with them.

I look forward to finding out what the new venue is. I hope it will be friendly to non-Mozilla-staff and the community can move over more or less intact.

Friday, 26 April 2019

Update To rr Master To Debug Firefox Trunk

A few days ago Firefox started using LMDB (via rkv) to store some startup info. LMDB relies on file descriptor I/O being coherent with memory-maps in a way that rr didn't support, so people have had trouble debugging Firefox in rr, and Pernosco's CI test failure reproducer also broke. We have checked in a fix to rr master and are in the process of updating the Pernosco pipeline.

The issue is that LMDB opens a file, maps it into memory with MAP_SHARED, and then opens the file again and writes to it through the new file descriptor, and requires that the written data be immediately reflected in the shared memory mapping. (This behavior is not guaranteed by POSIX but is guaranteed by Linux.) rr needs to observe these writes and record the necessary memory changes, otherwise they won't happen during replay (because writes to files don't happen during replay) and replay will fail. rr already handled the case where the application writes to the file descriptor (technically, the file description) that was used to map the file — Chromium has needed this for a while. The LMDB case is harder to handle. To fix LMDB, whenever the application opens a file for writing, we have to check whether any shared mapping of that file exists and if so, mark that file description so writes to it have their shared-memory effects recorded. Unfortunately this adds overhead to writable file opens, but hopefully it doesn't matter much since in many workloads most file opens are read-only. (If it turns out to be a problem there are ways we can optimize further.)

While fixing this, we also added support for the case where the application opens a file (possibly multiple times with different file descriptions) and then creates a shared mapping of one of them. To handle that, when creating a shared mapping we have to scan all open files to see if any of them refer to the mapped file, and if so, mark them so the effects of their writes are recorded.
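
In outline, the pattern rr now handles is this (a minimal C sketch modeled on LMDB's behaviour, not LMDB's actual code; error handling omitted):

#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    // Assume data.mdb already exists and is at least one page long.
    // Map the file via one file description...
    int map_fd = open("data.mdb", O_RDWR);
    char *map = mmap(NULL, 4096, PROT_READ, MAP_SHARED, map_fd, 0);
    // ...then open the same file again and write through the new,
    // independent file description.
    int write_fd = open("data.mdb", O_RDWR);
    pwrite(write_fd, "hello", 5, 0);
    // Linux (but not POSIX) guarantees the write is immediately visible
    // through the mapping; rr must record that memory change, otherwise
    // replay (which doesn't perform file writes) would diverge.
    assert(memcmp(map, "hello", 5) == 0);
    return 0;
}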

Update Actually, at least this commit is required.

Thursday, 11 April 2019

Mysteriously Low Hanging Fruit: A Big Improvement To LLD For Rust Debug Builds

LLD is generally much faster than the GNU ld.bfd and ld.gold linkers, so you would think it has been pretty well optimised. You might then be surprised to discover that a 36-line patch dramatically speeds up linking of Rust debug builds while also substantially shrinking the generated binaries, both in simple examples and in large real-world projects.

The basic issue is that the modern approach to eliminating unused functions from linked libraries, --gc-sections, is not generally able to remove the DWARF debug info associated with the eliminated functions. With --gc-sections the compiler puts each function in its own independently linkable ELF section, and then the linker is responsible for selecting only the "reachable" functions to be linked into the final executable and discarding the rest. However, compilers are still putting the DWARF debug info into a single section per compilation unit, and linkers mostly treat debug info sections as indivisible black boxes, so those sections get copied into the final executable even if the functions they're providing debug info for have been discarded. My patch tackles the simplest case: when a compilation unit has had all its functions and data discarded, discard the debug info sections for that unit. Debug info could be shrunk a lot more if the linker was able to rewrite the DWARF sections to discard info for a subset of the functions in a compilation unit, but that would be a lot more work to implement (and would potentially involve performance tradeoffs). Even so, the results of my patch are good: for Pernosco, our "dist" binaries with debug info shrink from 2.9GB to 2.0GB.
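
For reference, here's the setup in question (a sketch with a made-up file name): the compiler splits each function into its own section, and the linker garbage-collects the unreachable ones.

# One ELF section per function/data item, so the linker can discard unused ones:
cc -c -g -ffunction-sections -fdata-sections foo.c
# --gc-sections drops unreachable .text.* sections; but each .debug_info
# section covers a whole compilation unit, so before this patch it was
# copied into the output even when most of its functions were discarded:
cc -Wl,--gc-sections foo.o -o foo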

Not only was the patch small, it was also pretty easy to implement. I went from never having looked at LLD to working code in an afternoon. So an interesting question is, why wasn't this done years ago? I can think of a few contributing reasons:

People just expect binaries with debug info to be bloated, and because those binaries are only used for debugging, few people (apart from those working on Linux distros) think it's worth spending much effort to shrink them.

C/C++ libraries that expect to be statically linked, especially common ones like glibc, don't rely on --gc-sections to discard unused functions. Instead, they split the library into many small compilation units, ideally one per independently usable function. This is extra work for library developers, but it solves the debug info problem. Rust developers don't (and really, can't) do this because rustc splits crates into compilation units in a way that isn't under the control of the developer. Less work for developers is good, so I don't think Rust should change this; tools need to keep up.

Big companies that contribute to LLD, with big projects that statically link third-party libraries, often "vendor" those libraries, copying the library source code into their big project and building it as part of that project. As part of that process, they would usually tweak the library to only build the parts their project uses, avoiding the problem.

There has been tension in the LLD community between doing the simple thing I did and doing something more difficult and complex involving DWARF rewriting, which would have greater returns. Perhaps my patch submission to some extent forced the issue.

Friday, 5 April 2019

Rust Discussion At IFIP WG2.4

I've spent this week at an IFIP WG2.4 meeting, where researchers share ideas and discuss topics in programming languages, analysis and software systems. The meeting has been in Paihia in the Bay of Islands, so very conveniently located for me. My main talk was about Pernosco, but I also took the opportunity to introduce people to Rust and the very significant advances in programming language technology that it delivers. My slides are rudimentary because I wanted to minimize my talking and leave plenty of time for questions and discussion. I think it went pretty well. The main point I wanted researchers to internalize is that Rust provides a lot of structure that could potentially be exploited by static analysis and other kinds of tools, and that we should expect future systems programming languages to at least meet the bar set by Rust, so forward-looking research should try to exploit these properties. I think Rust's tight control of aliasing is especially important because aliasing is still such a problematic issue for all kinds of static analysis techniques. The audience seemed receptive.

One person asked me whether they should be teaching Rust instead of C for their "systems programming" courses. I definitely think so. I wouldn't teach Rust as a first programming language, but for a more advanced course focusing on systems programming I think Rust would be a great way to force people to think about issues such as lifetimes — issues that C programmers should grapple with but can often get away with sloppy handling of in classroom exercises.

Saturday, 30 March 2019

Marama Davidson And The Truth About Auckland's History

On March 24 I sent the following email to Marama Davidson's parliamentary office email address.

Subject: Question about Ms Davidson's speech at the Auckland peace rally on March 16

I was at the rally. During her speech Ms Davidson mentioned that the very land we were standing on (Aotea Square) was taken from Māori by European settlers by force. However Wikipedia says

By 1840 Te Kawau had become the paramount chief of Ngāti Whātua. Cautious of reprisals from the Ngāpuhi defeated at Matakitaki, Te Kawau found it most convenient to offer Governor Hobson land around the present central city.

https://en.wikipedia.org/wiki/History_of_Auckland

Can you clarify Ms Davidson's statement and/or provide a source for her version?

Sincerely,
Robert O'Callahan

I haven't received a response. Te Ara agrees with Wikipedia.

I'd genuinely like to know the truth here. It would be disappointing if Davidson lied — blithely accepting "all politicians lie" is part of the path to electing people like Donald Trump. On the other hand if the official histories are wrong, that would also be disappointing and they need to be corrected.

Monday, 18 February 2019

Banning Huawei Is The Right Decision

If China's dictator-for-life Xi Jinping orders Huawei to support Chinese government spying, it's impossible to imagine Huawei resisting. The Chinese government flaunts its ability to detain anyone at any time for any reason.

The argument "no-one has caught Huawei doing anything wrong" (other than stealing technology) misses the point; the concern is about what they might do in the future.

The idea that you can buy equipment from Huawei today and protect it from future hijacking doesn't work. It will need to be maintained and upgraded by Huawei, which will let them add backdoors in the future even if there aren't any (accidental or deliberate) today.

Don't imagine you can inspect their systems to find backdoors. Skilled engineers can insert practically undetectable backdoors at many different levels of a computer system.

These same issues apply to other Chinese technology companies.

These same issues apply to technology companies from other countries, but New Zealand should worry less about technology companies from Western powers. Almost every developed country has much greater rule of law than China has; for example US spy agencies can force tech companies to cooperate using National Security Letters, but those can be challenged in court. We also have to weigh how much we fear the influence of different governments. I think New Zealand should worry a lot less about historically friendly democracies, flawed as they are, than about a ruthless tyranny like the Chinese government with a history of offensive cyberwarfare.

New Zealand and other countries may pay an economic price for such decisions, and I can see scenarios where the Chinese government decides to make an example of us to try to frighten other nations into line. Hopefully that won't happen and we won't be forced to choose between friendship with China and digital sovereignty — but if we have to pick one, we'd better pick digital sovereignty.

It would be easier for Western countries to take the right stand if the US President didn't fawn over dictators, spit on traditional US allies, and impose tariffs on us for no good reason.

Monday, 11 February 2019

Rust's Affine Types Catch An Interesting Bug

A function synchronously downloads a resource from Amazon S3 using a single GetObject request. I want it to automatically retry the download if there's a network error. A wrapper function aws_retry_sync based on futures-retry takes a closure and automatically reruns it if necessary, so the new code looks like this:

pub fn s3_download<W: Write>(
    client: S3Client,
    bucket: String,
    key: String,
    out: W,
) -> io::Result<()> {
    aws_retry_sync(move || {
        let response = client.get_object(...).sync()?;
        if let Some(body) = response.body {
            body.fold(out, |mut out, bytes: Vec<u8>| -> io::Result<W> {
                out.write_all(&bytes)?;
                Ok(out)
            })
            .wait()?;
        }
        Ok(())
    })
}

This fails to compile for an excellent reason:

error[E0507]: cannot move out of captured variable in an `FnMut` closure
   --> aws-utils/src/lib.rs:194:23
    |
185 |     out: W,
    |     --- captured outer variable
...
194 |             body.fold(out, |mut out, bytes: Vec<u8>| -> io::Result<W> {
    |                       ^^^ cannot move out of captured variable in an `FnMut` closure

I.e., the closure can execute more than once, but each time it executes it wants to take ownership of out. Imagine if this compiled ... then if the closure runs once and writes N bytes to out, then the network connection fails and we retry successfully, we would write those N bytes to out again followed by the rest of the data. This would be a subtle and hard to reproduce error.

A retry closure should not have side effects for failed operations and should not, therefore, take ownership of out at all. Instead it should capture data to a buffer which we'll write to out if and only if the entire fetch succeeds. (For large S3 downloads you need parallel downloads of separate ranges, so that network errors only require refetching part of the object, and that approach deserves a separate implementation.)
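
A sketch of that shape (assuming aws_retry_sync can pass through the closure's success value; the get_object arguments are elided as in the original):

pub fn s3_download<W: Write>(
    client: S3Client,
    bucket: String,
    key: String,
    mut out: W,
) -> io::Result<()> {
    let buf = aws_retry_sync(move || {
        let response = client.get_object(...).sync()?;
        // A fresh buffer per attempt, so a failed attempt has no side effects.
        let mut buf = Vec::new();
        if let Some(body) = response.body {
            buf = body
                .fold(buf, |mut buf, bytes: Vec<u8>| -> io::Result<Vec<u8>> {
                    buf.write_all(&bytes)?;
                    Ok(buf)
                })
                .wait()?;
        }
        Ok(buf)
    })?;
    // Write to `out` only once the entire fetch has succeeded.
    out.write_all(&buf)
}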

Ownership types are for more than just memory and thread safety.

Mt Taranaki 2019

Last weekend I climbed Mt Taranaki again. Last time was just me and my kids, but this weekend I had a larger group of ten people — one of my kids and a number of friends from church and elsewhere. We had a range of ages and fitness levels but everyone else was younger than me and we had plans in place in case anyone needed to turn back.

We went this weekend because the weather forecast was excellent. We tried to start the walk at dawn on Saturday but were delayed because the North Egmont Visitor's Centre carpark apparently filled up at 4:30am; everyone arriving after that had to park at the nearest cafe and catch a shuttle to the visitor's centre, so we didn't start until 7:40am.

In short: we had a long hard day, as expected, but everyone made it to the crater, most of us by 12:30pm. Most of our group clambered up to the very summit, and we all made it back safely. Unfortunately clouds set in around the top not long before we got there, so there wasn't much of a view, but we had good views much of the rest of the time. You could clearly see Ruapehu, Ngauruhoe and Tongariro to the east, 180km away. It was a really great day. The last of our group got back to the visitor's centre around 6pm.

My kid is six years older than last time and much more experienced at tramping, so this time he was actually the fastest of our entire group. I'm proud of him. I think I found it harder than last time — probably just age. As I got near the summit my knees started to twinge and cramp if I wasn't careful on the big steps up. I was also a bit shorter of breath than I remember from last time. I was faster at going down the scree slope though, definitely the trickiest part of the descent.

On the drive back from New Plymouth yesterday, the part of the group in our car stopped at the "Three Sisters", rock formations on the beach near Highway 3 along the coast. I just saw it on the map and we didn't know what was there, but it turned out to be brilliant. We had a relaxing walk and the beach, surf, rocks and sea-caves were beautiful. Highly recommended — but you need to be there around low tide to walk along the riverbank to the beach and through the caves.

Sunday, 27 January 2019

Experimental Data On Reproducing Intermittent MongoDB Test Failures With rr Chaos Mode

Max Hirschhorn from MongoDB has released some very interesting results from an experiment reproducing intermittent MongoDB test failures using rr chaos mode.

He collected 18 intermittent test failure issues and tried running them 1000 times under the test harness and rr with and without chaos mode. He noted that for 13 of these failures, MongoDB developers were able to make them reproducible on demand with manual study of the failure and trial-and-error insertion of "sleep" calls at relevant points in the code.

Unfortunately rr didn't reproduce any of his 5 not-manually-reproducible failures. However, it did reproduce 9 of the 13 manually reproduced failures. Doing many test runs under rr chaos mode is a lot less developer effort than the manual method, so it's probably a good idea to try running under rr first.

Of the 9 failures reproducible under rr, 3 also reproduced at least once in a 1000 runs without rr (with frequencies 1, 3 and 54). Of course with such low reproduction rates those failures would still be pretty hard to debug with a regular debugger or logging.

The data also shows that rr chaos mode is really effective: in almost all cases where he measured chaos mode vs rr non-chaos or running without rr, rr chaos mode dramatically increased the failure reproduction rate.

The data has some gaps but I think it's particularly valuable because it's been gathered on real-world test failures on an important real-world system, in an application domain where I think rr hasn't been used before. Max has no reason to favour rr, and I had no interaction with him between the start of the experiment and the end. As far as I know there's been no tweaking of rr and no cherry-picking of test cases.

I plan to look into the failures that rr was unable to reproduce to see if we can improve chaos mode to catch them and others like them in the future. He hit at least one rr bug as well.

I've collated the data for easier analysis here:

Failure     Reproduced manually   rr-chaos reproductions   regular rr reproductions   no-rr reproductions
BF-9810     --                    0 /1000                  ?                          ?
BF-9958     Yes                   71 /1000                 2 /1000                    0 /1000
BF-10932    Yes                   191 /1000                0 /1000                    0 /1000
BF-10742    Yes                   97 /1000                 0 /1000                    0 /1000
BF-6346     Yes                   0 /1000                  0 /1000                    0 /1000
BF-8424     Yes                   1 /232                   1 /973                     0 /1000
BF-7114     Yes                   0 /48                    ?                          ?
BF-7588     Yes                   193 /1000                96 /1000                   54 /1000
BF-7888     Yes                   0 /1000                  ?                          ?
BF-8258     --                    0 /636                   ?                          ?
BF-8642     Yes                   3 /1000                  ?                          0 /1000
BF-9248     Yes                   0 /1000                  ?                          ?
BF-9426     --                    0 /1000                  ?                          ?
BF-9552     Yes                   5 /563                   ?                          ?
BF-9864     --                    0 /687                   ?                          ?
BF-10729    Yes                   2 /1000                  ?                          1 /1000
BF-11054    Yes                   7 /1000                  ?                          3 /1000