Tuesday, 28 July 2009

Idiots

The transition to robotic warfare scares me a lot. Soon, war will *only* kill civilians. Making war cheap for the aggressor can't be good. Most troublesome of all is the obvious problem that I rarely see discussed: computers, software and people being what they are, it's inevitable that drone command and control will be compromised by enemy agents. It will be very exciting when the US army turns around and flies back to flatten Washington, and no-one has any idea who did it. People think drones accentuate US military superiority, but it's not true; they neutralize it. This is going to be a tragic example of common sense being trumped by overconfidence and dazzling technology.



20 comments:

  1. Robotic warfare is and was inevitable. Unlike the Atomic Age, though, robotic warfare does not equate to mutually assured destruction; it is instead another step in conventional warfare. Talking about the "cheapness" of war is irrelevant, I think. Economics has never really been a concern of two or more countries at war. World War 2 got the USA out of the Great Depression, for example.
    Drone command being compromised? Given enough time, I'm sure it will happen periodically. Will a shipment of military weapons inevitably be stolen sometime soon? Probably from time to time. Will some rogue nation run off and build nuclear weapons in secret and then threaten to use them if it doesn't get what it wants? Oh wait... I can think of a couple cases already.
    I think this is just another stage in human evolution, but not as potentially dangerous as nuclear weapons are, at least in the short term (100 years?). Cyber and robotic combatants will have to contend with cyber and robotic anti-combatants. In the near future, if some drone gets hacked and taken on a joy ride, the collateral damage pales in comparison to a dirty bomb in the hands of a suicide bomber. What's the difference between a hacker with a drone and a suicidal terrorist with the means and the agenda to do harm? Human robots still do more harm than our mechanical creations do. (When Terminator's Skynet goes online, that will be a different story.)

    ReplyDelete
  2. It would be very ironic if that happened. But as it stands, don't all robots carry some sort of self-destruct on them?

    ReplyDelete
  3. Robert O'Callahan28 July 2009 22:40

    Damian: I doubt it, and even if they do, I bet it can be hacked.
    TNO: the problem is that electronic attacks scale. If you find a vulnerability that lets you compromise one unit, you can probably compromise lots of units, perhaps all the units. That is where electronic warfare diverges rapidly from physical warfare.

    ReplyDelete
  4. The locusts of the apocalypse :(

    ReplyDelete
  5. To Dave Carr: Aha... "But the day of the Lord will come like a thief. The heavens will disappear with a roar; the elements will be destroyed by fire, and the earth and everything in it will be laid bare." (2 Peter 3:10)
    But, anyway, this was predicted by futurologist Stanislaw Lem in the book "Peace on Earth"
    http://books.google.com/books?id=n0TDjTcGIawC&printsec=frontcover&source=gbs_navlinks_s

    ReplyDelete
  6. I think we (as humans in general) overlook the fact that electronic warfare is simply an extension of memetic warfare. It just gets us to our goal(s) arguably faster.
    Let's say I, as a hacker, infect one or more drones with a virus to target and destroy abortion clinics. This isn't very different from me, as a religious/political leader (social hacker), infecting one or more human drones with a memetic "virus" that alters their behavior to seek out and destroy abortion clinics.
    I'm not discounting the potential danger of one or more compromised military craft. I'm stating that, in comparison to the multitude of other already well-established dangers in existence (nuclear weapons, religious fanatics, biological agents), this threat still pales. Let's not overlook the fact that missile guidance systems have been around for at least 30 years and are comparable in sophistication.

    ReplyDelete
  7. VanillaMozilla29 July 2009 13:56

    "Soon, war will *only* kill civilians."
    Uh-huh. Like WWII and carpet bombing? I don't think so.
    "It will be very exciting when the US army turns around and flies back to flatten Washington, and no-one has any idea who did it."
    Uh-huh. Maybe, but most predictions are wrong.
    "People think drones accentuate US military superiority, but it's not true; they neutralize it."
    Really? Like everything else you've said, this requires _something_ to back it up. Look, war is bad, and scary. Everyone can agree on that. But 7 nice-sounding sentences on the future of war might not be evidence of the profound thinking that you thought it was. If you really have something to say, it will require a bit of careful analysis.

    ReplyDelete
  8. Robert O'Callahan29 July 2009 19:55

    TNO: "This isn't very different" --- a parallel can be drawn, but in reality it's completely different. Once an exploit exists, software can be compromised instantly, undetectably, remotely, completely, with no per-unit effort. Minds can't, at least not until we stick software in them.
    VanillaMozilla: the analysis is very simple. Every computer system we've ever built has bugs and vulnerabilities. People are always overconfident in their systems, especially in scenarios like the military where there are major immediate benefits in deploying them, and where vendors are selling like crazy. We know attackers are targeting military systems, with some success. I can't prove the future, obviously, but it would be very surprising if things *don't* go wrong. This isn't profound at all, which makes it all the more disturbing that this isn't raised every single time the pros and cons of robotic warfare are discussed.

    ReplyDelete
  9. Robert O'Callahan:
    I think that's an excessive overgeneralization. Not all exploits are equal, nor do all share those attributes simultaneously. One aspect of security, as you are aware, is not just preventing exploits but reducing the scope of their potential side effects as well. It's not as if these systems were blindly thrown together one day and sent off like RC toy helicopters. There has been a significant amount of research on this topic. For example: http://lambda-the-ultimate.org/node/2329
    A potential exploit in Firefox, or any other form of distributed software, could also compromise an untold number of computer systems to varying degrees, so should we all stop using them because of that fact? I think things need to be put in perspective here.

    ReplyDelete
  10. VanillaMozilla29 July 2009 22:27

    I notice that you didn't attempt to defend the first two statements, so let's discuss the notion that everything might simply get taken over and control will collapse.
    I know where you're coming from. Software stinks. A lot of it is leaky and poorly written, but a lot of it isn't. I suppose it's possible that they could run all the weapons on leaky Windows systems, connect them to the Internet, ignore security, and not take any countermeasures--but I'm pretty sure they don't.
    I don't claim that there won't be mistakes. There often are. But the military has had long experience with software and security, yet somehow, we don't see missiles launching at random or cruise missiles returning to attack their launch points. In theory, drones can be jammed or taken over, but this problem was thought of long before the first one was built, and I kinda think they might have thought about this. When you write that you don't know why "this isn't raised every single time the pros and cons of robotic warfare are discussed," I have to wonder why you assume it isn't. I'll bet maintaining control is just about the FIRST consideration.

    ReplyDelete
  11. Robert O'Callahan29 July 2009 22:29

    Not all exploits are equal, yes, but once you've worked out an exploit that compromises your target the way you want, you can generally engineer those attributes in.
    > I think things need to be put in perspective
    > here.
    OK: if Firefox is compromised, lots of people lose money; if military robots are compromised, lots of people get killed.
    Put it another way: if an exploitable Firefox bug could turn users into murderous killing machines, we'd turn the Internet off today.

    ReplyDelete
  12. Robert O'Callahan29 July 2009 22:29

    That's one summary I hope I never see in Bugzilla!

    ReplyDelete
  13. Robert O'Callahan29 July 2009 22:34

    > I have to wonder why you assume it isn't.
    Because it's not discussed in the article I linked to?
    > I notice that you didn't attempt to defend the
    > first two statements
    I don't play the game where the last person to comment wins.

    ReplyDelete
  14. Robert O'Callahan29 July 2009 22:51

    But just because you brought it up:
    Point 1: I think you mean firebombing, not carpet bombing, but either way, while bombing certainly removed soldiers from the action, WWII bomber crews were still at great risk. Anyway, bombing was only part of the war, and vast numbers of soldiers died in it.
    It's certainly true that as technology advances --- from melee weapons to projectiles, to firearms, to aircraft, to ballistic missiles, the people doing the killing have been increasingly removed from the action. But up till now you haven't been able to "win" a war without "grunts on the ground". Soon you'll be able to, which completes this process.
    Point 2: My statement is clearly a prediction, which cannot be proved. If you want to interpret it differently, that's your prerogative.

    ReplyDelete
  15. > That's one summary I hope I never see in Bugzilla!
    lol... I'm sure we'd both want to be on the CC list for that one.
    > OK: if Firefox is compromised, lots of people lose money; if military robots are compromised, lots of people get killed.
    I have first-hand knowledge of the fact that the browser is installed on a number of military networks which contain far more than just financial information, so it's not a black-and-white issue. A compromised system does not equate to a worst-case scenario.
    As VanillaMozilla and I have alluded to, we've had pseudo-AI military equipment in the wild for decades already without the dire results you've mentioned. No one is denying the potential danger or the possibility of the occurrence, but it's a bit of a stretch, I think, to assume that "this isn't raised every single time the pros and cons of robotic warfare are discussed". Just take a glance at some of the research surrounding Ada and its related system implementations.

    ReplyDelete
  16. Geoff Langdale30 July 2009 01:07

    Sorry Rob, you're way off on this one.
    I don't imagine that past performance (e.g. last 40 years or so) automatically predicts future results, but are we really anticipating a big-power war here? Frankly, the Taliban / Iraqi Army / North Korean Army / etc. (pick your plausible asymmetric foe here) doesn't seem likely to be able to deploy a sophisticated electronic countermeasure to robotic soldiers, which in any case will probably be fairly locally controlled. I'm sure these guys would like to have launch codes for ICBMs too, as has been pointed out, but that hasn't happened for years.
    As for civilian casualties, I think I'd rather be a civilian being held at gunpoint by a remotely operated drone with a human operator in a bunker somewhere than a civilian being held at gunpoint by a bunch of jumpy Marines who have had several of their comrades killed or injured by people who look just like me.
    There's always the danger of having people 'remote from the action' treat warfare as a video game, but the people who are immersed in the action are not necessarily perfect in their judgements.
    Finally, there are some wars that should have happened but didn't, largely because of a reluctance to see US soldiers die at any cost. I'm thinking of Rwanda here. It's not as if military intervention always takes place against a background of 'zero civilian casualties'; sometimes it's a choice between 'bad' and 'far worse'. An increased willingness to tolerate military actions - even if that means killing civilians - does not automatically lead to 'more civilians dying in violent ways'. Are Bill Clinton's hands cleaner because no US soldier shot a Rwandan civilian during that conflict?

    ReplyDelete
  17. Robert O'Callahan30 July 2009 02:01

    Hello Geoff!
    Seems to me that hackers-for-hire are pretty well established. I don't see why asymmetric foes couldn't engage them and pull this off.
    Robotic soldiers aren't necessarily locally controlled. Predator drones are controlled from the USA, for example.
    Your last point is an interesting one that I hadn't thought about. I agree that war isn't always a bad thing. However, I'm pessimistic enough to claim that it usually is, and making it easier isn't a good direction.

    ReplyDelete
  18. Geoff Langdale30 July 2009 04:25

    I think if I want to daub nationalistic slogans on a web site somewhere, even one controlled by the US military, or if I want to get some semi-secure corporate facility hacked I can find a hacker-for-hire pretty easily. I doubt that I can find someone to hack me ICBM launch codes.
    I think there's no real 'practice' market so that people can work their way up to cracking high-security military systems. You can either do it or you can't, and developing and selling this expertise is going to be ridiculously difficult.
    Non-local control is sort of ideal. The more distant the operator, the less likely their judgement is overridden by emotion. Hopefully what this means is that they won't decide to mouse-click on a car full of kids in order to save a $10K robotic sentry point...
    Overall, though, I agree with the general case that making war easier doesn't necessarily yield good outcomes. To use another Clinton-era example, the various cruise-missile uses were 'easy' and usually wildly misdirected and inappropriate in their outcomes. Rwanda wouldn't have been 'solved' by dropping in 10,000 robot soldiers all over the place, either...

    ReplyDelete
  19. Robert O'Callahan30 July 2009 06:24

    I argue that more distant operators are probably more likely to do terrible things. The great atrocities of history have relied on the victims being dehumanized in the eyes of the perpetrators, and that's easier when they're further away and less visible, I think.

    ReplyDelete
  20. The elimination of personal danger, I think, would free you from a sense of desperation and allow you to be more objective. I can guarantee you that when I was in Iraq, I would have had different thought processes if I had been remotely operating a drone instead of sitting there having bombs dropped on me five times a day and getting scornful looks from civilians. The removal of mortal danger doesn't promote inhumane actions so much as more indifferent ones.

    ReplyDelete