Monday 27 July 2009
Idiots
The transition to robotic warfare scares me a lot. Soon, war will *only* kill civilians. Making war cheap for the aggressor can't be good. Most troublesome of all is the obvious problem that I rarely see discussed: computers, software and people being what they are, it's inevitable that drone command and control will be compromised by enemy agents. It will be very exciting when the US drone fleet turns around and flies back to flatten Washington, and no-one has any idea who did it. People think drones accentuate US military superiority, but it's not true; they neutralize it. This is going to be a tragic example of common sense being trumped by overconfidence and dazzling technology.
Comments
Drone command being compromised? Given enough time, I'm sure it will happen periodically. Will a shipment of military weapons inevitably be stolen sometime soon? Probably, from time to time. Will some rogue nation run off and build nuclear weapons in secret and then threaten to use them if it doesn't get what it wants? Oh wait... I can think of a couple of cases already.
I think this is just another stage in human evolution, but not as potentially dangerous as nuclear weapons, at least in the short term (100 years?). Cyber and robotic combatants will have to contend with cyber and robotic anti-combatants. In the near future, if some drone gets hacked and taken on a joyride, the collateral damage will pale in comparison to a dirty bomb in the hands of a suicide bomber. What's the difference between a hacker with a drone and a suicidal terrorist with the means and the agenda to do harm? Human robots still do more harm than our mechanical creations do. (When Terminator's Skynet goes online, that will be a different story.)
TNO: the problem is that electronic attacks scale. If you find a vulnerability that lets you compromise one unit, you can probably compromise lots of units, perhaps all the units. That is where electronic warfare diverges rapidly from physical warfare.
But, anyway, this was predicted by futurologist Stanislaw Lem in the book "Peace on Earth":
http://books.google.com/books?id=n0TDjTcGIawC&printsec=frontcover&source=gbs_navlinks_s
Let's say I, as a hacker, infect one or more drones with a virus to target and destroy abortion clinics. This isn't very different from me, as a religious/political leader (social hacker), infecting one or more human drones with a memetic "virus" that alters their behavior to seek out and destroy abortion clinics.
I'm not discounting the potential danger of one or more compromised military craft. I'm stating that this threat still pales in comparison to the multitude of other well-established dangers already in existence (nuclear weapons, religious fanatics, biological agents). Let's not overlook the fact that missile guidance systems have been around for at least 30 years and are comparable in sophistication.
"Soon, war will *only* kill civilians."
Uh-huh. Like WWII and carpet bombing? I don't think so.
"It will be very exciting when the US army turns around and flies back to flatten Washington, and no-one has any idea who did it."
Uh-huh. Maybe, but most predictions are wrong.
"People think drones accentuate US military superiority, but it's not true; they neutralize it."
Really? Like everything else you've said, this requires _something_ to back it up. Look, war is bad, and scary. Everyone can agree on that. But 7 nice-sounding sentences on the future of war might not be evidence of the profound thinking that you thought it was. If you really have something to say, it will require a bit of careful analysis.
VanillaMozilla: the analysis is very simple. Every computer system we've ever built has bugs and vulnerabilities. People are always overconfident in their systems, especially in scenarios like the military where there are major immediate benefits in deploying them, and where vendors are selling like crazy. We know attackers are targeting military systems, with some success. I can't prove the future, obviously, but it would be very surprising if things *don't* go wrong. This isn't profound at all, which makes it all the more disturbing that this isn't raised every single time the pros and cons of robotic warfare are discussed.
I think that's an excessive overgeneralization. Not all exploits are equal, nor do they all share those attributes simultaneously. One aspect of security, as you are aware, is not just preventing exploits but also reducing the scope of their potential side-effects. It's not as if these systems were blindly thrown together one day and sent off like RC toy helicopters. There has been a significant amount of research on this topic. For example: http://lambda-the-ultimate.org/node/2329
A potential exploit in Firefox, or any other widely distributed software, could also compromise an untold number of computer systems to varying degrees, so should we all stop using them because of that fact? I think things need to be put in perspective here.
I know where you're coming from. Software stinks. A lot of it is leaky and poorly written, but a lot of it isn't. I suppose it's possible that they could run all the weapons on leaky Windows systems, connect them to the Internet, ignore security, and not take any countermeasures, but I'm pretty sure they don't.
I don't claim that there won't be mistakes. There often are. But the military has had long experience with software and security, yet somehow, we don't see missiles launching at random or cruise missiles returning to attack their launch points. In theory, drones can be jammed or taken over, but this problem was thought of long before the first one was built, and I kinda think they might have thought about this. When you write that you don't know why "this isn't raised every single time the pros and cons of robotic warfare are discussed," I have to wonder why you assume it isn't. I'll bet maintaining control is just about the FIRST consideration.
> I think things need to be put in perspective here.
OK: if Firefox is compromised, lots of people lose money; if military robots are compromised, lots of people get killed.
Put it another way: if an exploitable Firefox bug could turn users into murderous killing machines, we'd turn the Internet off today.
Because it's not discussed in the article I linked to?
> I notice that you didn't attempt to defend the first two statements
I don't play the game where the last person to comment wins.
Point 1: I think you mean firebombing, not carpet bombing, but either way, while bombing certainly removed soldiers from the action, WWII bomber crews were still at great risk. Anyway, bombing was only part of the war, and vast numbers of soldiers died in it.
It's certainly true that as technology advances, from melee weapons to projectiles, to firearms, to aircraft, to ballistic missiles, the people doing the killing have been increasingly removed from the action. But up till now you haven't been able to "win" a war without "grunts on the ground". Soon you'll be able to, which completes this process.
Point 2: My statement is clearly a prediction, which cannot be proved. If you want to interpret it differently, that's your prerogative.
lol... I'm sure we'd both want to be on the CC list for that one.
> OK: if Firefox is compromised, lots of people lose money; if military robots are compromised, lots of people get killed.
I have first-hand knowledge of the fact that the browser is installed on a number of military networks which contain far more than just financial information, so it's not a black-and-white issue. A compromised system does not equate to a worst-case scenario.
As VanillaMozilla and I have alluded to, we've had pseudo-AI military systems in the wild for decades already without the dire results you have mentioned. No one is denying the potential danger or the possibility of the occurrence, but it's a bit of a stretch, I think, to assume that "this isn't raised every single time the pros and cons of robotic warfare are discussed". Just take a glance at some of the research surrounding Ada and its related system implementations.
I don't imagine that past performance (e.g. the last 40 years or so) automatically predicts future results, but are we really anticipating a big-power war here? Frankly, the Taliban / Iraqi Army / North Korean Army / etc. (pick your plausible asymmetric foe here) doesn't seem likely to be able to deploy a sophisticated electronic countermeasure to robotic soldiers, which in any case will probably be fairly locally controlled. I'm sure these guys would like to have launch codes for ICBMs too, as has been pointed out, but that hasn't happened in all these years.
As for civilian casualties, I think I'd rather be a civilian being held at gunpoint by a remotely-operated drone with a human operator in a bunker somewhere than a civilian being held at gunpoint by a bunch of jumpy Marines who have had several of their comrades killed or injured by people who look just like me.
There's always the danger of having people 'remote from the action' treat warfare as a video game, but the people who are immersed in the action are not necessarily perfect in their judgements.
Finally, there are some wars that should have happened but didn't, largely because of a reluctance to see US soldiers die at any cost. I'm thinking of Rwanda here. It's not as if military intervention always takes place against a background of 'zero civilian casualties'; sometimes it's a choice between 'bad' and 'far worse'. An increased willingness to tolerate military actions - even if that means killing civilians - does not automatically lead to 'more civilians dying in violent ways'. Are Bill Clinton's hands cleaner because no US soldier shot a Rwandan civilian during that conflict?
Seems to me that hackers-for-hire are pretty well established. I don't see why asymmetric foes couldn't engage them and pull this off.
Robotic soldiers aren't necessarily locally controlled. Predator drones are controlled from the USA, for example.
Your last point is an interesting one that I hadn't thought about. I agree that war isn't always a bad thing. However, I'm pessimistic enough to claim that it usually is, and making it easier isn't a good direction.
I think there's no real 'practice' market where people can work their way up to cracking high-security military systems. You can either do it or you can't, and developing and selling this expertise is going to be ridiculously difficult.
Non-local control is sort of ideal. The more distant the operator, the less likely their judgement is overridden by emotion. Hopefully what this means is that they won't decide to mouse-click on a car full of kids in order to save a $10K robotic sentry point...
Overall, though, I agree with the general case that making war easier doesn't necessarily yield good outcomes. To use another Clinton-era example, the various cruise-missile uses were 'easy' and usually wildly misdirected and inappropriate in their outcomes. Rwanda wouldn't have been 'solved' by dropping in 10,000 robot soldiers all over the place, either...