Monday, 20 June 2005

I Have A Dream

We're seeing some amazing advances in neuroscience. Today people are taking the first steps towards mental input-output --- such as reading basic thoughts, mental control of joysticks, and stimulation of vision centers. Where could this lead?

Wouldn't it be amazing to have a surgically implanted computer capable of directly receiving thoughts and inducing sensory stimulation? Think of the applications!

  • Do away with the whole messy business of voice, keyboard, mouse, etc.
  • Work at full efficiency in any environment, doing any activity
  • Differential GPS capability makes being lost an anachronism
  • Cellphones become telepathy
  • High-fidelity virtual experiences of all kinds (nudge nudge, wink wink)
  • Hook into sensor and actuator networks to make your devices direct extensions of yourself: your house, your car, your pet robotic dog, all at your direct mental command at any range, and all acting as additional senses


But it doesn't stop there. Choose to share your experiences with others in real time. Experience group consciousness. Weave together many experiences of the same event into an incredibly detailed whole.

There are some risks. For protection, all this has to be optional. But who could shut themselves off from this for long? It's all humanity's dreams rolled into one. (Well, most of them.)

Now imagine that stolen nukes go off in New York, London, Beijing and Bangalore. Tens of millions are dead, hundreds of millions more at risk, and the world teeters close to an all-out nuclear exchange. World leaders are told it is possible to create a virus that will cover the world in hours and provide temporary access to the minds of the implanted, maximising the chance of detecting terrorists --- even disabling any who are already implanted --- and pulling the world back from the brink. It's the only ray of hope, and the plan is executed successfully.

After the immediate threat has passed, citizens are given the option of withdrawing from the emergency cooperative. Such objectors are a potential security threat, and become the focus of the rest of the network. They are, in fact, disconnected from the system lest they interfere with it. It's not a popular option, taken up only by a few eccentrics --- almost no-one who has adjusted to the extraordinary lifestyle of the implanted can bring themselves to withdraw.

Over time, it is judged prudent for the security of the system --- and therefore humanity --- for all implanted minds to be nudged towards approval of the system, to avoid those rare cases where there are doubts. Likewise, they are encouraged to have their children implanted at a young age. Other mental traits that cause social friction are removed ... or if they cannot be removed, their exercise is immediately detected and countered within the mind of the perpetrator. The only remaining external threat comes from the dwindling communities of unimplanted, who are therefore relocated to zones where they cannot endanger themselves or others.

Now unrestricted thought belongs only to the men and women at the root of the system, who control its software and thereby shepherd humanity. What happens to them? Do they automate their oversight and relinquish their power, joining the blissful masses? Do they turn to some new and terrible direction? Or do they err, and all succumb to some terrible catastrophe?

I admit to being paranoid --- but to my knowledge, nothing here is technically improbable. We desperately need a new Orwell for the twenty-first century, someone to terrorise us with plausible visions so we know to flee from the hint of them.


15 comments:

  1. About the last statement: so true. Although 1984 was written to create what then seemed unnecessary fear, the warnings are applicable now more than ever. What we need is to require this book to be read and analysed in schools. (For me it was not: public school, Hillsborough County, FL.)

  2. Anonymous Coward, 21 June 2005 06:43

    The Orwellian abuse of technology is unlikely to happen on a global scale. Why? I suppose for the same reasons why humanity refuses to live under dictators such as Hitler or Pinochet or Saddam Hussein for very long, despite their technology of evil.

  3. Robert O'Callahan, 21 June 2005 07:22

    Hitler and Hussein were toppled from outside, not from inside. What if there is no outside --- if a tyranny emerges that's global from the beginning?

  4. Now imagine if all that software were written with Windows Embedded :-)

  5. "What if there is no outside --- if a tyranny emerges that's global from the beginning?"
    As long as the world's split up into sovereign countries scrabbling for influence and internal stability, there'll always be an outside. Personally I don't see countries or sovereignty going away in the foreseeable future; witness the reactions of many Europeans to the loss of even a small part of their sovereignty in their individual states to the EU. Of course, when that happens it's worth starting to worry, although I think then pervasive technologies such as you describe will be the least of our worries.

  6. Anonymous Coward, 21 June 2005 14:17

    That's logically possible, although I wonder if anyone has considered the dynamics of a global tyranny before. Would such a civilization be stable? How do societies work anyway? What makes one rise and the other crumble? Recently Jared Diamond, among others, has tried to answer this. Even if global tyranny arises, what conditions would enable it to last forever? Why is democracy more palatable than communism to human nature? These are important questions that must be answered. It's quite healthy to be paranoid about possible dangers, but I just happen to be optimistic about humanity these days. I'm quite sure our wisdom will overcome our stupidity.

  7. Simon Goldsmith, 21 June 2005 15:55

    Wow, Rob. That all sounds like the plot of a Philip K. Dick novel. I always thought Brave New World was closer to where we are heading.

  8. Paul Steffens, 21 June 2005 19:13

    IMHO, what's wrong with your worries is that humanity doesn't work like that.
    If such technologies come true, there will be not one system but several, and these systems will have complex subsystems. Society is a complex mess and always will be, and such a system will be just another reflection of society and its complexity.
    A book I find very interesting in this regard is Shirow Masamune's Ghost in the Shell. Yes, that's Japanese manga.
    But the world he describes is a step beyond Philip K. Dick and Gibson. It takes their concepts as a given and builds a complex and sometimes quite political reality on top of them. On the one hand there is little distinction between AI and human consciousness; on the other, its carrier can be virtual, mechanical or biological. All interchangeable and hackable.
    Highly recommended :)

  9. Robert O'Callahan, 21 June 2005 21:51

    Diversity would be great, but the fact is we have one Internet and one software monopoly. And that's not an accident of history; there are many powerful, self-reinforcing network effects driving us towards a unified technology base.
    The scenario I described starts with a unified technology base and suggests how tyranny could emerge from it. It is not overtly a top-down political imposition.
    As to whether it could be long-term stable ... obviously I don't know, but it's hard to argue that it *could not* be, if it wields powers of a scope and quality we have never seen before.
    I guess one of my points, which was also Orwell's point, is that technology puts us off history's map. We can't look to Jared Diamond for reassuring universal truths of human civilization, because the rules are different now. And my hypothetical mind interface device takes us a lot further into the unknown than Orwell went. Orwell himself wrote that the one thing his Party desired that they still lacked was the ability to read minds.

  10. Dysfunksional.Monkey, 22 June 2005 15:15

    "Now imagine if all that software were written with Windows Embedded :-)"
    Does that mean you'd see people with a BFOD: Blue Face Of Death?

  11. Well, I think the fact that we are already having a discussion about "brain security" years before the technology is even available indicates that, when the time comes to design and implant these brain chips, people are going to take security issues very seriously. One approach could be, instead of running a whole software-based, general-purpose OS on the "brain chip", to have all its instructions hardcoded in hardware, logic gates, etc., so that it physically could not be hacked or infected with a virus at all.
    Also, it will take much longer for this "brain internet" to come of age than the regular internet, due to health and safety concerns, the expense of the implant surgery, security concerns of the type you've described, etc., so society will have plenty of time to debate and test the thing.
    I for one am not putting anything in my head unless I have the ability to control all its inputs and outputs, as well as to simply turn the thing on and off as desired. Considering that everyone's brain is wired somewhat differently, it will be difficult enough to design a chip that gives you the ability to simply send and receive text. And I don't think I'd want a brain chip that could do much more than that --- what if a software error or some accidental thought turned off my heart?

  12. Shades of Vernor Vinge (see http://en.wikipedia.org/wiki/Technological_singularity and follow the links). Have you read his SF? "A Fire Upon the Deep" is great modern Space Opera -- recommended.
    /be

  13. Robert O'Callahan, 24 June 2005 22:48

    I've read it, and also "A Deepness In The Sky" which has some ideas related to what I talked about here.

  14. Christopher Robert Jaquez, 28 June 2005 08:17

    Sorry for the length of this post, but this is an issue I have thought a lot about, and this is the trimmed-down version:
    It seems to me that, in the spirit of the internet and those who believe "information wants to be free", any control of such a system by an oligarchy "at the root of the system" would be somewhat unlikely, though not necessarily impossible. Leaks and hacks in such a system, whether accidental or willful, benevolent or malicious, would only help to keep the system more open, much like the internet today with its widespread sharing of information (often in the form of piracy). The hackers of the day would surely not rest until all possible avenues of restoring the equality of the system had been explored and exploited. After all, on today's internet, do the authorities have any tools at their disposal that hackers do not, and why should they in the world of the future? The system you describe sounds to be an eerie mix of my personal paradise and the collective consciousness of the Borg. That being the case, it would be hard for an elite group at the top to make decisions independent of the rest of society, as it seems that many, if not all, decisions of public policy would be instantaneously carried out through pure democratic polling of everyone in the system. Such a society would become adherent to the whims of the majority (or at least the plurality). The likelihood seems remote of even a plurality thinking it best to take away the voice of the individual, despite the ease with which it can be heard, only to hand power over to a small few, when every member of society necessarily has the same capabilities as this would-be elite.
    Now, that being said, I would appeal to the incompleteness theorem and say that there is no way that we can ever flawlessly duplicate the human mind in any system, since we are constrained by the bounds of the human mind itself when creating such a system. Even after creating AI and enlisting it in our endeavor, it seems logical that the same limitation would apply to them as a product of our own minds. This would, of course, leave an un-sealable door open to the outside world, giving members of the system the potential for escape, even if only on a temporary basis. This is reminiscent of the premise of "The Matrix", where "if you are not one of us, you are one of them", but it is still possible to become "free" through purely mental hacking of the system. This, though, sounds more likely to be the means through which a terrorist strike would occur (short-term escape) than the path of Big Brother, since, seemingly, once he rejoined the system to take control, his intentions would become known to everyone and would instantly be vetoed.
    Now, like many others, my views must seem quite optimistic, but just as we need doomsayers to warn us when we are tending down the wrong path, we need others who have a firm faith in the heart of humanity, so that we are not afraid to explore potentially beneficial avenues for fear of what else lies beyond the good. A balanced argument is always what is needed, and for that, we all need to be heard.

  15. Robert O'Callahan, 28 June 2005 11:25

    > After all, on today's internet, do the authorities
    > have any tools at their disposal that hackers do
    > not, and why should they in the world of the
    > future?
    Today we have open hardware; most of the important network endpoints are ultimately under the control of the users, if you're willing to work hard enough. In the future most important network endpoints will contain "rights management" hardware that is ultimately under the control of the hardware and/or software vendors and in practice cannot be subverted by users or hackers. There will still be bugs and hacks, but it will be a lot more difficult to repurpose the infrastructure the way, say, the Internet was turned into an MP3 distribution network, or the way the WWW grew up in the face of commercial opposition.

