(…)a hypothetical event occurring when technological progress becomes so extremely rapid, due in most accounts to the technological creation of superhuman intelligences, that it makes the future after the singularity qualitatively different and harder to predict. It has been proposed that a technological singularity will occur in the 21st century via one or more possible technological advances.
Doctorow discusses Newitz’s post and agrees with her in identifying
a common flaw in futurist prediction: assuming that technology will advance far enough to benefit us, and then stop before it disrupts us. For a prime example from recent history, consider the record industry: it correctly believed that technology would advance to the point where we could have optical disc players in every room, encouraging us to buy all our music again on CD, but failed to understand that technology would keep advancing to the point where we could rip all those CDs and share the music on them over the Internet.
I think most of us in the media industry have learned to see the disruptive effects of technological change. But as I mentioned in my previous post, one of the common themes in near-future science fiction is security, which goes far beyond the upheaval of particular industries and touches on our very survival.
If the bold predictions of the singularity thinkers are even remotely true, the growth of our technological capabilities risks enabling even small groups or individuals to produce and use weapons of mass destruction. These weapons could very well be designer bioviruses (read Doctorow’s interview with Ray Kurzweil).
To prevent such a catastrophic event, authorities could try to establish a Big Brother regime, heavily regulating the dissemination of technology and knowledge. Kurzweil does not believe that would be the right response:
In Huxley’s Brave New World, the rationale for the totalitarian system was that technology was too dangerous and needed to be controlled. But that just pushes technology underground where it becomes less stable. Regulation gives the edge of power to the irresponsible who won’t listen to the regulators anyway.
The way to put more stones on the defense side of the scale is to put more resources into defensive technologies, not create a totalitarian regime of Draconian control.
This of course acknowledges the danger in a rather optimistic way – science and technology will deliver the tools necessary to stop the ultimate evil use of that same science and technology.
We could expand this discussion to media in general. Our networks, our beloved internet and the way it allows us to spread and discuss ideas, also help those who are sufficiently alienated to dream of mass destruction. Even discussing how difficult it is to design a biovirus capable of erupting and spreading silently with a long incubation period could incite some disgruntled young man (for a number of reasons, it seems to be primarily young males who harbor such destructive desires) to actually try it out. But then again, talking about it openly could make more people aware of the dangers ahead and stimulate ideas and policies to deal with them.
What is fascinating as well as frightening is that the blending of augmented reality, virtual reality and physical reality is a very fundamental process. Often we think of augmented reality and virtual worlds as ‘constructed’ environments while physical reality is more stable, more solid. In fact, what we call ‘physical reality’ changes all the time – the ancient insight of the Greek philosopher Heraclitus. We humans are working hard to control matter on the atomic and molecular scale, adding insights from biology and using our ever-expanding computing power – which one day may no longer be ‘our’ power.
In the end, the question in this ‘mixed realities’ world is whether we’re realizing the old dream of ensuring the conditions for prosperity and happiness for all, or whether the endgame of humanity is near.