In the book Race Against the Machine, written by the MIT researchers Erik Brynjolfsson and Andrew McAfee, one of the examples of exponential technological development is the self-driving Google car. Google claims to have safely completed over 200,000 miles of computer-led driving, even though there is some discussion about that figure.
Of course there is scepticism about the self-driving car (read also the post on TPM Idealab). Who is ultimately responsible if something goes wrong? There must be some human to blame, no?
In Race Against the Machine the vision is optimistic. The book recounts how in 2004 the first DARPA Grand Challenge (held in an unpopulated desert) ended miserably. By 2010 Google could announce that it had modified Toyota Priuses into fully autonomous cars. In only a few years’ time something seemingly impossible became possible.
So maybe, in a not too distant future, we’ll consider automated truck driving normal, while we deem the practice of humans driving trucks foolishly dangerous. But then again, isn’t driving a car or a truck a most passionate act, and can we even imagine making computer driving the default option?
Information technology could change other things on our roads as well, even without self-driving cars. When the Internet of Things turns objects into endless streams of communication, it would seem very odd to continue with crude interventions such as random speed limit controls – and even odder, the announcement and localization of those controls by the authorities themselves. What we can expect instead are cars that constantly send out streams of information, triggering alerts when speed limits and other regulations are violated. Those alerts could easily be communicated, and the offending drivers would pay the consequences or would at least have to justify their driving behavior.
But do we actually want this to happen? Of course we can locate almost each and every one of us by way of smartphone signals, but monitoring this information and sending it to the authorities seems unacceptable – except in very specific situations. Once again the notion of freedom, of something “typically human”, seems to be challenged. Even though one could argue that hundreds of thousands of tragic traffic accidents could be avoided by a combination of computer-driven cars and real-time monitoring, one can be sure there will be stiff resistance from those claiming that fundamental privacy rights and other freedoms are being sacrificed in a kind of Big Brother system.
The wider discussion is about “radical transparency”, which demands not only that corporations and authorities be transparent, but that transparency would be the “normal” situation for every citizen. It seems to be the underlying philosophy of Facebook, for instance. The idea is that such transparency would make the diversity of lifestyles obvious and as such increase tolerance. Media expert danah boyd discusses this for instance on her blog apophenia, commenting on these quotes from Facebook founder Zuckerberg:
“We always thought people would share more if we didn’t let them do whatever they wanted, because it gave them some order.” – Zuckerberg, 2004
“You have one identity… The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly… Having two identities for yourself is an example of a lack of integrity” – Zuckerberg, 2009
She explains her position also in this recent video:
Webstock ’12: danah boyd – Culture of Fear + Attention Economy = ?!?! from Webstock on Vimeo.
What is fascinating is that these discussions are not about some distant future. All these things – ubiquitous social networks, the Internet of Things, self-driving cars, the demand for radical transparency, the viral spreading of fear – are realities, sometimes in very early phases, but realities nevertheless.