Bruce Sterling and the convergence of humans and machines

Bruce Sterling is a tremendously inspiring science fiction author, futurist, design thinker and cultural critic. Menno Grootveld and Koert van Mensvoort had a great interview with him about Artificial Intelligence, the Technological Singularity and all things convergence of humans and machines.

In this interview, Sterling explains in very clear terms how we are prisoners of metaphysical ideas when we believe in ‘thinking machines’ and entities which would become like artificial super-humans.

Of course, technologists are seduced by these ideas, but then their projects get unbundled into separate products and services.

It’s not that Sterling wants to curb our ambitions regarding computers and algorithms. Rather, he explains that by trying to think about our technological futures in anthropocentric terms, we actually limit what is possible.

Whether you are obsessed by the Singularity or not, this is a must-read interview.

#SUsummit Amsterdam showcases the augmentation of everything

Virtual worlds are often weird environments. The innovators in that industry have a broad view on our future. Conferences and community conventions offer fascinating insights and discussions. I remember how futurist and technologist Ray Kurzweil gave a (virtual) presentation during the Second Life Community Convention 2009 in San Francisco. Afterwards I rushed to the Green Apple bookstore to buy his book The Singularity is Near (2005). Wikipedia explains:

Kurzweil describes his law of accelerating returns which predicts an exponential increase in technologies like computers, genetics, nanotechnology, robotics and artificial intelligence. He says this will lead to a technological singularity in the year 2045, a point where progress is so rapid it outstrips humans’ ability to comprehend it. Irreversibly transformed, people will augment their minds and bodies with genetic alterations, nanotechnology, and artificial intelligence. Once the Singularity has been reached, Kurzweil predicts machine intelligence will be infinitely more powerful than all human intelligence combined. Afterwards, Kurzweil says, intelligence will radiate outward from the planet until it saturates the universe.

Fast forward to December 2012, when Kurzweil was hired by Google “to bring natural language understanding to Google”. He was involved in various education and learning projects, one of the most interesting being the Singularity University (SU), which he co-founded with Peter Diamandis in 2008.

The headquarters of the SU are at Moffett Federal Airfield (NASA Research Park), California, but in Europe we can attend two-day Summit conferences. Last year I attended the Singularity University conference in Budapest, Hungary, where I (together with other participants) built a mind map about the state of the future at that time. Topics of that mind map include ambient intelligence (sensors, ubiquitous computing, networks), robots, energy, artificial intelligence, 3D printing, synthetic biology, health and medical services, and organizational change. In short, it’s about how the augmentation of the human intellect materializes and disrupts just about everything.

This year my newspaper colleague Peter De Groote went to the Amsterdam Summit. He reported in De Tijd that the fully self-driving car will be available in ten years’ time, that robots are still toddlers but are growing up fast (and they can read your emotions), that artificial intelligence evolves from disappointing to disruptive, and that we should no longer limit ourselves to wearables: implantables are next in line to augment us.

Virtual Worlds

So… nothing about virtual reality and virtual worlds in this disruption overview? Yes there was. First let’s take a step back: in September Jason Dorrier posted on SingularityHub about Virtual Reality – will it become the next great media platform? He showed this inspiring video:

The idea is that technologies such as Oculus Rift and the new generation of virtual worlds (think High Fidelity) will make it possible to visit the worlds in the other person’s head. We make our dreams accessible, quite literally. Which brings us to brain-to-brain communication and yes, this is a Singularity topic. One example being discussed: University of Washington researchers can transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

Rob Nail talked during the conference about exciting applications, from allowing a surgeon to operate from 10,000 km away to a pilot assisting a non-pilot in landing an aircraft. Or how we network and augment our brains quite literally…

Inventing a New University

One of the courses I really enjoyed these last few months was History and Future of (Mostly) Higher Education, by professor Cathy N. Davidson (Duke University) on the Coursera platform. The final assignment was the invention of a new institution of higher education. This was my answer, and yes, I did mention virtual environments… I called the thing Peeragogy University, named after a project facilitated by Howard Rheingold.


It’s a pleasure and an honor to present our Peeragogy University. We firmly believe that we live in an epoch of exponential change. The old industrial ways of thinking no longer apply to learning and teaching (see our Duke U course). We want to help our students to become Change Masters, or rather, we want them to help each other (peer-to-peer) to become Change Masters.

What is our Mission Statement? There are three crucial skills we want our graduates to acquire:
1) The deep understanding of the fact that this education is not about them. It’s about what they can do for humanity.
2) The deep understanding and the skill of connecting to others in order to realize our dreams. During the program students will discover how connected the big issues of our time are, and how necessary it is to break out of academic silos to work together, to celebrate diversity in our teams. “Diversity” also means that we involve people from outside the institution and from outside academia. We use the wisdom and creativity of artists to facilitate this (see Duke U course).
3) The deep understanding and the skill of learning how to learn and adapt to emerging technologies in the broadest sense of the word ‘technologies’. So the crucial skill and value here is the eagerness to learn, and to learn how to learn throughout their lives (content vs. learning, Duke U course). 

What is the structure of our institution? 
Every student gets a preliminary course of about ten weeks. Leading experts will present major breakthroughs in information technologies, biotech, management (including new ways to launch a project or a business), healthcare, robotics, nanotechnology, energy systems and the maker industries (3D printing, DIY drones etc.). These are the competence clusters which form the basic structure of Peeragogy University.

The students will actually experiment (learning by making, see Duke U course) with bio-hacking, programming, robotics, genetic engineering, management principles… These weeks will be inspired by what the Singularity University is already doing in California. What we add: we’ll help the students to explicitly build a personal learning environment, making use of their social connections online and/or on campus and of the affordances of the internet (blogs, wikis, social bookmarks, forums, crap detection and information dashboards). See also the Digital Literacies as discussed in the Duke U course.

After these ten weeks students will have to decide what their Major Project will be for the following years (we have a four-year program in place). This project must make a difference for humanity (see Mission Statement). Maybe something which can affect the lives of millions of people? Typically, this project will make it necessary to acquire advanced knowledge and skills in several subjects.

However, not everybody who has some healthcare project as his Major Project will need to become a surgeon. Maybe it’s more interesting to become a robotics specialist in order to contribute to a breakthrough (think exoskeletons for paraplegics). Becoming a robotics specialist probably implies great skill in programming and algorithms. Someone else in the team will become an expert in capital markets in order to find ways to get financing and to develop a financial plan. A third person can contribute because of special knowledge regarding patient psychology and sociology (see Mission Statement about connecting). For each special skill the faculty experts will not teach as Sages on the Stage, but as facilitators of project-based peer-to-peer learning.

As these students try to change the world, they will have to reflect on what they’re doing (Mission statement: meta-learning). They will discuss on an academic level, using the resources of philosophy, logical thinking and using art as a way to mobilize more people for their projects and diversify their teams. 

Frequently Asked Questions 

Who are the teachers? All students are also teachers. They work in project teams, and we’ll also organize contacts between the teams. We do have faculty: these are recognized experts in their fields, from academia but also from outside academia. They will be facilitators of the learning.
Who are the students? We do not require specific diplomas. We do run an Introductory MOOC (3 months), and achievements during that MOOC will be an important element for admission to the online or physical campus for the full four-year program.

Where do we meet? Our Campus is situated in Portland, Oregon, right next to some famous beer micro-breweries. However, we run an international Introductory MOOC (three months) and a Companion MOOC which runs on a permanent basis. We make heavy use of virtual environments to create an interesting online alternative to the physical campus.

Who pays and how much? 
Peeragogy University has found some generous sponsors, but we nevertheless have to charge a fee for the physical campus experience: $80,000 for one year, housing, tuition and food included. There is a considerable discount for tuition-only students.
The Companion MOOC version is free, except for those students who want a formal assessment of their work ($5,000 on a yearly basis). The Introductory MOOC is free, except for those who want an assessment in order to gain access to the four-year program ($100).
Students who have financial difficulties can apply for special sponsoring. Students will learn during the Introductory MOOC how to finance their studies (alternative financing techniques). 

Peeragogy University organizes short-term programs for companies and government institutions. These programs help finance the Peeragogy University.

Assessments and Certificates: the assessments are based on the performance during the year – compare it to assessments for company and government workers. Important elements are creativity, how people collaborate, how they learn, how impressive their skills are. We have a completion diploma, but even more important are the Peeragogy Badges (see Duke U course), which reflect the skills of the student. Important to realize: the Major Projects can become companies or institutions outside of Peeragogy University. Students learn to inform venture capitalists, government and social profit players about their Major Projects…

Beware, the Singularity Comes Nearer: Kurzweil Teams Up with Google

((I had a rather troublesome evening posting this stuff about the entry of Ray Kurzweil in Google. I tried to use Storify for this, but that service and WordPress are a difficult combination. In general, I find WordPress to be rather difficult to use, compared to other blogging platforms.))

Anyway, here we go. The author, futurist and inventor Ray Kurzweil joins Google as a Director of Engineering. The news is a few days old by now, and I wrote a column for my newspaper about the event. Here’s some of the stuff I used – videos and blog posts, and also some stuff I did not yet use.

Kurzweil and Google already had a good relationship, as they work together in the Singularity University. Here is Kurzweil explaining the Singularity and the academic program. Here you find a trailer for The Singularity is Near, featuring Ray Kurzweil. Kurzweil is known for his bold predictions, like his view that brain uploads are nearer than we think.

So do we have to conclude that Google and all the folks at the Singularity University share these same convictions? I don’t think so. I guess that companies and institutions (not only Google; NASA is also involved with the Singularity University) are interested in the research linked to these visions – realizing that even if artificial intelligence and brain uploads take more time than Kurzweil expects, or lead to outright failures, we’ll learn lots of things which could be very useful in other contexts.

Read also what these people wrote about Kurzweil joining Google:
– Jason Dorrier on the Singularity Hub
– Jon Mitchell at ReadWrite

And now for something slightly different. What about politics and social issues in all this? I posted about a remarkable analysis by the National Intelligence Council, indicating possible social and political tensions when only the rich would be able to augment themselves.

There are also consequences for national sovereignty, as discussed in this interview at Metahaven with Benjamin Bratton, who is about to publish the book The Stack: On Software and Sovereignty (The MIT Press, 2013).

The Rapture of the Nerds

I’ve just finished reading The Rapture of the Nerds, written by Charlie Stross and Cory Doctorow. It’s an often almost dream-like book about life in the era of the technological singularity. Wikipedia explains:

The technological singularity is the hypothetical future emergence of greater-than-human superintelligence through technological means. Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the occurrence of a technological singularity is seen as an intellectual event horizon, beyond which events cannot be predicted or understood.

However, followers of Ray Kurzweil had better have some sense of humor when reading this book, as it is a comic novel.
It often is even hilarious, and especially those who have experienced life in user-generated virtual worlds such as Second Life or OpenSim will have a good time reading this stuff. Charlie Stross and Cory Doctorow are very interesting authors who did some serious thinking about the issue of the singularity, and it seems they are very sceptical about it (Stross maybe even more than Doctorow). Here you find a wide-ranging interview with blogger, activist and author Doctorow:

Also visit Charlie’s Diary, the blog of Charles Stross, and then try to comment there: you’ll be linked to an interesting Google Group where Stross interacts with his readers. He seems to use the Google Group to avoid spam.
At The WELL you can participate in a forum discussion with both authors.

Kevin Slavin about those algorithms that govern our lives

What does our near future look like as computing and fast internet access become ubiquitous and ever more digital data become available in easy-to-use formats? Well, it seems our world is being transformed by algorithms, and at the LIFT11 conference in Geneva, Switzerland, Kevin Slavin presented some fascinating insights about this disruptive change.

I’ll try to summarize his talk, adding some musings of my own, such as the stuff about social capital rankings and the Singularity.

Kevin Slavin is the co-founder of Starling, a co-viewing platform for broadcast TV, specializing in real-time engagement with live television. He also works at Area/Coding, now Zynga New York, taking advantage “of today’s environment of pervasive technologies and overlapping media to create new kinds of gameplay.” He teaches Urban Computing at NYU’s Interactive Telecommunications Program, together with Adam Greenfield (author of Everyware: The dawning age of ubiquitous computing).


Slavin loves Lower Manhattan, the Financial District. It’s a place built on information. Big cities had to learn to listen: London, for instance, had to use a new technology during World War II, called radar, to detect incoming enemy bombers. Which would lead to Stealth airplanes, the so-called invisible, untraceable planes – but even a Stealth plane can be located and shot down, as appeared in Serbia.

Slavin is a master in explaining technologically complex things. For instance, the idea behind Stealth is to break up the big thing – the bomber – into a lot of small things which look like birds. But what if you don’t try to look for birds, but for big electrical signals? If you can “see” such a signal while nothing appears on your radar, well, chances are that you’re looking at an American bomber.

(Which reminds me: in this day and age, forget about privacy. If you want to hide, the only strategy is to send out lots of conflicting and possibly fake signals – I think futurist Michael Liebhold said that somewhere. His vision of the Geospatial Web: “Imagine as you walk through the world that you can see layers of information draped across the physical reality, or that you see the annotations that people have left at a place describing the attributes of that place!”

Just as was the case for Stealth, it just takes math, pattern recognition etc. to find out who or what hides behind all the bits of information one leaves behind.)

The same reasoning applies for other stealthy movements, like those on financial markets. Suppose you want to process a huge financial deal through the market, without waking up other players. The stealth logic is obvious: split it up in many small parts and make them appear to move randomly.
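That slicing logic can be sketched in a few lines of Python. This is purely my own toy illustration, not any actual trading system: one large parent order is split into many small, random-sized child orders.

```python
import random

def slice_order(total_shares, min_size=100, max_size=500, seed=42):
    """Split one large parent order into many small, random-sized
    child orders, so no single trade reveals the full position."""
    rng = random.Random(seed)
    children = []
    remaining = total_shares
    while remaining > 0:
        # Each child order gets a random size, capped by what's left.
        size = min(remaining, rng.randint(min_size, max_size))
        children.append(size)
        remaining -= size
    return children

children = slice_order(100_000)
print(len(children), "child orders, largest:", max(children))
```

A real execution algorithm would also randomize the timing and routing of the child orders; the sketch only shows the size-splitting idea.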

But then again, it’s only math, which can be broken by other math. It’s a war of algorithms. As Wikipedia explains:

Starting from an initial state and initial input (perhaps null), the instructions describe a computation that, when executed, will proceed through a finite number of well-defined successive states, eventually producing “output” and terminating at a final ending state.
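A classic, concrete instance of that definition is Euclid’s algorithm for the greatest common divisor: an initial state, a finite chain of well-defined successor states, and a terminating output.

```python
def gcd(a, b):
    """Euclid's algorithm: starting from the initial state (a, b),
    each loop iteration is one well-defined state transition, and
    the process terminates after finitely many steps with an output."""
    while b != 0:
        a, b = b, a % b   # successor state
    return a              # final ending state: the output

print(gcd(1071, 462))
```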

Slavin says that 70 percent of all trades on Wall Street are either an algorithm trying to be invisible or an algorithm trying to find out about such algorithms. That’s what high frequency trading is about: finding those things moving through the financial skies.

Who will be the winner? It’s not only about the best algorithm or the best computer, but also about the best network – we’re talking milliseconds here. If you’re sitting on top of a carrier hotel, where all the internet pipes in a big city surface, you have such an advantage. The internet is not some perfectly distributed thing floating around out there; it has physical properties which, for instance, determine the price of real estate in cities.
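A back-of-the-envelope calculation shows why those milliseconds map onto geography. The speeds are rounded approximations (light in fiber travels at roughly two thirds of its vacuum speed):

```python
SPEED_OF_LIGHT_KM_S = 300_000   # vacuum, rounded
FIBER_FRACTION = 2 / 3          # light in glass: roughly 2/3 of c

km_per_ms = SPEED_OF_LIGHT_KM_S * FIBER_FRACTION / 1000
print(f"A signal travels roughly {km_per_ms:.0f} km per millisecond in fiber")

# Sitting 2 km farther from the exchange costs about 2 / 200 = 0.01 ms
# per one-way trip -- tiny, but decisive when rival algorithms race
# at these timescales.
extra_km = 2
delay_ms = extra_km / km_per_ms
print(f"Extra one-way delay: {delay_ms * 1000:.0f} microseconds")
```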


Slavin explains how it is the needs of the algorithms that can determine real estate prices and urban architecture in New York, London, Tokyo or Frankfurt. Real estate 20 blocks away from the Financial District suddenly becomes more expensive than offices which appear to be better connected in human terms. Referring to Neal Stephenson, our professor said that cities are being optimized as motherboards.

(Read Mother Earth Mother Board by Neal Stephenson on Wired and, also on Wired, Netscapes: Tracing the Journey of a Single Bit by Andrew Blum. Which also brings us back to Adam Greenfield, who gave a great talk at the Web and Beyond conference in Amsterdam, showing how web design principles and discussions are becoming highly relevant in urbanism – the city as a motherboard or as a website, to be organized as such, and where the same concepts and algorithms can be used. Just think about the application of access and permissioning regimes in a world where the overwhelming majority of the citizens is perfectly traceable by their cell and smartphones. Which means that design becomes a very political matter.)

Algorithms determine what we hear on the radio and what movies we see – and also what we won’t hear or see. They claim to predict what we want to read or watch, organize traffic, investment decisions, research decisions, and determine which conversations or searches on the web point to terrorist plots and who should be monitored and/or arrested by the security services.

Sixty percent of all movies rented on Netflix are rented because that company recommended those movies to the individual customers. The algorithms Netflix uses even take into account the unreliability of the human brain (we are rather bad at consistently rating things). Epagogix helps studios to determine the box office potential of a script – and influences in that way what will actually be produced.
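A standard textbook trick for coping with our unreliable ratings – not Netflix’s actual formula, which is proprietary – is to shrink an item’s raw average toward the global mean until enough votes accumulate:

```python
def damped_rating(ratings, global_mean=3.5, damping=10):
    """Damped (Bayesian) average: with few ratings the score stays
    close to the global mean; with many ratings the item's own
    average dominates. A textbook trick, not Netflix's formula."""
    n = len(ratings)
    return (sum(ratings) + damping * global_mean) / (n + damping)

# Two five-star votes barely move the needle...
few = damped_rating([5, 5])
# ...but two hundred of them do.
many = damped_rating([5] * 200)
print(few, many)
```

The `damping` parameter says, in effect, how many “phantom” average votes an item starts out with before real ratings are trusted.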

There is an opacity at work here. Slavin showed a slide depicting the trajectory of the cleaning robot Roomba, which made it obvious that the logic applied here does not match a typical human way of cleaning a floor.

Crashing black boxes

One may think that an algorithm is just a formalization of human expert knowledge. After all, a content producer knows what has the biggest chance to succeed in terms of box office revenue, clicks, comments and publicity. Isn’t an algorithm just the automated application of that same knowledge? Not really. In fact, competing algorithms will be tweaked so as to produce better results, or they will tweak themselves. The algorithm often is a black box.

Genetic algorithms mimic the process of natural evolution using mutation, selection and inheritance. Tell the algorithm that a certain weight has to travel from A to B, and provide some elements such as wheels, and the algorithm will reinvent the car for you – but the way in which it works is beyond our human comprehension (it does not even realize from the start that the wheels go on the bottom; it just determines that later on in its iterations): “they don’t relate back to how we humans think.”
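To make the mutate-select-inherit loop concrete, here is a minimal toy genetic algorithm. The bit-string fitness task is my own illustration (evolving a car needs far richer encodings): the population evolves toward a string of all ones.

```python
import random

def evolve(target_len=20, pop_size=30, generations=200, seed=1):
    """Toy genetic algorithm: evolve bit strings toward all ones.
    Each generation applies selection, inheritance and mutation."""
    rng = random.Random(seed)
    fitness = sum  # fitness = number of 1-bits in the string
    pop = [[rng.randint(0, 1) for _ in range(target_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]    # selection: keep the fittest half
        children = []
        for p in parents:
            child = p[:]                 # inheritance: copy the parent
            i = rng.randrange(target_len)
            child[i] ^= 1                # mutation: flip one random bit
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", sum(best), "out of", len(best))
```

Nothing in the loop “understands” what a good bit string looks like; fitness pressure alone drags the population there, which is exactly why the intermediate states of richer genetic algorithms don’t relate back to how we humans think.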

Which is important, because think about it: algorithms determine which movies will be produced, and algorithms will provide a rating saying whether a movie is recommended for you. Where is the user in all this? Slavin: “maybe it’s not you.”

Maybe these algorithms smooth things out until it all regresses toward the mean, or maybe they cause panic when all of a sudden financial algorithms encounter something they weren’t supposed to encounter and start trading stocks at insane prices. This happened on May 6, 2010. Wikipedia on this Flash Crash:

On May 6, US stock markets opened down and trended down most of the day on worries about the debt crisis in Greece. At 2:42 pm, with the Dow Jones down more than 300 points for the day, the equity market began to fall rapidly, dropping more than 600 points in 5 minutes for an almost 1000 point loss on the day by 2:47 pm. Twenty minutes later, by 3:07 pm, the market had regained most of the 600 point drop.

Humans make errors, but those are human errors. Algorithms are far more difficult to “read”: they do their job well – most of the time – but it’s often impossible to make sense in a human, story-telling way of what they do.

There is no astronomy column in the newspaper; there is astrology. Because humans like to distort facts and figures and tell stories. That’s what they do in astrology, but also on Wall Street – because we want to make sense to ourselves, even if it means we have to distort the facts.

Now what does a flash crash look like in the entertainment industry? In criminal investigations? In the rating of influence on social networks? Maybe it happened already.

Social Capital

Some other presentations at LIFT are also relevant in this context. Algorithms are for instance increasingly being used to determine your personal ‘value’ – for instance your value as an ‘influencer’ on social media. Klout is a company which uses its algorithm to measure the size of a person’s network, the content created, and how other people interact with that content. PeerIndex is also working with social network data to determine your ‘social capital’.
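A toy version of such a social-capital score might look like this. The weights and the logarithmic squashing are invented for illustration; the actual models of Klout and PeerIndex are proprietary.

```python
import math

def influence_score(followers, posts, interactions):
    """Toy social-capital score combining network size, content
    produced and how others engage with it. Purely illustrative."""
    reach = math.log10(1 + followers)          # network size, diminishing returns
    activity = math.log10(1 + posts)           # content created
    engagement = math.log10(1 + interactions)  # how others interact with it
    raw = 0.5 * reach + 0.2 * activity + 0.3 * engagement
    return round(min(100, raw * 20), 1)        # squash onto a 0-100 scale

casual = influence_score(followers=200, posts=50, interactions=30)
celebrity = influence_score(followers=2_000_000, posts=5_000,
                            interactions=900_000)
print(casual, celebrity)
```

The logarithms are the interesting design choice: they make the score hard to inflate by raw volume alone, which is precisely why people who want to game such rankings target engagement rather than follower counts.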

This is not just a weird vanity thing. Some hotels will give people with a high Klout ranking VIP treatment, hoping for favorable comments on the networks. Social influence and capital can be used as an element in the financial rating of a person or a company.

This in turn will incite companies but also individuals to manage their online networks. At the LIFT11 conference, Azeem Azhar, founder of PeerIndex, gave a great presentation about online communities and reputation management, while social media expert Brian Solis talked about social currencies. Of course, people will try to game social ranking algorithms, just as they try to game search algorithms on the web.


Rapidly increasing computer and network power, an avalanche of digital data, self-learning networks and ambient intelligence could lead to what some call the Singularity: “a hypothetical event occurring when technological progress becomes so rapid and the growth of artificial intelligence is so great that the future after the singularity becomes qualitatively different and harder to predict” (Wikipedia).

Many scientists dispute the spectacular claims of Singularity thinkers such as Ray Kurzweil. There is also controversy about whether, if the Singularity were to take place, this would be good or bad for humanity. Slavin points out the opacity of the algorithms: they can be efficient, but they don’t tell stories, and we cannot tell a good story about the inner workings of black boxes. Already, algorithms are capable of taking into account our weird human imperfections and inconsistencies, while humans respond by trying to game the algorithms. In that sense we’re witnessing not one spectacular moment of transition to the Singularity, but a gradual shift in which algorithms become a crucial part of our endeavours and societies.

Rehearsing the day that we’ll program matter

So I’ve been monitoring that State of the World 2011 conversation on The WELL (with Bruce Sterling and Jon Lebkowsky), learning about design fiction (see also critical design) and watching an awesome video about Claytronics (programmable matter). Sterling, on his blog Beyond the Beyond: “Check out this bonkers Discovery Channel treatment of some Carnegie Mellon nano-visionary weirdness.”

The video made me think that what we currently do in Second Life and OpenSim is sometimes a kind of design fiction, making people used to programming virtual matter, preparing them for the day when we actually program ‘physical’ matter. When could it become feasible, when will it have an impact on the economy? I’ve no idea, but I asked on Quora (skipping for now the question whether we’ll be happy in such a world, and how long it will last before doomsday becomes reality).


On an even more general level: if programmable matter were to become mainstream, would that be an event we could call the ‘technological singularity’, or would it be part of it? Talking of which, I recently stumbled upon a fascinating conversation between Robin Hanson (George Mason University) and the economist Russ Roberts, contemplating a situation in which worldwide output would double every two weeks instead of every 15 years (the current situation). What would it mean for the return on capital, on labor? What about the environment, security and so many other crucial aspects? You’ll find the long audio discussion at the Library of Economics and Liberty.
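The gap between those two growth regimes is easy to underestimate; a quick computation with the figures from that conversation (a doubling every 15 years versus a doubling every two weeks):

```python
# Annual growth factor when output doubles every 15 years...
slow = 2 ** (1 / 15)
# ...versus when it doubles every two weeks (26 doublings a year).
fast = 2 ** 26

print(f"Today: about {100 * (slow - 1):.1f}% growth per year")
print(f"Hanson's scenario: a {fast:,}-fold increase per year")
```

A doubling every two weeks means a roughly 67-million-fold increase of output per year, which is why ordinary intuitions about capital and labor returns break down in that scenario.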

That dangerous Singularity

Talking about the future: I’ve been reading insightful posts by Cory Doctorow on BoingBoing and Annalee Newitz on io9 about the Technological Singularity, described by Wikipedia as

(…) a hypothetical event occurring when technological progress becomes so extremely rapid, due in most accounts to the technological creation of superhuman intelligences, that it makes the future after the singularity qualitatively different and harder to predict. It has been proposed that a technological singularity will occur in the 21st century via one or more possible technological advances.

Doctorow discusses Newitz’s post and agrees with her in identifying

a common flaw in futuristic prediction: assuming that technology will go far enough to benefit us, and then stop before it disrupts us. For a prime example from recent history, see the record industry’s correct belief that technology would advance to the point where we could have optical disc readers in every room, encouraging us to buy all our music again on CD; but their failure to understand that technology would continue to advance to the point where we could rip all those CDs and share the music on them using the Internet.

I think in the media industry most of us learned to see the disruptive effects of technological change. But as mentioned in my previous post, one of the common themes in near-future science fiction is security, which goes far beyond the upheaval in particular industries and has to do with our survival.

If the bold predictions by the singularity thinkers are even remotely true, the growth of our technological capabilities risks enabling even small groups or individuals to produce and use weapons of mass destruction. These weapons could very well be designer bioviruses (read Doctorow’s interview with Ray Kurzweil).

To prevent such a catastrophic event from happening, authorities could try to establish a Big Brother regime. They could try to heavily regulate the dissemination of technology and knowledge. Kurzweil does not believe that would be the right response:

In Huxley’s Brave New World, the rationale for the totalitarian system was that technology was too dangerous and needed to be controlled. But that just pushes technology underground where it becomes less stable. Regulation gives the edge of power to the irresponsible who won’t listen to the regulators anyway.

The way to put more stones on the defense side of the scale is to put more resources into defensive technologies, not create a totalitarian regime of Draconian control.

This of course acknowledges the danger in a rather optimistic way – science and technology will deliver the tools necessary to stop the ultimate evil use of that same science and technology.

We could expand this discussion to media in general. Our networks, our beloved internet and the way it allows us to spread and discuss ideas, also help those who are sufficiently alienated to dream of mass destruction. Even discussing how difficult it is to design a biovirus capable of erupting and spreading silently with long incubation periods could incite some disgruntled young man (for a number of reasons, it seems primarily young males have such destructive desires) to actually try it out. But then again, talking about it openly could make more people aware of the dangers ahead and stimulate ideas and policies to deal with them.

What is fascinating as well as frightening is that the blending of augmented reality, virtual reality and the physical reality is a very fundamental process. Often we think of augmented reality and virtual worlds as ‘constructed’ environments while the physical reality is more stable, more solid. In fact, what we call ‘physical reality’ changes all the time – the ancient insight of the Greek philosopher Heraclitus. We humans are working hard trying to control matter on an atomic and molecular scale, adding insights from biology and using our ever-expanding computing power – which one day could no longer be ‘our’ power.

Somehow the question in this ‘mixed realities’ world is whether we’re realizing the old dreams of ensuring the conditions for prosperity and happiness for all or whether the endgame of humanity is near.