Beware, the Singularity Comes Nearer: Kurzweil Teams Up with Google

((I had a rather troublesome evening posting this stuff about Ray Kurzweil's move to Google. I tried to use Storify for this, but that service and WordPress are a difficult combination. In general, I find WordPress rather difficult to use compared to other blogging platforms.))

Anyway, here we go. The author, futurist and inventor Ray Kurzweil is joining Google as a Director of Engineering. The news is a few days old by now, and I wrote a column for my newspaper about it. Here is some of the material I used – videos and blog posts – and also some material I did not use yet.

Kurzweil and Google already had a good relationship through their collaboration at Singularity University. Here is Kurzweil explaining the Singularity and the academic program:

Here is a trailer for The Singularity Is Near, featuring Ray Kurzweil:

Kurzweil is known for his bold predictions, like his view that brain uploads are nearer than we think.

So must we conclude that Google and all the folks at Singularity University share these same convictions? I don't think so. My guess is that companies and institutions (not only Google; NASA is also involved with Singularity University) are interested in the research linked to these visions – realizing that even if artificial intelligence and brain uploads take more time than Kurzweil expects, or fail outright, we'll learn lots of things that could be very useful in other contexts.

Also read these takes on Kurzweil joining Google:
- Jason Dorrier on the Singularity Hub
- Jon Mitchell at ReadWrite

And now for something slightly different: what about politics and social issues in all this? I posted about a remarkable analysis by the National Intelligence Council, indicating possible social and political tensions if only the rich were able to augment themselves.

There are also consequences for national sovereignty, as discussed in this Metahaven interview with Benjamin Bratton, who is about to publish the book The Stack: On Software and Sovereignty (The MIT Press, 2013).

The Rapture of the Nerds

I’ve just finished reading The Rapture of the Nerds, written by Charlie Stross and Cory Doctorow. It’s an often almost dream-like book about life in the era of the technological singularity. Wikipedia explains:

The technological singularity is the hypothetical future emergence of greater-than-human superintelligence through technological means. Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the occurrence of a technological singularity is seen as an intellectual event horizon, beyond which events cannot be predicted or understood.

Followers of Ray Kurzweil had better bring some sense of humor to this book, as it is a comic novel.
It is often downright hilarious, and especially those who have experienced life in user-generated virtual worlds such as Second Life or OpenSim will have a good time reading it. Charlie Stross and Cory Doctorow are very interesting authors who have done some serious thinking about the singularity, and it seems they are very sceptical about it (Stross maybe even more than Doctorow). Here is a wide-ranging interview with blogger, activist and author Doctorow:

Also visit Charlie's Diary, the blog of Charles Stross, and then try to comment there: you'll be directed to an interesting Google Group where Stross interacts with his readers. He seems to use the Google Group to avoid spam.
At The WELL you can participate in a forum discussion with both authors.

Kevin Slavin on the algorithms that govern our lives

What does our near future look like as computing and fast internet access become ubiquitous and ever more digital data become available in easy-to-use formats? Well, it seems our world is being transformed by algorithms, and at the LIFT11 conference in Geneva, Switzerland, Kevin Slavin presented some fascinating insights about this disruptive change.

I'll try to summarize his talk, adding some musings of my own, such as the parts about social capital rankings and the Singularity.

Kevin Slavin is the co-founder of Starling, a co-viewing platform for broadcast TV specializing in real-time engagement with live television. He also works at Area/Code, now Zynga New York, taking advantage “of today’s environment of pervasive technologies and overlapping media to create new kinds of gameplay.” He teaches Urban Computing at NYU’s Interactive Telecommunications Program, together with Adam Greenfield (author of Everyware: The dawning age of ubiquitous computing).

Stealth

Slavin loves Lower Manhattan, the Financial District. It’s a place built on information. Big cities had to learn to listen: during World War II, for instance, London had to use a new technology called radar to detect incoming enemy bombers. Radar would eventually lead to stealth airplanes, the so-called invisible, untraceable planes – although even a stealth plane can be located and shot down, as happened over Serbia.

Slavin is a master at explaining technologically complex things. For instance, the idea behind stealth is to break up the big thing – the bomber – into a lot of small things that look like birds. But what if you don’t look for birds, but for big electrical signals? If you can “see” such a signal while nothing appears on your radar, chances are you’re looking at an American bomber.

(Which reminds me: in this day and age, forget about privacy. If you want to hide, the only strategy is to send out lots of conflicting and even fake signals – I think futurist Michael Liebhold said that somewhere. His vision of the Geospatial Web: “Imagine as you walk through the world that you can see layers of information draped across the physical reality, or that you see the annotations that people have left at a place describing the attributes of that place!”

Just as with stealth, it only takes math, pattern recognition and the like to find out who or what hides behind all the bits of information one leaves behind.)

The same reasoning applies to other stealthy movements, like those on financial markets. Suppose you want to push a huge financial deal through the market without waking up the other players. The stealth logic is obvious: split it up into many small parts and make them appear to move randomly.
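That slicing logic can be sketched in a few lines of Python – a toy illustration of the idea, with made-up sizes and parameters, not a real execution algorithm:

```python
import random

def slice_order(total_shares, avg_child=200, jitter=0.5):
    """Split one big order into many small, randomly sized child orders.

    Toy version of 'stealth' execution: randomizing the child sizes
    makes the stream of small trades harder to recognize as one big
    move. Real execution algorithms also randomize timing and venues.
    """
    lo = int(avg_child * (1 - jitter))   # smallest child order
    hi = int(avg_child * (1 + jitter))   # largest child order
    children = []
    remaining = total_shares
    while remaining > 0:
        size = min(remaining, random.randint(lo, hi))
        children.append(size)
        remaining -= size
    return children

children = slice_order(100_000)
assert sum(children) == 100_000   # nothing lost, just disguised
```

And, just as Slavin says, the same math works in reverse: a pattern detector that knows to look for streams of suspiciously uniform small trades can reassemble the big order.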

But then again, it’s only math, which can be broken by other math. It’s a war of algorithms. As Wikipedia explains:

Starting from an initial state and initial input (perhaps null), the instructions describe a computation that, when executed, will proceed through a finite number of well-defined successive states, eventually producing “output” and terminating at a final ending state.

Slavin says that 70 percent of all trades on Wall Street are either an algorithm trying to be invisible or an algorithm trying to find out about such algorithms. That’s what high frequency trading is about: finding those things moving through the financial skies.

Who will be the winner? It’s not only about the best algorithm or the best computer, but also about the best network – we’re talking milliseconds here. If you’re sitting on top of a carrier hotel, where all the internet pipes of a big city surface, you have quite an advantage. The internet is not a perfectly distributed thing floating around out there; it has physical properties which, for instance, determine the price of real estate in cities.
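The millisecond arithmetic is simple physics: light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200 kilometers per millisecond, so every kilometer of cable between you and the exchange costs measurable time. A back-of-the-envelope sketch:

```python
SPEED_IN_FIBER_KM_PER_MS = 200  # light in fiber ≈ 2/3 of c ≈ 200,000 km/s

def round_trip_ms(distance_km):
    """Best-case round-trip time over a fiber link of the given length."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

# A trader 100 km from the exchange gives away a full millisecond
# per round trip to a rival sitting in the carrier hotel itself:
print(round_trip_ms(100))  # → 1.0
```

That millisecond is pure geography, before you even count the switches and servers in between – which is why proximity to the pipes translates into real estate prices.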

Motherboards

Slavin explains how it is the needs of the algorithms that can determine real estate prices and urban architecture in New York, London, Tokyo or Frankfurt. Real estate 20 blocks away from the Financial District suddenly becomes more expensive than offices which appear to be better connected in human terms. Referring to Neal Stephenson, Slavin said that cities are being optimized as motherboards.

(Read Mother Earth Mother Board by Neal Stephenson in Wired and, also in Wired, Netscapes: Tracing the Journey of a Single Bit by Andrew Blum. Which also brings us back to Adam Greenfield, who gave a great talk at the Web and Beyond conference in Amsterdam, showing how web design principles and discussions are becoming highly relevant in urbanism – the city as a motherboard or as a website, to be organized as such and with the same concepts and algorithms. Just think about applying access and permissioning regimes in a world where the overwhelming majority of citizens are perfectly traceable through their cell phones and smartphones. Which means that design becomes a very political matter.)

Algorithms determine what we hear on the radio and what movies we see – and also what we won’t hear or see. They claim to predict what we want to read or watch, they organize traffic, investment decisions and research decisions, and they determine which conversations or searches on the web point to terrorist plots and who should be monitored and/or arrested by the security services.

Sixty percent of all movies rented on Netflix are rented because the company recommended them to the individual customer. The algorithms Netflix uses even take into account the unreliability of the human brain (we are rather bad at rating things consistently). Epagogix helps studios determine the box office potential of a script – and in that way influences what will actually be produced.
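One textbook trick recommenders use to cope with our inconsistent ratings – a generic illustration, not Netflix’s actual algorithm – is to centre each user’s ratings on their own average before comparing them, so a grumpy rater’s 3 and an enthusiast’s 3 are not treated as the same signal:

```python
def mean_centered(ratings):
    """Subtract each user's own average from their ratings.

    ratings: dict user -> dict movie -> score (e.g. 1-5 stars).
    Returns deviations, so inconsistent personal rating scales
    cancel out: what matters is whether a user liked a movie
    *more than they usually do*.
    """
    centered = {}
    for user, scores in ratings.items():
        mean = sum(scores.values()) / len(scores)
        centered[user] = {m: s - mean for m, s in scores.items()}
    return centered

ratings = {
    "grumpy":   {"A": 3, "B": 2, "C": 1},   # a 3 is high praise here
    "cheerful": {"A": 5, "B": 4, "C": 3},   # a 3 is faint praise here
}
dev = mean_centered(ratings)
# Once each personal scale is removed, both users prefer A by the same margin:
assert dev["grumpy"]["A"] == dev["cheerful"]["A"] == 1.0
```

The users and scores above are invented; the point is only that the deviation, not the raw number, carries the signal.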

There is an opacity at work here. Slavin showed a slide depicting the trajectory of the cleaning robot Roomba, which made it obvious that the logic applied here does not match a typical human way of cleaning a floor.

Crashing black boxes

One may think that an algorithm is just a formalization of human expert knowledge. After all, a content producer knows what has the biggest chance of succeeding in terms of box office revenue, clicks, comments and publicity. Isn’t an algorithm just the automated application of that same knowledge? Not really. In practice, competing algorithms will be tweaked to produce better results, or they will tweak themselves. The algorithm often is a black box.

Genetic algorithms mimic the process of natural evolution through mutation, selection and inheritance. Tell the algorithm that a certain weight has to travel from A to B, provide some elements such as wheels, and the algorithm will reinvent the car for you – but the way it works is beyond our human comprehension (it does not even know from the start that the wheels go on the bottom; it only determines that later on in its iterations): “they don’t relate back to how we humans think.”
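The mutation–selection–inheritance loop itself is surprisingly small. Here is a minimal sketch that evolves a random string toward a target word (the target is my made-up stand-in for "a car that drives from A to B"); notice that nothing in the code "understands" the word, it just keeps what scores well:

```python
import random

TARGET = "motherboard"   # arbitrary goal for the illustration
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # Selection pressure: count characters already matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Mutation: each character has a small chance of flipping to a random letter.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=100, generations=500):
    # Start from random 'genomes'.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        # Selection: the fittest fifth survive as parents.
        parents = population[: pop_size // 5]
        # Inheritance: children are mutated copies of surviving parents.
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    population.sort(key=fitness, reverse=True)
    return population[0]

print(evolve())  # usually evolves all the way to "motherboard"
```

Watch the intermediate generations and you see Slavin’s point: the path to the answer is a cloud of near-gibberish that never resembles how a human would spell their way toward the word.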

Which is important, because think about it: algorithms determine which movies will be produced, and algorithms provide the rating saying whether a movie is recommended for you. Where is the user in all this? Slavin: “maybe it’s not you.”

Maybe these algorithms smooth things out until everything regresses toward the mean, or maybe they cause panic when financial algorithms suddenly encounter something they weren’t supposed to encounter and start trading stocks at insane prices. This happened on May 6, 2010. Wikipedia on the Flash Crash:

On May 6, US stock markets opened down and trended down most of the day on worries about the debt crisis in Greece. At 2:42 pm, with the Dow Jones down more than 300 points for the day, the equity market began to fall rapidly, dropping more than 600 points in 5 minutes for an almost 1000 point loss on the day by 2:47 pm. Twenty minutes later, by 3:07 pm, the market had regained most of the 600 point drop.

Humans make errors, but those are human errors. Algorithms are far more difficult to “read”: they do their job well – most of the time – but it’s often impossible to make sense of what they do in a human, story-telling way.

There is no astronomy column in the newspaper; there is an astrology column. Because humans like to distort facts and figures and tell stories. That’s what they do in astrology, but also on Wall Street – we want to make sense to ourselves, even if it means we have to distort the facts.

Now what does a flash crash look like in the entertainment industry? In criminal investigations? In the rating of influence on social networks? Maybe it happened already.

Social Capital

Some other presentations at LIFT are also relevant in this context. Algorithms are increasingly being used to determine your personal ‘value’ – for instance your value as an ‘influencer’ on social media. Klout uses its algorithm to measure the size of a person’s network, the content that person creates, and how other people interact with that content. PeerIndex also works with social network data to determine your ‘social capital’.
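Neither company publishes its formula, but the ingredients just listed – network size, content produced, interactions received – can be combined into a toy score like this (the weights and scaling are entirely my own invention, not Klout’s or PeerIndex’s actual algorithm):

```python
import math

def influence_score(followers, posts, interactions):
    """Toy 'social capital' score built from three ingredients.

    Logarithms keep any single ingredient from dominating: going
    from 10 to 100 followers counts as much as going from 100 to
    1,000. Weights are made up for illustration purposes only.
    """
    reach = math.log10(1 + followers)        # network size
    activity = math.log10(1 + posts)         # content created
    resonance = math.log10(1 + interactions) # how people respond
    raw = 0.3 * reach + 0.2 * activity + 0.5 * resonance
    return round(min(100, raw * 20), 1)      # squash onto a 0-100 scale

# A small account whose posts get lots of interaction can outrank
# a big account that nobody responds to:
busy_broadcaster = influence_score(followers=100_000, posts=5_000, interactions=10)
engaged_niche = influence_score(followers=900, posts=300, interactions=5_000)
assert engaged_niche > busy_broadcaster
```

Even this crude sketch shows why such rankings are gameable: anyone who knows (or guesses) the weights knows exactly which number to inflate.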

This is not just a weird vanity thing. Some hotels will give people with a high Klout ranking VIP treatment, hoping for favorable comments on the networks. Social influence and capital can be used as an element in the financial rating of a person or a company.

This in turn will push companies, but also individuals, to manage their online networks. At the LIFT11 conference, Azeem Azhar, founder of PeerIndex, gave a great presentation about online communities and reputation management, while social media expert Brian Solis talked about social currencies. Of course, people will try to game social ranking algorithms, just as they try to game search algorithms on the web.

Singularity

Rapidly increasing computer and network power, an avalanche of digital data, self-learning networks and ambient intelligence could lead to what some call the Singularity: “a hypothetical event occurring when technological progress becomes so rapid and the growth of artificial intelligence is so great that the future after the singularity becomes qualitatively different and harder to predict” (Wikipedia).

Many scientists dispute the spectacular claims of Singularity thinkers such as Ray Kurzweil. There is also controversy about whether the Singularity, if it took place, would be good or bad for humanity. Slavin points out the opacity of the algorithms: they can be efficient, but they don’t tell stories, and we cannot tell a good story about the inner workings of black boxes. Already, algorithms are capable of taking into account our weird human imperfections and inconsistencies, while humans respond by trying to game the algorithms. In that sense we’re witnessing not one spectacular moment of transition to the Singularity, but a gradual shift in which algorithms become a crucial part of our endeavours and societies.

Rehearsing the day that we’ll program matter

So I’ve been monitoring the State of the World 2011 conversation on The WELL (with Bruce Sterling and Jon Lebkowsky), learning about design fiction (see also critical design) and watching an awesome video about Claytronics (programmable matter). Sterling, on his blog Beyond the Beyond: “Check out this bonkers Discovery Channel treatment of some Carnegie Mellon nano-visionary weirdness.”

The video made me think that what we currently do in Second Life and OpenSim is sometimes a kind of design fiction: getting people used to programming virtual matter, preparing them for the day we actually program ‘physical’ matter. When could that become feasible, and when would it have an impact on the economy? I have no idea, but I asked on Quora (skipping for now the questions of whether we’d be happy in such a world, and how long it would last before doomsday becomes reality).

Singularity

On an even more general level: if programmable matter became mainstream, would that be an event we could call the ‘technological singularity’, or part of it? Talking of which, I recently stumbled upon a fascinating conversation between Robin Hanson (George Mason University) and the economist Russ Roberts, contemplating a situation in which worldwide output would double every two weeks instead of every 15 years (the current situation). What would it mean for the return on capital, on labor? What about the environment, security and so many other crucial aspects? You’ll find the long audio discussion at the Library of Economics and Liberty.

That dangerous Singularity

Talking about the future: I’ve been reading insightful posts by Cory Doctorow on BoingBoing and Annalee Newitz on io9 about the Technological Singularity, described by Wikipedia as

(…)a hypothetical event occurring when technological progress becomes so extremely rapid, due in most accounts to the technological creation of superhuman intelligences, that it makes the future after the singularity qualitatively different and harder to predict. It has been proposed that a technological singularity will occur in the 21st century via one or more possible technological advances.

Doctorow discusses Newitz’ post and agrees with her in identifying

a common flaw in futuristic prediction: assuming that technology will go far enough to benefit us, and then stop before it disrupts us. For a prime example from recent history, see the record industry’s correct belief that technology would advance to the point where we could have optical disc readers in every room, encouraging us to buy all our music again on CD; but their failure to understand that technology would continue to advance to the point where we could rip all those CDs and share the music on them using the Internet.

I think most of us in the media industry have learned to see the disruptive effects of technological change. But as mentioned in my previous post, one of the common themes in near-future science fiction is security, which goes far beyond upheaval in particular industries and touches on our very survival.

If the bold predictions of the singularity thinkers are even remotely true, the growth of our technological capabilities risks enabling even small groups or individuals to produce and use weapons of mass destruction. These weapons could very well be designer bioviruses (read Doctorow’s interview with Ray Kurzweil).

To prevent such a catastrophic event from happening, authorities could try to establish a Big Brother regime. They could try to heavily regulate the dissemination of technology and knowledge. Kurzweil does not believe that would be the right response:

In Huxley’s Brave New World, the rationale for the totalitarian system was that technology was too dangerous and needed to be controlled. But that just pushes technology underground where it becomes less stable. Regulation gives the edge of power to the irresponsible who won’t listen to the regulators anyway.

The way to put more stones on the defense side of the scale is to put more resources into defensive technologies, not create a totalitarian regime of Draconian control.

This of course acknowledges the danger in a rather optimistic way – science and technology will deliver the tools necessary to stop the ultimate evil use of that same science and technology.

We could expand this discussion to media in general. Our networks, our beloved internet and the way it allows us to spread and discuss ideas, also help those who are sufficiently alienated to dream of mass destruction. Even discussing how difficult it is to design a biovirus capable of erupting and spreading silently, with long incubation periods, could incite some disgruntled young man (for a number of reasons, it seems to be primarily young males who harbor such destructive desires) to actually try it out. But then again, talking about it openly could make more people aware of the dangers ahead and stimulate ideas and policies to deal with them.

What is fascinating as well as frightening is that the blending of augmented reality, virtual reality and the physical reality is a very fundamental process. Often we think of augmented reality and virtual worlds as ‘constructed’ environments while the physical reality is more stable, more solid. In fact, what we call ‘physical reality’ changes all the time – the ancient insight of the Greek philosopher Heraclitus. We humans are working hard trying to control matter on an atomic and molecular scale, adding insights from biology and using our ever-expanding computing power – which one day could no longer be ‘our’ power.

Somehow the question in this ‘mixed realities’ world is whether we’re realizing the old dreams of ensuring the conditions for prosperity and happiness for all or whether the endgame of humanity is near.