Quants aren’t like regular people. Neither are algorithms.

“Everyone there was a ‘Quant.’ No one cared what the underlying company represented by a given stock actually did. Apple or General Motors, CAT or IBM… Everything boiled down to a set of statistical observations that, when assembled into the proper algorithm, delivered a portfolio that beats the market.”

I just love the title of that conference: Alpha Generation – Using News Sentiment Data
via Diigo http://ftalphaville.ft.com/blog/2012/09/28/1183391/as-the-only-person-in-the-room-who-has-apparently-never-written-a-line-of-computer-code/

Finding reality while looking through code

Our newspaper site www.tijd.be has existed for 15 years now. In May 1996, someone with a 128 kbit/s connection was a rather fortunate citizen, while nowadays we consider 100 megabit normal (in Belgium anyway). By May 2026, speed will no longer be an issue. Access to networks, information streams and databases will be ubiquitous and instantaneous. The internet will be as self-evident as the air, and people will deal with news in very new ways.

Access to news will be ubiquitous. Today smartphones and tablets enable us to be “always on”, but in 2026 those devices will be as archaic as the Remington typewriter is today.

Companies such as Apple are working on wearable electronics – computing power which you will carry with you, embedded in your clothes, maybe even in your body.

As is often the case in technology, the new developments emerge in a military context: pilots of fighter jets have to analyze lots of data almost instantaneously while keeping their hands free, so they use head-up displays (HUDs). The next step is integrating this technology into luxury cars, and finally it will go mainstream.

Keyboards will be replaced by voice commands, touch and gestures. Screens will become projections which you can manipulate as you wish, in 2D or 3D. Information will increasingly become a layer projected onto physical reality. Or it will transform that reality into a virtual realm, where mixed-reality games will be played.

However, the future is not just about new gadgets. The nature of that ubiquitous news will change, and there’ll be some crucial discussions about how to organize our news streams.

Filters

Many apps, especially those produced by mainstream media, offer news selected and produced by their editorial staff. Apps such as Flipboard – not produced by mainstream media – change this. They transform the articles, videos and pictures selected by your online contacts into a glossy online magazine.

Some articles will come from The New York Times, others from The Wall Street Journal or TechCrunch. Algorithms will also track which articles you read and how long you spend on them. The news selection becomes personalized.

Facebook for instance shows you status updates from those people who are most crucial to you – or at least, that’s what the algorithm tries to detect.

Eli Pariser explains in his book The Filter Bubble how Google yields search results based not just on what you’re looking for, but also taking into account which computer you use, which browser, where you are, and dozens of other criteria. This means that your friends, looking for exactly the same topic on Google, will get different results. More generally, it will become very difficult to find anything on the web which is not personalized or customized.
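To make the idea concrete, here is a toy sketch of such a personalization layer: the same result list is re-ranked differently per user, depending on context signals. Every name, signal and weight below is my own invention, purely illustrative – real search personalization uses far more criteria.

```python
# Toy re-ranker: same results, different order per user context.
# All signals and weights are invented for illustration.

def personalized_rank(results, user):
    """Re-rank results by a crude affinity score for one user."""
    def score(result):
        s = result["base_relevance"]
        # Boost results matching the user's inferred location.
        if result.get("region") == user.get("region"):
            s += 0.3
        # Boost topics the user has clicked on before.
        s += 0.1 * user.get("topic_clicks", {}).get(result["topic"], 0)
        return s
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Egypt travel deals", "topic": "travel",
     "region": "US", "base_relevance": 0.5},
    {"title": "Egypt protests latest", "topic": "news",
     "region": "EU", "base_relevance": 0.5},
]
alice = {"region": "EU", "topic_clicks": {"news": 4}}
bob = {"region": "US", "topic_clicks": {"travel": 2}}

print(personalized_rank(results, alice)[0]["title"])  # Egypt protests latest
print(personalized_rank(results, bob)[0]["title"])    # Egypt travel deals
```

Two users searching the identical word “Egypt” get the news story and the travel deal in opposite order – which is exactly Pariser’s worry.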

This seems to be an advance over traditional mass media, which paternalistically suggested the same news for everyone because a priesthood of journalists decided what was essential, what was just ‘nice to know’ and what was unnecessary. But, as Pariser explains in his TED talk, the danger is that we lock ourselves inside information bubbles offering an environment of news we ‘like’ or consider interesting, but which is not necessarily the news we should know.

Transparency

So we have human gatekeepers and algorithmic ones. We know even less about those algorithms than we know about the human editors. We can form an idea of the news selection at The New York Times, but many people are not even aware that Google shows them different results depending on supposedly personal criteria, or of the selection Facebook makes among status updates.

The code used by those major corporations in order to filter what we see is politically important. If we want to keep an internet which confronts us with a diversity of viewpoints and with facts and stories which surprise and enlighten us, we need to be aware of those discussions about algorithms and filters. If we don’t pay attention to those codes, we’ll be programmed behind our backs.

Beneath all those human, network and algorithmic filters we find an ever-increasing stream of information. Tweets, status updates, blogposts by experts, witnesses and actors are flooding us, second by second.

I’m sure that in 2026 there will still be something we could call journalism: people who have a passion for certain subjects, making selections, verifying and commenting, providing context. The BBC already has a specialized desk analyzing images and texts distributed via social media: they check whether a specific picture could have been taken where and when people claim it was taken, to give but one example. Almost every day there are new curating tools for journalists and bloggers, facilitating the use of social media.

This ‘curating’ of the news is an activity with high added value. Whether those curators call themselves ‘journalists’, ‘bloggers’, ‘newspaper editors’ or ‘internet editors’ is not important: what matters is the quality of the curation and the never-ending discussion about these practices.

Everyone who has the energy and time to look at the raw information streams will be able to see what the curation added, omitted or changed. Not only will we be able to check this, many curation projects will invite us to suggest improvements or to participate directly (e.g. Quora).

Bloggers and journalists who clearly state their position on the issues they cover, even while promising to represent other viewpoints faithfully, will be considered more credible. Those who are open about their curation practice will gain an advantage. As Jeff Jarvis says: “transparency is the new objectivity.”

In May 2026 the editorial news of my newspaper will reach our community in many different ways. I very much doubt that the print newspaper will be as relevant as today, and people will smile when they look at screenshots of today’s site. But there will always be news and discussion, and people trying to cover what is essential in the information flood and who try to find reality through the algorithmic codes.

In preparing this post I learned a lot while having discussions on Twitter, Facebook, LinkedIn, The Well, Quora… In order to be transparent I posted about these preparations. You’ll find links to the original articles and videos, and to stuff I finally did not use for this post, but which could be interesting for other explorations.

Roland Legrand

Kevin Slavin about those algorithms that govern our lives

What will our near future look like as computing and fast internet access become ubiquitous and ever more digital data become available in easy-to-use formats? Well, it seems our world is being transformed by algorithms, and at the LIFT11 conference in Geneva, Switzerland, Kevin Slavin presented some fascinating insights about this disruptive change.

I’ll try to summarize his talk, adding some musings of my own, such as the remarks about social capital rankings and the Singularity.

Kevin Slavin is the co-founder of Starling, a co-viewing platform for broadcast TV, specializing in real-time engagement with live television. He also works at Area/Code, now Zynga New York, taking advantage “of today’s environment of pervasive technologies and overlapping media to create new kinds of gameplay.” He teaches Urban Computing at NYU’s Interactive Telecommunications Program, together with Adam Greenfield (author of Everyware: The dawning age of ubiquitous computing).

Stealth

Slavin loves Lower Manhattan, the Financial District. It’s a place built on information. Big cities had to learn to listen: during World War II, for instance, London used a new technology called radar to detect incoming enemy bombers. That in turn led to Stealth airplanes, the so-called invisible, untraceable planes – although even a Stealth plane can be located and shot down, as happened over Serbia.

Slavin is a master at explaining technologically complex things. For instance, the idea behind Stealth is to break up the big thing – the bomber – into a lot of small things which look like birds. But what if you don’t look for birds, but for big electrical signals? If you can “see” such a signal while nothing appears on your radar, chances are you’re looking at an American bomber.

(Which reminds me: in this day and age, forget about privacy. If you want to hide, the only strategy is to send out lots of conflicting and possibly fake signals – I think futurist Michael Liebhold said that somewhere. His vision of the Geospatial Web: “Imagine as you walk through the world that you can see layers of information draped across the physical reality, or that you see the annotations that people have left at a place describing the attributes of that place!”

Just as with Stealth, it only takes math, pattern recognition and the like to find out who or what hides behind all the bits of information one leaves behind.)

The same reasoning applies to other stealthy movements, like those on financial markets. Suppose you want to push a huge financial deal through the market without waking up the other players. The stealth logic is obvious: split it up into many small parts and make them appear to move randomly.
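That slicing logic can be sketched in a few lines – a deliberately naive illustration, not a real execution algorithm; the sizes, the timing jitter and the function names are all my own invention:

```python
# Naive sketch of order slicing: split a large parent order into
# small, randomly sized and randomly delayed child orders so the
# activity is harder to spot. Purely illustrative parameters.
import random

def slice_order(total_shares, max_child=500, max_delay_s=30, seed=42):
    """Split a parent order into (shares, delay_seconds) child orders."""
    rng = random.Random(seed)
    children = []
    remaining = total_shares
    while remaining > 0:
        shares = min(remaining, rng.randint(100, max_child))
        delay = rng.uniform(0, max_delay_s)  # jitter: avoid a regular rhythm
        children.append((shares, round(delay, 1)))
        remaining -= shares
    return children

children = slice_order(10_000)
print(len(children), "child orders,",
      "largest:", max(s for s, _ in children), "shares")
```

The randomized sizes and delays are the “appear to move randomly” part; the sum of the children always equals the parent order, so nothing is lost in the slicing.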

But then again, it’s only math, which can be broken by other math. It’s a war of algorithms. As Wikipedia explains:

Starting from an initial state and initial input (perhaps null), the instructions describe a computation that, when executed, will proceed through a finite number of well-defined successive states, eventually producing “output” and terminating at a final ending state.
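That abstract definition becomes concrete with a classic example, Euclid’s algorithm for the greatest common divisor: an initial state, a finite number of well-defined successive states, an output, and a guaranteed final state:

```python
# Euclid's algorithm as a minimal illustration of the definition above.
def gcd(a, b):
    # Initial state: the pair (a, b). Each loop iteration is a
    # well-defined transition to a strictly smaller second value,
    # so the computation must terminate.
    while b != 0:
        a, b = b, a % b
    return a  # final state: the output

print(gcd(1071, 462))  # 21
```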

Slavin says that 70 percent of all trades on Wall Street are either an algorithm trying to be invisible or an algorithm trying to find out about such algorithms. That’s what high frequency trading is about: finding those things moving through the financial skies.

Who will be the winner? It’s not only about the best algorithm or the best computer, but also about the best network – we’re talking milliseconds here. If you’re sitting on top of a carrier hotel where all the internet pipes of a big city surface, you have a huge advantage. The internet is not a perfectly distributed thing floating out there; it has physical properties which, for instance, determine the price of real estate in cities.

Motherboards

Slavin explains how the needs of the algorithms can determine real estate prices and urban architecture in New York, London, Tokyo or Frankfurt. Real estate 20 blocks away from the Financial District suddenly becomes more expensive than offices which appear to be better connected in human terms. Referring to Neal Stephenson, Slavin said that cities are being optimized as motherboards.

(Read Mother Earth Mother Board by Neal Stephenson on Wired and, also on Wired, Netscapes: Tracing the Journey of a Single Bit by Andrew Blum. Which also brings us back to Adam Greenfield, who gave a great talk at the Web and Beyond conference in Amsterdam, showing how web design principles and discussions are becoming highly relevant to urbanism – the city as a motherboard or as a website, to be organized as such, where the same concepts and algorithms can be used. Just think about the application of access and permissioning regimes in a world where the overwhelming majority of citizens are perfectly traceable through their cell phones and smartphones. Which means that design becomes a very political matter.)

Algorithms determine what we hear on the radio and what movies we see – and also what we won’t hear or see. They claim to predict what we want to read or watch, organize traffic, investment decisions, research decisions, and determine which conversations or searches on the web point to terrorist plots and who should be monitored and/or arrested by the security services.

Sixty percent of all movies rented on Netflix are rented because the company recommended them to the individual customers. The algorithms Netflix uses even take into account the unreliability of the human brain (we are rather bad at rating things consistently). Epagogix helps studios determine the box office potential of a script – and in that way influences what will actually be produced.

There is an opacity at work here. Slavin showed a slide depicting the trajectory of the cleaning robot Roomba, which made it obvious that the logic applied here does not match a typical human way of cleaning a floor.

Crashing black boxes

One might think that an algorithm is just a formalization of human expert knowledge. After all, a content producer knows what has the biggest chance of success in terms of box office revenue, clicks, comments and publicity. Isn’t an algorithm just the automated application of that same knowledge? Not really. In fact, competing algorithms will be tweaked to produce better results, or they will tweak themselves. The algorithm often is a black box.

Genetic algorithms seem to mimic the process of natural evolution using mutation, selection and inheritance. Tell the algorithm that a certain weight has to travel from A to B, provide some elements such as wheels, and the algorithm will reinvent the car for you – but the way in which it works is beyond our human comprehension (it does not even realize from the start that the wheels go on the bottom; it only determines that later in its iterations): “they don’t relate back to how we humans think.”
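A minimal toy version of such a genetic algorithm shows the three mechanisms – selection, inheritance (crossover) and mutation – evolving bitstrings toward all ones (the classic “OneMax” exercise) rather than cars. Every parameter below is an arbitrary choice of mine:

```python
# Toy genetic algorithm (OneMax): evolve bitstrings toward all ones.
# Illustrates selection, inheritance and mutation; not a real design tool.
import random

rng = random.Random(0)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(genome):
    return sum(genome)  # count of ones: higher is better

population = [[rng.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # Selection: the fitter half survives as parents (elitism).
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    # Inheritance + mutation: children mix two parents, rare bits flip.
    children = []
    while len(children) < POP - len(parents):
        a, b = rng.sample(parents, 2)
        cut = rng.randrange(1, LENGTH)
        child = a[:cut] + b[cut:]          # one-point crossover
        if rng.random() < 0.3:             # occasional mutation
            i = rng.randrange(LENGTH)
            child[i] ^= 1
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best), "/", LENGTH)
```

The solution emerges from iteration rather than from any human-readable plan – which is exactly the opacity Slavin points at.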

Which is important, because think about it: algorithms determine which movies will be produced, and algorithms will provide a rating saying whether a movie is recommended for you. Where is the user in all this? Slavin: “maybe it’s not you.”

Maybe these algorithms smooth things out until it all regresses toward the mean, or maybe they cause panic when financial algorithms suddenly encounter something they weren’t supposed to encounter and start trading stocks at insane prices. This happened on May 6, 2010. Wikipedia on this Flash Crash:

On May 6, US stock markets opened down and trended down most of the day on worries about the debt crisis in Greece. At 2:42 pm, with the Dow Jones down more than 300 points for the day, the equity market began to fall rapidly, dropping more than 600 points in 5 minutes for an almost 1000 point loss on the day by 2:47 pm. Twenty minutes later, by 3:07 pm, the market had regained most of the 600 point drop.

Humans make errors, but those are human errors. Algorithms are far more difficult to “read”: they do their job well – most of the time – but it’s often impossible to make sense of what they do in a human, story-telling way.

There is no astronomy column in the newspaper; there is astrology. Because humans like to distort facts and figures and tell stories. That’s what they do in astrology, but also on Wall Street – because we want to make sense to ourselves, even if it means we have to distort the facts.

Now what does a flash crash look like in the entertainment industry? In criminal investigations? In the rating of influence on social networks? Maybe it happened already.

Social Capital

Some other presentations at LIFT are also relevant in this context. Algorithms are for instance increasingly being used to determine your personal ‘value’ – for instance your value as an ‘influencer’ on social media. Klout is a company which uses its algorithm to measure the size of a person’s network, the content created, and how other people interact with that content. PeerIndex is also working with social network data to determine your ‘social capital’.
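As an illustration of the principle only – the formula and weights below are invented, since Klout’s and PeerIndex’s actual scoring was proprietary – a naive “social capital” score could combine those three ingredients like this:

```python
# Invented 'social capital' score mixing the three ingredients
# mentioned above: network size, content created, and interaction.
# Weights are arbitrary; this is not any real company's formula.
import math

def influence_score(followers, posts, interactions):
    """Rough 0-100 score; the log damps raw network size."""
    reach = math.log10(1 + followers) * 10           # size of the network
    activity = min(posts, 50) * 0.4                  # content created (capped)
    engagement = 20 * interactions / max(posts, 1)   # reactions per post
    return round(min(reach + activity + engagement, 100), 1)

print(influence_score(followers=5_000, posts=40, interactions=30))
print(influence_score(followers=50, posts=5, interactions=2))
```

Even this toy version shows why such scores are gameable: posting more, or soliciting reactions, mechanically raises the number – a point that returns below.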

This is not just a weird vanity thing. Some hotels will give people with a high Klout ranking VIP treatment, hoping for favorable comments on the networks. Social influence and capital can be used as an element in the financial rating of a person or a company.

This in turn will incite companies but also individuals to manage their online networks. At the LIFT11 conference, Azeem Azhar, founder of PeerIndex, gave a great presentation about online communities and reputation management, while social media expert Brian Solis talked about social currencies. Of course, people will try to game social ranking algorithms, just as they try to game search algorithms on the web.

Singularity

Rapidly increasing computer and network power, an avalanche of digital data and self-learning networks, ambient intelligence could lead to what some call the Singularity: “a hypothetical event occurring when technological progress becomes so rapid and the growth of artificial intelligence is so great that the future after the singularity becomes qualitatively different and harder to predict” (Wikipedia).

Many scientists dispute the spectacular claims of Singularity thinkers such as Ray Kurzweil. There is also controversy about whether, if the Singularity were to take place, it would be good or bad for humanity. Slavin points out the opacity of the algorithms: they can be efficient, but they don’t tell stories, and we cannot tell a good story about the inner workings of black boxes. Already today, algorithms are capable of taking into account our weird human imperfections and inconsistencies, while humans respond by trying to game the algorithms. In that sense we’re witnessing not one spectacular moment of transition to the Singularity, but a gradual shift in which algorithms become a crucial part of our endeavours and societies.