Kitely asks for some help to get virtual worlds on the web

I’ve been very busy covering the European debt crisis, but now it’s time for something completely different: the future of virtual worlds. At MetaMeets in Amsterdam, almost two weeks ago now, I interviewed some very inspiring people. I’m working on a story about virtual worlds for a mainstream audience, but I’ll already publish some material on this blog.

Ilan Tochner is the founder of Kitely, a company working on a project that lets users create their own OpenSim-based virtual world as easily as posting a YouTube video. Just give it a try – it actually works!

Tochner realizes, however, that guests visiting those virtual worlds should also have a very smooth, basically one-click experience when entering those environments. He claims it’s possible to build web-based virtual world viewers – all he needs is a little help from his virtual world friends.

I hope he is right about this – offering web-based access would help tremendously. People are used to the web; they want frictionless, instantaneous gratification. Every extra click, every hurdle they have to clear, dramatically decreases your number of visitors.

Remember: it’s not about convincing those who are already willing to experiment with virtual worlds. It’s about offering an activity that interests a specific audience – an activity which probably has nothing to do with virtual worlds as such; virtual worlds would just be a cool platform for it. So getting people into your venue should be totally non-geeky and straightforward.


MetaMeets Day 2: going beyond virtual worlds, machinima, avatars…

Beyond the Beyond is the name of Bruce Sterling’s famous blog on Wired. It’s a habit of sci-fi people to think beyond what the mainstream anticipates, and ultimately to think about how ‘change’ or ‘beyond’ itself takes on new meanings.

It seems virtual world people also love to think ‘beyond’: beyond virtual worlds, avatars, machinima. That, at least, is the conviction I came away with after attending the MetaMeets conference about virtual worlds, augmented reality and video/machinima in Amsterdam. To illustrate this, here is a very quick overview of the second and last day of the conference.

Heidi Foster is involved in the management of a new breedable pet in Second Life, the Meeroos, which has a large customer base. Meeroos are mythical animals, Foster explained, but they are mostly very cute and they ask to be picked up. To be precise: the project launched on May 21 and there are now 22,000 players and 250,000 Meeroos in Second Life. It’s conceivable that the Meeroos will invade the rest of the Metaverse by spreading to other virtual worlds such as OpenSim. In the discussion it was suggested to expand to mobile devices as well. That would be awesome, I think: develop and launch in Second Life, spread throughout other virtual places and end up on smartphones and tablets.

Not a potential but a real move to mobile devices was presented by Timo Mank, an artist-curator at the Archipel Medialab. In 1999 he co-founded Art Hotel Dit Eiland (This Island) in the Dutch village of Hollum on Ameland. The Medialab initiates artist-in-residence programs focused on cross-reality projects. Many artists from PARK 4DTV worked on Ameland creating content for web-based virtual islands. Until recently Timo was curating Playground Ameland Secondlife.

Early this year the Foundation Archipel Ameland shifted its focus from yearly media art interventions to transmedia storytelling for the iPad. The project is called TMSP TV and it connects Twitter with guests at the TMSP studio in Diabolus Artspace in Second Life. The LiveLab uses the daily goings-on in the World Heritage Wadden Sea and brings this material as a live feed to virtual space, where it’s playfully reevaluated, mixed and redistilled by guests and performers.

Toni Alatalo is the CTO of a small games company, Playsign, and the current lead architect of the open source realXtend platform. He explained that not every virtual world needs avatars. Imagine a virtual environment that lets you explore the human body by traveling through the veins, or just think of Google Earth. Technologically speaking, avatars do not need to be part of the core code of the virtual environment; instead the code could be modular. That could indeed lead us to virtual worlds without avatars, or to avatars in environments which are not perceived as classical virtual worlds (think augmented reality, smartphones).
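
To make that modularity concrete, here is a minimal sketch – illustrative TypeScript of my own, not realXtend code – of an entity-component design in which the avatar is just an optional plug-in rather than part of the engine core.

```typescript
// Illustrative sketch: an engine core that knows nothing about avatars.
interface Component {
  update(dt: number): void;
}

class Entity {
  private components = new Map<string, Component>();

  addComponent(name: string, component: Component): void {
    this.components.set(name, component);
  }

  update(dt: number): void {
    for (const component of this.components.values()) component.update(dt);
  }
}

// The avatar is just one possible component among many.
class AvatarComponent implements Component {
  constructor(private displayName: string) {}

  update(dt: number): void {
    // Animate the mesh, sync position with the server, etc.
    // (left empty in this sketch; displayName and dt would be used here)
  }
}

// A fly-through of the human body or a Google Earth-style globe can use the
// same engine without any avatar attached...
const bloodVesselTour = new Entity();

// ...while a social world simply plugs the avatar component in.
const socialWorldVisitor = new Entity();
socialWorldVisitor.addComponent("avatar", new AvatarComponent("Visitor"));

bloodVesselTour.update(0.016);
socialWorldVisitor.update(0.016);
```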

MetaMeets audience looking at 3D video

Of course there were things which seemed very familiar to seasoned users of Second Life or OpenSim. Melanie Thielker (Avination) talked about roleplaying, commenting on a video depicting the awesomeness of user-generated content. ‘Content’ is an awful word used by publishers when they mean all kinds of stuff such as texts, videos, infographics and images. In this case it refers to impressive builds made by users of the virtual worlds, but Melanie rightly emphasized that the most important content items are the storylines people create, the characters they build, the backstories they provide, the communities they form. They write their own books in a very experimental, fluid, ever-changing setting.

But even this well-known practice is going somewhat ‘beyond’, as it takes place in Melanie’s own virtual world, independently of Second Life. Melanie is an entrepreneur in the Metaverse.

Karen Wheatley is the director of the Jewell Theatre in Second Life. She goes beyond theatres and beyond some existing Second Life subcultures. She runs a theatre in Gor. The Gorean subculture is known for its traditions (based on the novels of John Norman); it is fond of a warrior ethos and (mostly) female slaves, and dislikes furries (avatars with animal-like features) and kid avatars. None of which prevents Wheatley from organizing her Shakespearean performances in Gor, open to all avatars. She gets sponsorship, so we could consider her an entrepreneur too.

Draxtor Despres goes beyond in various ways. In his video reportages he combines ‘real’ footage with video shot in virtual environments. He presented his newest big project: Login2Life, a documentary for the German public broadcaster ZDF which will come out in mid-July. It goes beyond Second Life as it also shows World of Warcraft.

Stephen M. Zapytowski, Professor of Design and Technology at the School of Theatre and Dance of Kent State University, presented another example of crossing boundaries: April 2011 saw the premiere of his avatar ghost for Kent State’s production of Hamlet. The ghost played “live” on stage with real-life actors in a blend of virtual and real worlds. Which of course made the audience dream of avatars and humans playing nicely together in augmented reality (please stay calm: we’re not there yet).

Talking about playing together: that’s what the music panel with JooZz & Al Hofmann was about. They want even more sophisticated means for people from all over the planet to jam together in perfect synchrony.

Chantal Gerards showed us a few machinima videos, and I sensed a bit of frustration. In one of her creations she used music by the director David Lynch. Unfortunately, he did not even want to watch the video, as ‘he does not like machinima’.

Chantal said: “I have a scoop for you today. I stop making machinima”, adding a bit mysteriously that she will move ‘beyond machinima’. Her advice goes beyond machinima as well: create together, with all kinds of people and platforms, move beyond the platform so that what you create gains wider relevance.

Read also my write-up of the first day: “We are at the beginning”.


MetaMeets: “We are at the beginning”

“We are not at the end of the road but at the beginning.” That was how Tim Gorree, IT Architect, Web Technologies at Nokia, concluded the first day of MetaMeets in Amsterdam.

The conference was opened by another Nokia person, the director of organization development Ian Gee. He told us that the concept of “change” is itself changing. Television shows deal with spectacular transformations of individuals and help define how people look at change these days. He also challenged the audience to think out of the box; to give one example: why don’t we stop working at 40 and come back at 60? He taught me a new word: metanoia, change beyond what can be anticipated and predicted.

Noah Falstein showed us how difficult it is to make predictions about change, commenting on the bewildering variety of life forms in the early stages of evolution, then showing us Habitat, the online role-playing game developed by Lucasfilm Games and made available as a beta test in 1986 by Quantum Link, an online service for the Commodore 64 computer and the corporate progenitor of America Online.

Falstein was among the first ten employees at Lucasfilm Games (now LucasArts Entertainment), The 3DO Company and Dreamworks Interactive. In his latest venture he has co-founded a start-up company, where he is helping to create software enabling speedy massively multiplayer game capabilities across both mobile and web-based platforms. He is a strong believer in presence and synchronous interaction.

As those topics demonstrate, MetaMeets is by no means a Second Life-centered conference. Justin Clark-Casey demonstrated OpenSim, and Ilan Tochner showed us Kitely, a venture which enables people to launch OpenSim-based virtual worlds “on demand” very quickly (more about his ideas on ‘virtual worlds as apps’ and easy access for the end user tomorrow).

As usual at these conferences about virtual environments, education is one of the most convincing applications, as various specialists demonstrated again today. Lars Dijkema and Mathijs Hamers from Elde College presented a project for an ecologically sustainable school, which they visualized in 3D in a virtual environment (FrancoGrid). A major reason for building in a virtual environment? The social interaction and feedback (their institution, Elde College, also encouraged them to use social media in order to get help and feedback from outside).

Social interaction in virtual environments is not always self-evident and can be very different from what teachers and students are used to in traditional settings. Jolanda Verleg from Insperion thinks up didactic concepts for schools and companies and helps them visualize those concepts in virtual environments. She admits that some people are “dysvirtual” and will “never get it”, but points out that virtual training exists alongside the more traditional approaches.

Ineke Verheul from GameOn/Surfnet/Virtuality illustrated the educational importance of roleplay in virtual settings with the Chatterdale project, a virtual language-learning village where students had to investigate a bomb threat.

One of the impressive aspects of all these presentations is how virtual environments seem to inspire people to become entrepreneurs. This was very obviously the case for yet another presenter, Melanie Thielker, who is the founder of Avination and an OpenSim core developer with a special interest in roleplay combat systems.

There are exceptions, however. Lee Quick is the developer of the Kirstens Viewer, one of the longest-established third-party viewers (user interfaces) for Second Life. His business model? Just a passion for photography and images. Third-party viewers are not really competition for the official viewer, he explained; they just offer different tools for different jobs. The Kirstens Viewer boasts 3D viewing, night vision, color filters and extra camera viewpoints – which makes it interesting for machinima makers.

But maybe, just maybe, virtual environments such as Second Life or OpenSim are not the endpoint of this technological evolution. What about augmented reality – putting layers of digital information on top of physical reality? Meet Fred van Rijswijk, owner of C2K, a provider of “high-end Layar solutions” (Layar is a mobile browser for augmented reality).

The audience went wild, blending the virtual and the physical into an augmented reality. Just imagine (they’re really good at imagining things, those virtual world types) that avatars could “sit” in the conference room, visible through smartphones or other devices… Or maybe the devices should retreat into the background, offering us immediate access to an augmented reality…

Tim Gorree said Microsoft is developing hyper-realistic avatars and, of course, developed the Kinect. Why not use avatars as identity carriers, dealing with the typical problem of lost passwords?

“Count up all the virtual worlds user hours, gaming user hours, chances are all this is more important than the web,” Tim continued. “Avatars have been used to validate transactions for hundreds of years – think stamps, coins for example. These days there are billions of (virtual) avatars out there, why not use them to change society?”

 


HTML5 becomes a crucial tool for publishers

In the previous post I mentioned HTML5 – it could be an important element of a solution for running Second Life and OpenSim viewers on the web. It would be combined with WebGL (Web-based Graphics Library), which extends the JavaScript programming language so it can generate interactive 3D graphics within any compatible web browser (see Wikipedia).
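
To give a sense of what that browser-native 3D hook looks like, here is a minimal, illustrative sketch – not part of any actual viewer port – of the foundation a web-based viewer would build its rendering on: obtaining a WebGL context from an HTML5 canvas and clearing it.

```typescript
// Minimal WebGL bootstrap: create a canvas, request a 3D context and clear it.
// A real viewer would load meshes, compile shaders and stream world data on top.
const canvas = document.createElement("canvas");
canvas.width = 800;
canvas.height = 600;
document.body.appendChild(canvas);

const gl = canvas.getContext("webgl");
if (!gl) {
  throw new Error("This browser does not support WebGL");
}

// The "hello world" of 3D on the web: clear the scene to a solid colour.
gl.clearColor(0.1, 0.1, 0.2, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
```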

There’s yet another context in which HTML5 is rather crucial: that of the creation of apps for mobile devices.

I posted about this on PBS MediaShift – our newspaper wants to follow the example of the Financial Times, launching a web-based application for smartphones and tablet computers written in HTML5 — allowing it to bypass Apple’s App Store and Google’s Android Market, as well as other distributors. If you’re interested, you can find my post and a video interview on MediaShift.

Read also MG Siegler at TechCrunch about Project Spartan: Facebook’s Hush-Hush Plan To Take On Apple On Their Own Turf: iOS.


MetaMeets: exploring virtual worlds and augmented reality in Amsterdam

Tomorrow I’ll be in Amsterdam for the MetaMeets 3D Internet & Virtual Worlds conference. What do I hope to learn?

In my media practice I run daily chat sessions for my newspaper’s site and blog, using CoverItLive. I embed that tool in our site; it’s easy to use and rather sophisticated, allowing for moderation and the integration of all kinds of media types. It’s text-chat based, so no fancy 3D avatar stuff in virtual settings.

I can imagine that some chat sessions could benefit from a virtual setting. It would facilitate deeper discussions, longer attention spans, serendipitous encounters. But at the same time it’s crucial that people can enter such an environment with as little friction as possible. That means no downloads, getting an avatar must be fun and really easy, and no steep learning curve. In other words, browser-based virtual environments.

In Amsterdam I’ll attend a presentation by Ilan Tochner, the CEO of Kitely, about Virtual Worlds on Demand. Kitely makes it very easy to launch your own OpenSim-based virtual world. However, I think those visiting your world will have to download a Second Life-compatible viewer – which means it’s not really what I’m looking for. Tochner realizes the importance of this issue. He told Hypergrid Business that the Second Life and OpenSim viewer can be ported to HTML5 and WebGL in a matter of months – and he’s looking for people to help accomplish that.

Even if we get browser-based virtual settings, I’m not convinced the mainstream audience will embrace these possibilities. For quite some time I’ve been hearing that younger generations are so used to interacting in virtual gaming environments, using avatars, that doing so in a professional context will be a logical step for them. I really think that’s way too optimistic.

So what could be the future? Maybe augmented reality? That’s not a virtual world such as Second Life or World of Warcraft, but a way to put digital information on top of physical reality (and one of the possibilities might be blending the virtual and the physical). In Amsterdam there’ll be a presentation about the mobile augmented reality browser Layar. I have Layar on my iPhone, and there are some layers I really like, such as streetARt and of course the Wikipedia layer. Looking at how my colleagues and friends use their smartphones, I must admit there seems to be little traction for augmented reality as it exists now – essentially staring in a funny way through your smartphone camera and ending up using the 2D map. But, being a geek and loving sci-fi, I hope the Layar enthusiasts at the conference will convince me.

I use Second Life as a place to meet very creative innovators, and I try out some very simple experiments such as 3D mindmaps. In Amsterdam one of the discussions will deal with immersive 3D worlds as an innovative platform for co-creation.

Other aspects which interest me are community management and making videos (machinima) and documentaries in virtual environments, or broadcasting from within those environments. On that note, MetaMeets will be combined with the MaMachinima International Festival (MMIF).

More about all this in the next few days, and if you have questions, don’t hesitate to ask them here so I can try to get some answers!

Follow me on Twitter @rolandlegrand and for more extensive coverage of the conference on @mixed_realities

 


Researching the philosophers of Silicon Valley, using mindmaps in 2D and 3D

What are the philosophical and cultural underpinnings of Silicon Valley? I’m trying to find out by reading and watching thinkers, historians and sci-fi literature, and by visiting virtual environments.

I’m trying to put some structure in my work using a mindmap, partially based on the book From Counterculture to Cyberculture (by Fred Turner).

In Second Life I’m putting up media panels with websites or videos illustrating this – it helps me generate new ideas by shifting those things around and walking around there, or by looking at the panels with other avatars and commenting on them.

My mindmap-installation in Second Life


The “making of” of “the making of”

I posted on PBS MediaShift about my experiences writing “Finding reality while looking through code” – about asking around on various networks and posting a ‘making of’ while I was preparing the post. So the piece on MediaShift is uber-meta: it’s “the making of” of “the making of”.

Posting on MediaShift helps me to ask myself tough questions about my practice as a journalist and blogger, and the folks at MediaShift help me a lot to tell my stories. So if you’re interested, have a look at “How to Use Social Tools to Curate, Research and Expand Sources for a Story” and let me know what you think!

Roland Legrand


Finding reality while looking through code

Our newspaper site www.tijd.be has existed for 15 years now. In May 1996 someone with a 128 kbit/s connection was a rather fortunate citizen, while nowadays we consider 100 megabit/s normal (in Belgium anyway). In May 2026 speed will no longer be an issue. Access to networks, information streams and databases will be ubiquitous and instantaneous. The internet will be as self-evident as air, and people will deal with news in very new ways.

Access to news will be ubiquitous. Today smartphones and tablets enable us to be “always on”, but in 2026 those devices will look as archaic as the Remington typewriter does today.

Companies such as Apple are licensing wearable electronics – computer power which you will carry with you, embedded in your clothes, maybe even in your body.

As is often the case in technology, the new developments emerge in a military context: pilots of fighter jets have to analyze lots of data almost instantaneously while keeping their hands free, so they use head-up displays (HUDs). The next step is integrating this technology into luxury cars, and finally it’ll go mainstream.

Keyboards will be replaced by voice commands, touch and gestures. Screens will become projections which you can manipulate as you wish, in 2D or 3D. Information will increasingly become a layer projected onto physical reality. Or it will transform that reality into a virtual realm where mixed-reality games are played.

However, the future is not just about new gadgets. The nature of that ubiquitous news will change, and there’ll be some crucial discussions about how to organize our news streams.

Filters

Many apps, especially those produced by mainstream media, offer news selected and produced by their editorial staff. Apps such as Flipboard – not produced by mainstream media – change this. They transform the articles, videos and pictures selected by your online contacts into a glossy online magazine.

Some articles will come from The New York Times, others from The Wall Street Journal or TechCrunch. Algorithms will also check which articles you read and how long you spend on them. The news selection becomes personalized.
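
As an illustration of the kind of logic involved – a deliberately naive sketch of my own, not any publisher’s actual code – personalization can be as simple as ranking articles by how much time a reader has already spent on each topic:

```typescript
// Naive personalization: articles on topics you already spend time on rise to the top.
interface Article {
  title: string;
  topics: string[];
}

// Seconds the reader has spent on each topic so far, as tracked by the app.
type ReadingHistory = Map<string, number>;

function personalizedRanking(articles: Article[], history: ReadingHistory): Article[] {
  const score = (article: Article) =>
    article.topics.reduce((sum, topic) => sum + (history.get(topic) ?? 0), 0);
  return [...articles].sort((a, b) => score(b) - score(a));
}

// The filter-bubble risk is visible in the code itself: topics you never read
// never accumulate time, so they sink further down every day.
const history: ReadingHistory = new Map([["technology", 1200], ["politics", 90]]);
const ranked = personalizedRanking(
  [
    { title: "Debt crisis update", topics: ["politics", "economy"] },
    { title: "New HTML5 apps for tablets", topics: ["technology"] },
  ],
  history,
);
console.log(ranked.map((article) => article.title));
```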

Facebook, for instance, shows you status updates from the people who are most important to you – or at least, that’s what the algorithm tries to detect.

Eli Pariser explains in his book The Filter Bubble how Google yields search results based not just on what you’re looking for, but also on which computer you use, which browser, where you are and dozens of other criteria. This means that your friends, searching for exactly the same topic on Google, will get different results. More generally, it will become very difficult to find anything on the web which is not personalized or customized.

This seems to be an advance compared to traditional mass media, which paternalistically served the same news to everyone because a priesthood of journalists decided what was essential, what was just ‘nice to know’ and what was unnecessary. But, as Pariser explains in his TED talk, the danger is that we lock ourselves inside information bubbles offering an environment of news we ‘like’ or consider interesting, but which is not necessarily the news we should know.

Transparency

So we have human gatekeepers and algorithmic ones. We know even less about those algorithms than we know about the human editors. We can form an idea of the news selection at The New York Times, but many people are not even aware that Google shows them different results depending on supposedly personal criteria, or that Facebook makes a selection of the status updates they see.

The code used by those major corporations to filter what we see is politically important. If we want to keep an internet which confronts us with a diversity of viewpoints and with facts and stories which surprise and enlighten us, we need to be aware of those discussions about algorithms and filters. If we don’t pay attention to that code, we’ll be programmed behind our backs.

Beneath all those human, network and algorithmic filters we find an ever-increasing stream of information. Tweets, status updates, blogposts by experts, witnesses and actors are flooding us, second by second.

I’m sure that in 2026 there will still be something we could call journalism: people with a passion for certain subjects, making selections, verifying and commenting, providing context. The BBC already has a specialized desk analyzing images and texts distributed via social media: to give but one example, they check whether a specific picture could have been taken where and when people claim it was taken. Almost every day there are new curating tools for journalists and bloggers, facilitating the use of social media.
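
To make that verification step concrete, here is an illustrative sketch – my own simplification, not the BBC’s actual tooling – of a plausibility check: comparing a photo’s claimed place and time with metadata gathered from other sources, and flagging it for human review when they diverge.

```typescript
// A photo claim and independently gathered metadata, reduced to place and time.
interface PhotoRecord {
  lat: number;      // latitude in degrees
  lon: number;      // longitude in degrees
  takenAt: Date;    // capture time
}

// Great-circle distance in kilometres (haversine formula).
function distanceKm(a: PhotoRecord, b: PhotoRecord): number {
  const rad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));
}

// Flag the photo for human review if claim and metadata diverge too much.
function needsReview(claim: PhotoRecord, metadata: PhotoRecord): boolean {
  const kmApart = distanceKm(claim, metadata);
  const hoursApart =
    Math.abs(claim.takenAt.getTime() - metadata.takenAt.getTime()) / 3_600_000;
  return kmApart > 5 || hoursApart > 2;
}
```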

This ‘curating’ of the news is an activity with high added value. Whether those curators call themselves ‘journalists’, ‘bloggers’, ‘newspaper editors’, ‘internet editors’ is not important: most important is the quality of the curation and the never-ending discussion about these practices.

Everyone who has the energy and time to look at the raw information streams will be able to see what the curation added, omitted or changed. Not only will we be able to check this, many curation projects will invite us to suggest improvements or to participate directly (e.g. Quora).

Bloggers and journalists who clearly state their position on the issues they cover, while also promising to represent other viewpoints faithfully, will be considered more credible. Those who are open about their curation practices will gain an advantage. As Jeff Jarvis says: “transparency is the new objectivity.”

In May 2026 the editorial news of my newspaper will reach our community in many different ways. I very much doubt that the print newspaper will be as relevant as it is today, and people will smile when they look at screenshots of today’s site. But there will always be news and discussion, and people trying to cover what is essential in the information flood and trying to find reality through the algorithmic code.

In preparing this post I learned a lot from discussions on Twitter, Facebook, LinkedIn, The Well, Quora… In order to be transparent I posted about these preparations. There you’ll find links to the original articles and videos, and to material I ultimately did not use for this post but which could be interesting for other explorations.

Roland Legrand


What will media look like in 15 years’ time?

This blogpost is an experiment. I’ve been writing an opinion editorial about the future of news media, which you can find here (in Dutch; in the next few days I’ll publish it in English), and it will also be available in our print newspaper De Tijd on Saturday, May 28. I talk about the dramatic changes in connectedness (ubiquitous broadband internet access, wearable devices…), but also about the filters which influence the way we look at the news: human editors (journalists, bloggers…) but increasingly also algorithmic ones.

I’m convinced that being transparent about the way we select information and provide context will become even more crucial for our democracies. That’s why I did not want to just write an article, but also to report on the whole research and writing process itself.

In this post you’ll find how I proceeded, but also links and videos related to the article. I could not use all the valuable material people suggested, but by publishing it all here I hope to facilitate further research by others.

This blogpost is in English, because I asked for and got suggestions worldwide. The opinion editorial is in Dutch; I’ll translate it in the next few days.

Tuesday, May 24

Our website www.tijd.be has existed for 15 years now… and I’m working on a blogpost/article answering the question “what will the news media look like in 15 years’ time?”. Any ideas, suggestions or links would be highly appreciated!

I asked the question on Twitter of course, on Facebook (Dutch) and Facebook (English), on LinkedIn and Quora.

I’ll report here on the answers I get and the ideas I come up with myself.

If you want to participate, please feel free to use any language you’re comfortable with.

Wednesday, May 25

I got some interesting comments in this distributed discussion. Most of the answers came from Quora (the Silicon Valley-based questions-and-answers forum) and from a network I did not mention yesterday: The Well (based in San Francisco, an ancient social network in internet time, where people use real identities and which charges a fee to participate).

I’m trying to organize my thoughts in this mindmap; it’s a wikimap, so feel free to change and add things.

Several people suggested this thought-provoking video about the future of media:

Another video features Eli Pariser discussing the ‘filter bubble’ – the danger of only finding information which suits you, rather than what you ‘should’ know. Pariser is the author of The Filter Bubble: What the Internet Is Hiding from You, published by Viking (an imprint of Penguin Books) in 2011.

Thursday, May 26

Getting lots of suggestions now, and my deadline is coming closer. Here’s a quick overview, using Storify. Not mentioned in this list are helpful suggestions on the Facebook group Newslab – Exploring News3.0. On LinkedIn Answers I got a suggestion to explore the ‘attention span’ issue, and this link to a search of scholarly articles on generational differences in attention span since 2008. I did not explore this any further in the context of my article (one has to make choices), but I’m glad to mention it here.


Friday, May 27

My text for publication tomorrow is being edited as I write this. In the print newspaper I’ll put a reference to this blogpost, and I have added an introduction to this post. While there was no discussion on this blog (maybe I did not explain well enough what I was up to), I learned a lot discussing on Twitter, Facebook, The Well, Quora etc. Sometimes it was enough to explain the project in order to get feedback (The Well, Quora, Facebook); sometimes (Twitter) I had to ask specific people in my network. It was fun to see how at one point someone on Facebook mentioned a discussion I had on Twitter – life in a distributed media world is not always easy, but it really is fascinating!

Roland Legrand

 


Entering a fluid state

Fluid. Liquid. Streaming. These are words often used to describe the new reality the more affluent part of humanity lives in. We are always on now, social, webbed, mobile, connected. As Om Malik says in Will We Define or Limit the Future:

Mobile phones of today might have innards of a PC, but they are not really computers. They are able to sense things, they react to touch and sound and location. Mobile phones are not computers, but they are an extension of us.

Stowe Boyd tells us what this means for media:

We are sliding into a liquid state from a former, more solid one. Our devices and software is where we are seeing this first, but it is already transforming the media world. Witness the headlong transition from solid media (media destination sites with their proprietary organization, with inward-focused links, concrete layout, and editorial curation) to liquid media (media content is just URL flotsam in the streaming apps we use, rendered by readering tools we choose and configure, and social curation).

Boyd sees beyond media and into the near future:

What is over the near horizon is a liquid world, in which social nets, ubiquitous connectivity, mobility, and web are all givens, forming the cornerstones of a vastly different world of user experience, participation, and utility. This is the new liquid world, just a few degrees away.

As I read about this and post stuff, services are being announced in rapid succession. On the readering side there are services such as Flipboard, Zite and Pulse. I posted about the curation side of things a few days ago, and new tools are being launched almost every day, so have a look at my Scoop.It to see some of the latest developments – or join my curating team about curation on Pearltrees.

At The Next Web, Robert Scoble gave a presentation about Humans + Reality + Virtual. He talked about experiments combining the physical with the virtual. Now, for me ‘virtual’ means environments such as Second Life or World of Warcraft, but Scoble uses it in a broader sense: ‘the digital’, ‘what you see on your tablet or smartphone.’

He refers to apps such as Photosynth, which allows you to make 3D pictures you can look around in and zoom into. Mealsnap processes pictures you take of your food and tells you what you’re eating and how many calories that represents – again that mixture of human, real and virtual. Foodspotting allows you to share where and what you eat; Cyclemeter tracks your walking, cycling, skiing and running, tells you how you’re doing, shares it on networks, etc. Another application checks what you’re watching on TV and shares that precious information with your friends. This makes it pretty clear what Scoble means by human + real + virtual.

All of which sounds a bit frightening. What about my privacy? Isn’t all this Big Brother? Think about things such as your Klout score, or PeerIndex, or internal measurements by Twitter telling how connected you are, what your online social capital is and how all this could be used as an asset on the job market or even in financial ratings.

Malik says in Will We Define or Limit the Future:

With this revolution, it has become easier to share our moments and other details of our life that have so far been less exposed. The sharing of location data becomes a cause of concern because it is the unknown. The situation is only going to get more complicated — we are after all entering a brave new world of sensor driven mobile experiences, as I wrote in an earlier newsletter. No, this is not science fiction stuff.

I’m getting more and more immersed in this mobile, connected, liquid state. I use an iPhone, and things only got more intense now that I own an iPad. It’s a gradual process of discovering new apps and new social tools that link various aspects of your life and connect you in many different ways with other people.

Even the ‘real virtual’ stuff is going mobile. First I discovered the massively multiplayer online role-playing game (MMORPG) Pocket Legends, and now Gameloft has launched Order & Chaos Online, a kind of mobile version of World of Warcraft. It’s not hard to imagine an augmented reality-enabled mixed-reality environment, combining the real, the human, the digital and what we usually call the virtual. When we get there in an increasingly convincing and integrated way, expect some profound changes in how we live, love, work and connect.

 
