Imagine 3D-sensors…

… in your phone, and what you could do with them as a developer… Imagine the games, the education projects, the consumer and business applications… These are exciting times, as Google says about its Project Tango. Google has built a prototype Android smartphone that can learn and map the world around it – what would you do with it?


Seth Rosenblatt on CNET has pretty interesting background information. Movidius’ Remi El-Ouazzane explains in an interview how his chip firm is more than just another partner in Google’s mobile 3D-mapping project — it’s at the center of a revolution in how computers process visuals. The chips can be used far beyond smartphones and tablets: think wearables, robots, autonomous cars, drones…

Google itself mentions various possible applications: interior design, helping the visually impaired, but also immersive gaming – mixed reality style.

The Singularity was in Budapest, Hungary

I attended the Singularity University Summit in Budapest, Hungary: two days of total immersion in discussions about exponential growth, artificial intelligence, robotics, 3D printing, bio-hacking, medical breakthroughs and organisational change. I tried to bring some elements together in a wiki mindmap, and people have added links and other material:



I also talked to CEO Rob Nail of the Singularity University:



How could the hands-on experience of the SU and its intense social experience be replicated online? Well, maybe by using a virtual environment? It seems the SU is exploring that possibility, but it’s too soon to tell. Which reminds me that Philip Rosedale, the founder of Second Life, is one of those who inspire the SU people (just to be clear: he was not at the Summit). He’s working on a high-fidelity virtual world in which avatars would reflect the expressions of their real-life typists:



Wouldn’t it be nice to replicate the Singularity University campus experience in a high fidelity virtual environment? It would be a welcome variation on Massive Open Online Course environments of Coursera, Udacity or edX…

More videos and links can be found in my coverage for my newspaper De Tijd (in Dutch).

Social media are (also) learning networks

Social media can be learning networks. Self-evident? Maybe so, but these last few months I gave a few presentations for young, somewhat less young and more senior people – all of them well-educated – and they seemed to be surprised about stuff such as Massive Open Online Courses (MOOCs), the fact that we can consider Wikipedia, Linux or Arduino as learning networks, the Maker Movement and related topics.
Mentioning Facebook often results in discussions about privacy and the NSA (older folks), about looking for alternatives such as Twitter (younger people), but Facebook as part of a personal learning environment is new for many people ‘out there’.

Of course, the only solution is to talk even more about it. Especially because the ‘digital world’ is merging rapidly with what we used to consider as a purely ‘physical’ world – sensors, social media, data, mobile internet, location aware devices, it all permeates that so-called ‘physical world’, turning it effectively into a mixed reality.

Once people start to realize the opportunities and dangers, they start asking ‘how do I start learning about this?’ on a rather practical level. I’ll limit myself to three books:

- Net Smart by Howard Rheingold in order to learn to use social media intelligently, mindfully and humanely.
- Peeragogy.org, a handbook for all those who want to engage in peer-to-peer learning (a collective work in which I participated).
- The Age of Context by Robert Scoble and Shel Israel about mobile, social media, data, sensors and location services.

In case you wonder what I talked about during the presentation:


The Age of Context shows us the storm ahead for news media

What happens if we apply the lessons of the book The Age of Context (Robert Scoble and Shel Israel) to news media? Well, I tried it today for a group of communication experts invited by the Belgian company Outsource, and it sparked an intense debate.

The Age of Context analyzes five forces which are developing rapidly and interacting with each other: mobile, social, data, sensors and location-based technology. What could it mean for news media?

1) Mobile: we’re still finding out how to use tablets and smartphones to the fullest extent. More often than not, newspapers transfer their print content to the mobile device, making it swipable and adding some videos and links. I think tablets offer new ways of telling stories. Remember movies: they are not just recordings of theater plays; using techniques such as the cut, filmmakers delivered a new media form. We’re still in the early phase of discovering the ‘cut’ that unlocks the unique possibilities of the tablet.

While we’re doing that, a new kind of mobile device is about to be launched: wearables such as Google Glass, making it even harder to stay in the print newspaper paradigm.

2) Social media: this means new curation practices for journalists, but also new distribution challenges. Flipboard and Zite, for instance, convert social streams into customized news magazines. People re-assemble the content of very different providers through the filter of their social graph and preferences.

3) Data: do news media use the data on social media and on their own digital platforms to get to know the needs and intentions of their communities? They try to do so, but much more could be done.

4) Sensors. If sensors make devices aware of what their owner is doing (traveling, running, relaxing…) one could imagine that news will be selected and transmitted in a way which suits the user.

5) Location. There is no reason to assume that a user of a Belgian newspaper who happens to be in New York City needs the same information as someone who is in Brussels.

I added some ideas about communities, which can in part mitigate the conclusions mentioned above. If a newsroom can determine efficiently what really matters for a certain community, it will be better able to produce a common news selection that is relevant for the users as members of that community. The news provides a common background for the social interactions in that community. Real-life meetings, forums and chat sessions help the newsroom to open up and to gain deeper insight into the needs of the community.

Of course we also discussed privacy. The Age of Context is optimistic: respect for privacy concerns will be a competitive advantage for makers of devices or service providers. Not everyone is that convinced – maybe the new generation cares less about privacy.

There was quite some discussion about ‘who determines what the individual wants’. I have the feeling that it’s neither the newsroom nor the individual. It will be an algorithm, which makes a selection for the individual on the basis of revealed preferences, the social graph, sensor and location data, and expressed preferences (explicit likes and dislikes).
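To make that idea concrete, here is a deliberately simplified sketch in Python of what such a selection algorithm might do. Everything here is hypothetical and invented for illustration – the topic weights, the data structures and the 0.5 social-graph bonus are assumptions, not a description of any real product:

```python
# Hypothetical sketch: rank news items for one user by combining
# revealed preferences (inferred from reading history), explicit
# likes/dislikes, and a bonus for items shared by friends.
# All weights and data below are invented for illustration.

def score(article, user):
    s = 0.0
    for topic in article["topics"]:
        s += user["revealed"].get(topic, 0.0)   # inferred from reading history
        s += user["explicit"].get(topic, 0.0)   # stated likes (+) / dislikes (-)
    # Small bonus per friend who shared the article (social graph signal).
    s += 0.5 * len(set(article["shared_by"]) & user["friends"])
    return s

def select_news(articles, user, n=3):
    """Return the n highest-scoring articles for this user."""
    return sorted(articles, key=lambda a: score(a, user), reverse=True)[:n]

user = {
    "revealed": {"tech": 2.0, "sports": 0.2},
    "explicit": {"politics": -1.0, "tech": 1.0},
    "friends": {"an", "bea"},
}
articles = [
    {"title": "Chip revolution", "topics": ["tech"], "shared_by": ["an", "cis"]},
    {"title": "Election news", "topics": ["politics"], "shared_by": []},
    {"title": "Match report", "topics": ["sports"], "shared_by": ["bea"]},
]
print([a["title"] for a in select_news(articles, user, n=2)])
# → ['Chip revolution', 'Match report']
```

The point of the sketch is simply that the user never writes this ranking herself: the algorithm combines signals she revealed, signals she stated, and signals from her network.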

The changes ahead are tremendous (we only discussed news production and distribution, but then there’s also the impact on advertising which adds another layer of complications) and very hard to predict. Exactly the kind of situation journalists like…


Are our attention spans becoming longer again?

There has been an eerie silence on this blog for the past few weeks. I was immersed in various learning projects. I had to focus for longer stretches, and this made me switch my attention away from social media streams, unless I could focus on certain topics via Twitter lists, for instance.

- So what is the learning about? I’m still absorbing what I learned at the various courses facilitated by Howard Rheingold (there’s a new one coming up about Mind Amplifiers). I also attended a real-life class featuring Howard in the Netherlands (more about this in a later post, but that’s where I took the picture), where he discussed the major findings of his book Net Smart (which can be considered a long and deep study of attention practices). In this part of the learning it’s all about forums, blogs, wikis, mindmaps, social bookmarks, synchronous audio, video, chat and Twitter.

- The other part of my learning is about tools for digital storytelling and data journalism. I made a good start on Codecademy, but somehow I need the intervention of real tutors to continue the learning process. So I decided to take courses at the O’Reilly School of Technology. They even deliver certificates for professional development. I do realize it’s not the certificates that matter most, but they add an interesting gamification element. The ‘school’ offers a nice interactive coding environment, and tutors evaluate the homework and give feedback.

Crucial technologies I want to master: the components of HTML5 (HTML, CSS, JavaScript), jQuery, and for stuff such as web scraping I need a language such as Python.
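To give an idea of what web scraping with Python looks like in its simplest form, here is a minimal sketch using only the standard library. The HTML snippet and the HeadlineParser class are invented for illustration; a real script would fetch a live page with urllib.request (and should respect the site’s terms of use):

```python
# Minimal scraping sketch with Python's standard library:
# parse an HTML page and collect the headlines found in <h2> tags.
from html.parser import HTMLParser

class HeadlineParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_h2 = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_h2 = False

    def handle_data(self, data):
        # Collect text only while we are inside an <h2> element.
        if self.in_h2 and data.strip():
            self.headlines.append(data.strip())

# In a real script this HTML would come from urllib.request.urlopen(url).
html = "<html><body><h2>First headline</h2><p>text</p><h2>Second headline</h2></body></html>"
parser = HeadlineParser()
parser.feed(html)
print(parser.headlines)
# → ['First headline', 'Second headline']
```

For anything beyond toy pages, dedicated libraries make this much easier, but the sketch shows the core idea: a program walks through the page’s structure and keeps only the pieces you ask for.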

Data Journalism is something we’re learning at our media company, and our teacher is Peter Verweij (who was so kind as to include the very basics of using spreadsheets in his program).

- Finally there is a big experiment of helping a newsroom to adapt to the age of never-ending social media streams, community interaction and digital storytelling.

Frankly, all this is pretty exhausting – but at least it forces me to focus for longer periods of time on the same subjects. In this sense it’s immersive – when one is trying to meet some Python course objective, time passes very fast – it’s like playing in some 3D environment.

Is something changing?

These last few years I got the impression we were evolving from longer, immersive experiences to sequences of fast dipping in and out of media streams (status updates, tweets etc). In that context I was not surprised that an immersive environment such as Second Life was stagnating. It quite simply takes too much time, and our attention spans were getting too short for it.

But think again. Maybe we once again want something more. People start complaining about the ‘Facebook experience’. They start reading books such as Net Smart or meditating on mindfulness. But there’s also something going on on the technology side of things.

Philip Rosedale (archive picture), the founding father of Second Life, has a new company, High Fidelity, to create a new kind of virtual reality platform. True Ventures invested in the company. It’s about a new virtual world enabling rich avatar interactions driven by sensor-equipped hardware, simulated and served by devices (phones, tablets and laptops/desktops) contributed by end-users. Virtual worlds watcher Wagner James Au on New World Notes says that Rosedale is not alone; others are working hard to create new virtual reality platforms: “Overall, this feels like a real trend, made possible by continued leaps in computer power, especially related to 3D graphics, and their continued drop in price.”

But maybe this new trend is also driven by the need of balancing the short attention bursts by longer periods of mindful attention…

Read also: 

True Ventures about the investment in High Fidelity.

Open source communities meet… in real life!

Not sure I’ll understand very much of the seminars about “The Anykernel and Rump Kernels” or “Porting Fedora to 64-bit ARM systems”, but then again they’ll also talk about “Open Science, Open Software, and Reproducible Code” and “the legislation in the European Union affecting free software”.

Anyway, FOSDEM gathers more than 5,000 hackers in Brussels, Belgium, on February 2 and 3:

FOSDEM is a free event that offers open source communities a place to meet, share ideas and collaborate.
It is renowned for being highly developer-oriented and brings together 5000+ geeks from all over the world.

How one second every day can change a life

Sometimes an idea is very simple, yet very deep. Like this idea of recording one second of every day of your life. That’s what Cesar Kuriyama did when he stopped working. He learned a lot by doing so. The BBC interviewed him and he was invited for a TED presentation.



This is not an automatic capturing of some shots during your everyday life. The procedure Kuriyama proposes involves decision-making. Which images of the day do you want to keep, so that you’ll watch them one year from now, or ten years, or when you’re very old and about to pass away? To me it seems like an exercise in mindfulness. One might also wonder whether this recording ritual influences what one actually does during a particular day. Kuriyama himself says the whole procedure stimulates him to do at least one interesting thing each day.

Kuriyama also runs a blog, One Second Every Day, on which he published some of his favorite reactions to his TED talk. Someone had a very interesting vision: suppose many people started recording one second every day, and something very important on a worldwide scale happened. One could then look at that day through the eyes of ‘the human race’, just by telling the computer to make a selection from the accumulated recordings for that day. Or, less dramatically, suppose you could filter those recordings and tell the computer to show you a particular place, not through the eyes of institutional reporters and communicators, but again through the eyes of random people recording their seconds of existence at that place.

Anyway, Kuriyama launched a Kickstarter project to develop an app enabling us to record our life seconds. This app will be available in the next few days on iOS and later also on Android.

And this is the video of one year of Cesar Kuriyama:

1 Second Everyday – Age 30 from Cesar Kuriyama on Vimeo.

Update: the app is now available for iOS.

A virtual worlds community going beyond virtual worlds?

Fleep Tuque, a major virtual worlds community expert, said on her Google+ page that AvaCon, the organizer of the Second Life Community Convention (SLCC), plans to include the open-source version of Second Life, OpenSim, and other platforms in the upcoming gatherings (which will get another name). On the AvaCon website it seems they’re looking for volunteers.

In a famous blog post Tuque previously explained that people who care about the future of the Metaverse need to move beyond Second Life. There was no 2012 edition of the SLCC, as there was disagreement between Linden Lab, the company behind Second Life, and AvaCon.

All of which is very interesting as the community conventions were highly creative gatherings, with keynotes from visionaries such as Philip Rosedale and Ray Kurzweil. Most of all, these conventions inspired people who are actually building new layers on top of our reality and who are part of a digital culture avant-garde.

This is how AvaCon defines its mission:

Our mission is to promote the growth, enhancement, and development of the metaverse, virtual worlds, augmented reality, and 3D immersive and virtual spaces. We hold conventions and meetings to promote educational and scientific inquiry into these spaces, and to support organized fan activities, including performances, lectures, art, music, machinima, and much more. Our primary goal is to connect and support the diverse communities and practitioners involved in co-creating and using virtual worlds, and to educate the public and our constituents about the emerging ecosystem of technologies broadly known as the metaverse.

But what is the Metaverse exactly? This is what Wikipedia says:

The Metaverse is our collective online shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the internet. The word metaverse is a portmanteau of the prefix “meta” (meaning “beyond”) and “universe” and is typically used to describe the concept of a future iteration of the internet, made up of persistent, shared, 3D virtual spaces linked into a perceived virtual universe.

So we’re talking here about the sum of all virtual worlds, but also about augmented reality and even ‘the internet’, which seems to be quite a broad definition. Maybe that’s normal, as the mobile revolution, ubiquitous computing and the internet of things are integrating ‘the internet’ with ‘physical space’.

I do hope AvaCon will embrace this broad definition. (Some) people in virtual worlds not only want to export their creations into other virtual places, they also want to turn bits into atoms through 3D-printing (read about Second Life artisan Maxi Gossamer in the New World Notes).

It also makes sense to go beyond virtual worlds as we know them (which does not mean abandoning them), and not just beyond Second Life. In essence these virtual worlds create the illusion of 3D on a flat screen. But consider what Greg Tran, Thesis Prize Winner at the Harvard Graduate School of Design in 2011, has to say:

Greg believes that “People assume we have digital 3D already, but this is a fallacy. When you rotate your model on a screen or watch a Pixar animation, it is actually just a digital 2D REPRESENTATION of material 3D. What people are calling 3DTV and 3D movies are just a form of shallow depth or bas-relief, not true digital 3D. The critical/operative imperative of digital 3D is that there is a subject moving through space. Digital 3D is in its beginning stages, but will evolve in a similar way to digital 2D. The digital 2D began as a specialized, singular medium which was largely used for documentation purposes, but has evolved towards personalization, interactivity, fluency and distribution.”

Or what about telepresence through iPads mounted on light structures? Or about avatars combined with robotics?

One of the lessons of the latest MetaMeets conference was that it’s very worthwhile to gather people who are interested in augmented reality, mobile applications, Kinect and Kinect-style sensors, and virtual worlds (plural). I hope AvaCon will succeed in doing this on an even bigger scale and that they will embed their virtual worlds focus into a larger vision.

Read also: The Metaverse is Dead (and the discussion following the post).

Hat tip to Daniel Voyager for posting about Fleep Tuque on Google+ (did I mention I’m kind of addicted to Google+?)