Deconstructing learning through social media: virtual seminar, MOOC and OpenCourseware

I’m about to start a wild experiment in learning by participating in a number of online courses across different social media platforms. I have several objectives:
– to experience what learning could mean in this century and what it tells us about the changes in society and in the economy.
– to gain a deeper understanding of the philosophical underpinnings of new media.
– to become more creative, by better understanding what’s “new” about new media.
– to experiment with ways to combine various social media for online learning processes in the broadest sense of the word “learning”.

I’m not an educator working in a school or university, but a financial blogger/journalist/newspaper community manager. I already use Twitter, blogs, curation tools and chat systems to interact with our community. This, in my opinion, is a form of online learning, and I hope to develop new practices inspired by the courses I’ll participate in this fall.

New Media Studies

The most “philosophical” of these courses is Awakening the Digital Imagination: A Networked Faculty-Staff Development Seminar, coordinated by Professor Gardner Campbell of Virginia Tech.

The course runs almost every week from September 12 through December 2. There is a syllabus, The New Media Reader (MIT Press, 2003) and we’ll work on various platforms such as Twitter, Flickr and… Second Life. The project in Second Life is being facilitated by Liz Dorland, Washington University (Chimera Cosmos in Second Life) and by Robin Heyden, Heyden Ty (Spiral Theas in Second Life) and the infohub and group blog are up and running.

This week we’ll discuss Vannevar Bush’s “As We May Think”. The first week the participants discussed Inventing the Medium by Janet H. Murray, the introduction to The New Media Reader, and watched this video:

The video says “the Machine is Us/ing Us”. While using the web we’re teaching the Machine, which learns from our billions of daily online actions. The Machine is not just connecting data, it’s connecting people. In that sense one could dream of an exponentially increasing worldwide intelligence which eventually becomes self-learning (the Technological Singularity discussion). It reminds me of the optimism of engineers: even though our world and our survival become ever more complicated, they see complexity as a problem that can be tackled. Computers and networks change Thought itself and enable it to take on the big challenges of our time.

But then there are those other thinkers, found mostly in the humanities: for decades now they have been talking about the end of the big Ideologies, the end of the grand metaphysical stories that made sense of it all. Patient deconstruction and analysis expose the fallacies, the inconsistencies and the circular reasoning in those stories. Should we confront the supposedly cynical smile of the humanities scholar with the optimism of the engineer, or rather deconstruct that opposition itself? I’ll find out in the weeks to come.


I also registered for the Massive Open Online Course (MOOC) Change, an international, distributed, rhizome-like learning network/experience. I attended previous, similar editions (the Connectivism courses), often as a lurker, sometimes actively. It’s bewildering and mind-blowing, no, mind-amplifying!

The course is facilitated by Dave Cormier, George Siemens and Stephen Downes.

Dave Cormier offers this video to explain what we’re up to.


In a post about how to participate it is explained that: “…there is no one central curriculum that every person follows. The learning takes place through the interaction with resources and course participants, not through memorizing content. By selecting your own materials, you create your own unique perspective on the subject matter.”

The interactions take place on various social media platforms, using many tools.


The last part of my program for this fall is studying the MIT OpenCourseWare course Introduction to Computer Science and Programming (instructors: Prof. Eric Grimson and Prof. John Guttag). The idea is to learn how computer scientists actually think, and in that sense the course is about much more than just “learning how to program in Python”.
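To give a flavor of what that kind of thinking looks like (this example is my own sketch, not taken from the course materials): many introductory exercises teach you to replace blind guessing with a systematic strategy, such as bisection search.

```python
def sqrt_bisection(x, epsilon=1e-6):
    """Approximate the square root of x >= 1 by bisection search.

    Instead of guessing blindly, we repeatedly halve the interval
    that must contain the answer -- the kind of systematic thinking
    an introductory computer science course tries to instill.
    """
    low, high = 0.0, x
    guess = (low + high) / 2
    while abs(guess * guess - x) >= epsilon:
        if guess * guess < x:
            low = guess   # the answer lies in the upper half
        else:
            high = guess  # the answer lies in the lower half
        guess = (low + high) / 2
    return guess

print(round(sqrt_bisection(25.0), 3))  # close to 5.0
```

The point is not the square root itself, but the habit of reasoning about how a search space shrinks with every step.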

It is interesting to see how the videos, transcripts and reading materials can be consulted for free, while direct interaction with the teaching staff is reserved for those who pay the hefty fee for studying at MIT. This does not mean that using these course materials is devoid of interaction: OpenStudy provides a platform for collaboration with fellow users (the platform could also be used for the Change MOOC).

The issue of how to facilitate collaborative learning while also protecting the business model of universities is solved in another way by Stanford University: it is organizing an Open Class on Artificial Intelligence. Participants will not be able to ask questions directly, but a voting system will select a number of questions to be answered by the instructors.
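The mechanics of such a voting system are simple enough to sketch. This is purely my own illustration – I have no knowledge of how Stanford actually implements it:

```python
from collections import Counter

def select_questions(votes, k=3):
    """Pick the k most-voted questions from a list of (user, question)
    votes, counting each user at most once per question."""
    unique_votes = set(votes)                 # one vote per user per question
    tally = Counter(q for _, q in unique_votes)
    return [q for q, _ in tally.most_common(k)]

votes = [
    ("alice", "What is A* search?"),
    ("bob",   "What is A* search?"),
    ("bob",   "What is A* search?"),          # duplicate vote, ignored
    ("carol", "How does Bayes' rule apply?"),
]
print(select_questions(votes, k=1))  # ['What is A* search?']
```

The interesting design question is exactly the one the post raises: the voting filter lets two instructors scale to tens of thousands of participants.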

That’s one of the fascinating aspects of these courses: we learn to practice and think in new ways, and while trying to do so it becomes obvious to the participants that the activity of learning itself, and the institutions of learning, are being confronted with disruptive change.

Kitely asks for some help to get virtual worlds on the web

I’ve been very busy covering the European debt crisis, but now it’s time for something completely different: the future of virtual worlds. At MetaMeets in Amsterdam, almost two weeks ago now, I interviewed some very inspiring people. I’m working on a story about virtual worlds for a mainstream audience, but I’ll already publish some material on this blog.

Ilan Tochner is the founder of Kitely, a company working on a project that allows users to create their own OpenSim-based virtual world as easily as posting a YouTube video. Just give it a try, it actually works!

Tochner realizes, however, that guests visiting those virtual worlds should also have a very smooth, basically one-click experience when entering those environments. He claims it’s possible to build web-based virtual world viewers – all he needs is a little help from his virtual world friends.

I hope he is right about this – web-based access would help tremendously. People are used to the web; they want frictionless, instantaneous gratification. Every extra click, every hurdle they have to clear, dramatically decreases your number of visitors.

Remember: it’s not about convincing those who are willing to experiment with virtual worlds. It’s about offering an activity which interests a specific audience – an activity which probably has nothing to do with virtual worlds as such; virtual worlds would just be a cool platform. So getting people into your venue should be totally non-geeky and straightforward.

MetaMeets Day 2: going beyond virtual worlds, machinima, avatars…

Beyond the Beyond is the name of Bruce Sterling’s famous blog at Wired. It’s a habit of sci-fi people to think beyond what the mainstream anticipates, and eventually to think about how ‘change’ or ‘beyond’ itself acquires new meanings.

It seems virtual-world people also love to think ‘beyond’: beyond virtual worlds, avatars, machinima. That, at least, is the conviction I came away with after attending the MetaMeets conference about virtual worlds, augmented reality and video/machinima in Amsterdam. I’ll give a very quick overview of the second and last day of the conference to illustrate this.

Heidi Foster is involved in the management of Meeroos, a new breedable pet in Second Life with a large customer base. Meeroos are mythical animals, Foster explained, but they are mostly very cute and they ask to be picked up. To be precise: the project launched on May 21, and there are now 22,000 players and 250,000 Meeroos in Second Life. It’s conceivable that the Meeroos will invade the rest of the Metaverse by spreading to other virtual worlds such as OpenSim. In the discussion it was suggested to expand to mobile devices as well. That would be awesome, I think: develop and launch on Second Life, spread throughout other virtual places, and end up on smartphones and tablets.

Not a potential but an actual move to mobile devices was presented by Timo Mank, an artist-curator at the Archipel Medialab. In 1999 he co-founded Art Hotel Dit Eiland (This Island) in the Dutch village of Hollum on Ameland. The Medialab initiates artist-in-residence programs focused on cross-reality projects. Many artists from PARK 4DTV worked on Ameland creating content for web-based virtual islands. Until recently Timo was curating Playground Ameland in Second Life.

Early this year the Foundation Archipel Ameland shifted its focus from yearly media art interventions to transmedia storytelling for the iPad. The project is called TMSP TV and it connects Twitter with guests at the TMSP studio in the Diabolus Artspace in Second Life. The LiveLab uses the daily goings-on in the World Heritage Wadden Sea and brings this material as a live feed to virtual space, where it’s playfully re-evaluated, mixed and redistilled by guests and performers.

Toni Alatalo is the CTO of a small games company, Playsign, and the current lead architect of the open source realXtend platform. He explained that not every virtual world needs avatars. Imagine a virtual environment that allows you to explore the human body by traveling through the veins, or just think of Google Earth. Technologically speaking, avatars do not need to be part of the core code of the virtual environment; the code could be modular instead. Which could indeed lead us to virtual worlds without avatars, or to avatars in environments which are not perceived as classical virtual worlds (think augmented reality, smartphones).
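Alatalo’s modularity argument can be illustrated with a toy sketch (my own illustration, not realXtend code): the world core manages entities and components but has no built-in notion of an avatar, so an avatar becomes just one optional bundle of components.

```python
class VirtualWorld:
    """A minimal world core: it manages named entities and their
    components, but has no built-in notion of an avatar."""
    def __init__(self):
        self.entities = {}

    def add_entity(self, name, **components):
        self.entities[name] = components

# A Google-Earth-style world: geometry only, no avatars needed.
globe = VirtualWorld()
globe.add_entity("earth", mesh="sphere.obj", texture="satellite.png")

# A social world: the avatar is just another bundle of components.
social = VirtualWorld()
social.add_entity("visitor", mesh="humanoid.obj",
                  controller="keyboard", nametag="Toni")

print("avatar-free world:",
      all("nametag" not in c for c in globe.entities.values()))
```

The same core code serves both worlds; whether avatars exist is a question for the content, not for the engine.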

MetaMeets audience looking at 3D video

Of course there were things which seemed very familiar to seasoned users of Second Life or OpenSim. Melanie Thielker (Avination) talked about roleplaying, commenting on a video depicting the awesomeness of user-generated content. ‘Content’ is an awful word used by publishers when they mean all kinds of stuff such as texts, videos, infographics and images. In this case it refers to impressive builds made by users of the virtual worlds, but Melanie rightfully emphasized that the most important content items are the storylines people create, the characters they build, the backstories they provide, the communities they form. They write their own books in a very experimental, fluid, ever-changing setting.

But even this well-known practice goes somehow ‘beyond’, as it takes place in Melanie’s own virtual world, independent of Second Life. Melanie is an entrepreneur in the Metaverse.

Karen Wheatley is the director of the Jewell Theatre in Second Life. She goes beyond theatres and beyond some existing Second Life subcultures: she runs a theatre in Gor. The Gorean subculture is known for its traditions (based on the novels of John Norman), is fond of a warrior ethos and (mostly) female slaves, and dislikes furries (avatars with animal-like features) and kid avatars. None of which prevents Wheatley from organizing her Shakespearean performances in Gor, open to all avatars. She gets sponsorship, so we could consider her an entrepreneur too.

Draxtor Despres goes beyond in various ways. In his video reportages he combines ‘real’ footage with video shot in virtual environments. He presented his newest big project: Login2Life, a documentary for the German public television channel ZDF, which will come out in mid-July. It goes beyond Second Life as it also shows World of Warcraft.

Stephen M. Zapytowski, Professor of Design and Technology at the School of Theatre and Dance of Kent State University, presented another example of crossing boundaries: April 2011 saw the premiere of his avatar ghost in Kent State’s production of Hamlet. This ghost played “live” on stage with real-life actors in a blend of virtual and real worlds. Which of course made the audience dream of avatars and humans playing nicely together in augmented reality (please stay calm: we’re not there yet).

Speaking of playing together: that’s exactly what the music panel with JooZz & Al Hofmann discussed. They want even more sophisticated means for people from all over the planet to jam together in perfect synchronicity.

Chantal Gerards showed us a few machinima videos, and I sensed a bit of frustration. In one of her creations she used music by the director David Lynch. Unfortunately, he did not even want to watch the video, as ‘he does not like machinima’.

Chantal said: “I have a scoop for you today. I stop making machinima”, adding a bit mysteriously that she will move ‘beyond machinima’. Her advice goes beyond machinima as well: create together, with all kinds of people and platforms, move beyond the platform so that what you create gains wider relevance.

Read also my write-up of the first day: “We are at the beginning”.

MetaMeets: “We are at the beginning”

“We are not at the end of the road but at the beginning.” That was how Tim Gorree, IT Architect, Web Technologies at Nokia, concluded the first day of MetaMeets in Amsterdam.

The conference was opened by another Nokia person, the director of organization development Ian Gee. He told us that the concept of “change” is itself changing. Television shows deal with spectacular transformations of individuals and help define how people look at change these days. He also challenged the audience to think outside the box. To give one example: why don’t we stop working at 40 and come back at 60? He taught me a new word: metanoia, change beyond what can be anticipated and predicted.

Noah Falstein showed us how difficult it is to make predictions about change, commenting on the bewildering variety of life forms in the early stages of evolution, then showing us Habitat, the online role-playing game developed by Lucasfilm Games and made available as a beta test in 1986 by Quantum Link, an online service for the Commodore 64 computer and the corporate progenitor of America Online.

Falstein was among the first ten employees at Lucasfilm Games (now LucasArts Entertainment), The 3DO Company and DreamWorks Interactive. In his latest venture he has become the co-founder of a start-up company, where he is helping create software that enables speedy massively-multiplayer game capabilities across both mobile and web-based platforms. He is a strong believer in presence and synchronous interaction.

As those topics demonstrate, MetaMeets is by no means a Second Life-centered conference. Justin Clark-Casey demonstrated OpenSim and Ilan Tochner showed us Kitely, a venture which enables people to launch OpenSim-based virtual worlds “on demand” very quickly (more about his ideas on ‘virtual worlds as apps’ and easy access for the end user tomorrow).

As usual at these conferences about virtual environments, education is one of the most convincing applications, as demonstrated again today by various specialists. Lars Dijkema and Mathijs Hamers from Elde College presented a project for an ecologically sustainable school, which they visualized in 3D in a virtual environment (FrancoGrid). A major reason for building in a virtual environment? The social interaction and feedback (their institution, Elde College, also encouraged them to use social media in order to get help and feedback from outside).

Social interaction in virtual environments is not always self-evident and can be very different from what teachers and students are used to in traditional settings. Jolanda Verleg from Insperion devises didactic concepts for schools and companies and helps them visualize those concepts in virtual environments. She admits that some people are “dysvirtual” and will “never get it”, but points out that virtual training exists alongside the more traditional approaches.

Ineke Verheul from GameOn/Surfnet/Virtuality illustrated the educational importance of roleplay in virtual settings with the Chatterdale project, a virtual language-learning village where students had to investigate a bomb threat.

One of the impressive aspects of all these presentations is how virtual environments seem to incite people to become entrepreneurs. This was very obviously the case for yet another presenter, Melanie Thielker, who is the founder of Avination and an OpenSim Core Developer with a special interest in roleplay combat systems.

There are exceptions, however. Lee Quick is the developer of Kirstens Viewer, one of the longest-established third-party viewers (user interfaces) for Second Life. His business model? Just a passion for photography and images. Third-party viewers are not really competition for the official viewer, he explained; they just offer different tools for different jobs, and so Kirstens Viewer boasts 3D viewing, night vision, color filters and extra camera viewpoints – which makes it interesting for machinima makers.

But maybe, just maybe, virtual environments – Second Life or OpenSim – are not the endpoint of the technological evolution? What about augmented reality – putting layers of digital information on top of physical reality? Meet Fred van Rijswijk, owner of C2K, a provider of “high end Layar solutions” (Layar is a mobile browser for augmented reality).

The audience went wild, blending the virtual and the physical in an augmented reality. Just imagine (they’re really good at imagining things, those virtual world types) that avatars could “sit” in the conference room, visible through smartphones or other devices… Or maybe the devices should retreat into the background, offering us immediate access to an augmented reality…

Tim Gorree said Microsoft is developing hyper-realistic avatars and, of course, developed the Kinect. Why not use avatars as identity carriers, solving the typical problem of lost passwords?

“Count up all the virtual worlds user hours, gaming user hours, chances are all this is more important than the web”, Tim continued. “Avatars have been used to validate transactions for hundreds of years – think stamps, coins for example. These days there are billions of (virtual) avatars out there, why not use them to change society?”


At last! Moving around in Second Life using Kinect

Remember the guys who used the Kinect to move around and play in World of Warcraft? I was frustrated that Second Life didn’t get the scoop… but here we are now: those same people from the University of Southern California’s Institute for Creative Technologies demonstrate how they use the Kinect to move an avatar and control the camera:

Eternal shame! World of Warcraft mage beats Second Life geeks!

Oh my! Is our Second Life community, and especially the geeky part of it, losing its innovative edge? New World Notes posted a few days ago about how the Kinect is being used to move around and play in World of Warcraft. Watch this video:

Interesting to note: as early as 2008, former Linden Lab chairman Mitch Kapor was involved in the development of 3D web cameras enabling people to control their avatars by moving their bodies. Wagner James Au said in his post that an earlier version of the project “evolved into the Kinect”. Reading around, I learned on ArchVirtual that the co-developer of Hands-Free, Philippe Bossut, still works for… Linden Lab (the company behind Second Life).

All of which makes it even stranger that as yet we don’t have any video footage or stories about the Kinect in a Second Life context.

Is this because Second Life is not a gaming world? In the comments on New World Notes it seems people are interested in building in-world using Kinect hacks, but then again, sophisticated builders are a minority in Second Life.

So I set out to ask people in-world about the Kinect. The AW Groupies is a very active tech chat group, but it seemed they were far more interested in discussing issues around “making meshes portable across platforms” (if you don’t understand a word of this, do not panic, go here).

Demoralized, I went to OpenSim, where I bumped into John “Pathfinder” Lester. Yes, Pathfinder also thinks the Kinect could be a game-changer, but no, he had no information about people in OpenSim or Second Life actually trying it out. “Give it some time”, he said.

Now, in case you have any doubts about the importance of what’s happening around the Kinect, Robert Scoble shared this video on Quora (notice the interesting remark about deviceless augmented reality!):

The future is here! And it’s not even that unevenly distributed!

In the previous post I briefly mentioned the Kinect as possibly being part of the further evolution of virtual worlds. I was very interested to find a presentation by former Linden Lab employee Kyle Machulis about the OpenKinect community. Which is kind of neat, because that community demonstrates that one can do some very futuristic stuff without huge research budgets.

I was aware of some open source developments around the Kinect camera, but quickly lost track of what was going on. This video does a great job of giving an overview of the projects from day zero onwards. Kyle Machulis is an engineer working on projects ranging from haptics to driver reverse engineering to audio research to teledildonics (or haptics, as it’s called in a slightly less suggestive way), so you’ll get tons of inspiration whether you’re interested in industrial and research applications, adult entertainment or new media projects.

Read Nerd Nite SF, which also has a handy list of essential links.

Nerd Nite SF: “OpenKinect: One Month In” – Kyle Machulis, 12/15/10 from nerdniteSF on Vimeo.

Existential questions about virtual worlds

It has been interesting to be away from virtual worlds stuff for a few weeks. I had some catching up to do, but at first sight it seems not much has changed.

There is yet another famous Linden Lab employee leaving the company: Jack Linden. Other virtual worlds, such as OpenSim, are presenting themselves as alternatives to Second Life. Blue Mars continues to promote its nice graphics and Twinity its mirror worlds.

But looking at all this from a distance, it seems the momentum of those projects and companies has been lost, at least for now. Outside the feverishly working communities of those virtual places, nobody seems to care. What’s hot right now is Zynga, Facebook, Twitter, the iPad and the epic struggle between Android and Apple, and of course there is Microsoft’s Kinect. People constantly wonder whether Second Life is still around, and as far as Blue Mars, OpenSim or Twinity are concerned, even the most accomplished geeks won’t know what you’re talking about unless they happen to be members of those tiny niche communities.

I was not surprised at all reading Botgirl’s post about Pew research which points in the same direction:

According to the latest Pew Generations Report, virtual worlds have less participants than any other online niche surveyed and are experiencing no growth. It’s pretty pathetic. Virtual worlds were not just trounced by social networks and multimedia viewing, but even by religious information sites and online auctions. After seven years in the public eye, it’s clear that neither incremental technology improvements nor new ad campaigns are going to dramatically increase the virtual world market in the foreseeable future.

I couldn’t agree more with Botgirl’s solution:

After reading the report, I’m more convinced than ever that browser-based access to virtual worlds in conjunction with social network integration is the most credible light at the end of the tunnel. The way to move virtual worlds from their current isolated backwater into the integrated mainstream is by making them as seamlessly accessible and usable as every other category in the Pew Report. This will also require mobile-compatible clients, since mobile internet use will surpass computer-based use within the next few years.

Wagner James Au at the New World Notes has been suggesting this Facebook Connect option for quite some time now, but in his post discussing Botgirl’s article he says he’s “starting to think there’s an even better way to make 3D virtual worlds more mass market: Integration with Kinect and Xbox Live.”

So I went to watch the latest Metanomics video for inspiration in times of crisis in virtual worlds. As usual there were distinguished guests such as Larry Johnson, Chief Executive Officer of the New Media Consortium, Brian Kaihoi of the Mayo Clinic and Terry Beaubois, Professor of the College of Architecture and Director of the Creative Research Lab (CRLab) at Montana State University, being interviewed by Robert Bloomfield, Professor of Accounting at the Cornell University Johnson Graduate School of Management.

All these people invested lots of time and money in Second Life projects; at one point they really believed this was an important part of the future (I do not exclude that some of them still believe it). They are still very active, but they admit times are different now. The financial crisis made institutions look hard at cutting costs, but there is more to it than that. The Gartner Hype Cycle was mentioned, and I confess, being a slightly cynical journalist, that to me it sounds like: “yes, we completely lost traction, but hey, fancy consultants tell us it’s nothing to worry about: after the Trough of Disillusionment will come the Slope of Enlightenment and we’ll reach the Plateau of Productivity.” Yeah, right. Maybe. Or maybe not.

So is this some convoluted way of saying that I lost all hope virtual worlds will have a bright future after all? Well, it’s convoluted because it’s a complicated matter. As the folks at the Metanomics show said, there is the technology (and the business), but there’s also the community. It remains true that the Second Life community is awesome: highly creative, inspiring people, not just using new technologies but actually living technology.

It’s also true that there’s a lot to be learned in “social technology”: using text backchat during live shows, ways to produce chat shows, integrating live events with video, chat, social streams, etc. I actually apply things I learned in Second Life in the context of my newsroom, facilitating a virtual community, organizing chat sessions and so on. But I don’t use Second Life itself, because of the many practical hurdles and a cost/benefit ratio I cannot justify.

As some of the panel guests said, Second Life and similar environments are a “third place” where you “go” to actually meet other people. But then again, a CoverItLive chat box is also such a third place where people meet. I do know the arguments for why virtual environments are more intense: representation by a virtual body means that people actually apply real-world principles while meeting each other (maintaining a certain “physical” distance, for instance), implying that what goes on is somewhere between “just chat” and “actually meeting”. But maybe people just want to attend an online chat event without any hassle, and they’ll use forums to connect with others…

All of which means, for my practice, that I’d love to have a lite version of Second Life or a similar world: very scalable, browser-based and, yes, usable on mobile devices. Further down the road we’ll also see augmented reality applications combining the physical and virtual worlds, and maybe the Kinect can spark a revolution in interfaces – all of which means that Second Life as we know it will have been a useful stepping stone on the road to somewhere very different.

Making sense of our streams, in real time

How do we make sense of the streams of information on social networks? It’s easy to get overwhelmed, and difficult to tell a good story about what happens on Twitter, Facebook, LinkedIn etc. I’m a strong believer in virtual worlds as islands in those streams, where we can gather and make time for thoughtful discussions. But even then we could make good use of tools to tell stories about what happens ‘out there’.

Josh Stearns on Groundswell has a great post about The New Curators: Weaving Stories from the Social Web.

Josh discusses Slices of Boulder, which seeks to aggregate and curate local information streams. The project is a collaboration between the Digital Media Test Kitchen at the University of Colorado and Eqentia.

Another tool, or rather a platform, is SwiftRiver, built by the folks behind Ushahidi. It was designed with crisis situations in mind. The developers describe it as a “platform that helps people make sense of a lot of information in a short amount of time.” On their blog they clarify the concept as an open source Yahoo Pipes for any SMS, Twitter, email or JSON/ATOM/XML/RSS feed, soon with video and audio as well.
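The “pipes for feeds” idea boils down to merging heterogeneous streams into one chronological, filterable stream. Here is a minimal sketch of that core operation (my own illustration, not SwiftRiver code):

```python
from datetime import datetime

def merge_streams(*streams, keyword=None):
    """Merge items from several feeds into one stream, newest first,
    optionally keeping only items that mention a keyword."""
    merged = [item for stream in streams for item in stream]
    if keyword:
        merged = [i for i in merged if keyword.lower() in i["text"].lower()]
    return sorted(merged, key=lambda i: i["time"], reverse=True)

# Two toy feeds: a Twitter search and incoming SMS reports.
tweets = [{"time": datetime(2010, 11, 2, 9), "text": "Flooding near the bridge"}]
sms    = [{"time": datetime(2010, 11, 2, 8), "text": "Road closed, flooding"},
          {"time": datetime(2010, 11, 2, 10), "text": "All clear downtown"}]

for item in merge_streams(tweets, sms, keyword="flooding"):
    print(item["time"], item["text"])
```

Everything beyond this – scoring sources for trustworthiness, deduplication, real-time delivery – is where the real engineering of such platforms lies.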

Storify, finally, is, as Josh explains, “based on two panes: 1) Navigating various content feeds (i.e. a Twitter search, a Facebook stream, as well as content from YouTube, Flickr and more) and 2) A blank stream where you can drag and drop elements from those streams to build your story.” It is simple, but compelling.

Storify demo from Burt Herman on Vimeo.

To be honest, Storify is the only tool mentioned here that I’ve actually experimented with (I’ll try out the others as well). Have a look at my post about Second Life in a browser and at two stories on my financial blog (the blog posts are in Dutch, the tweets in English): one post about last Friday’s US GDP report and another about Nouriel Roubini being pessimistic about growth prospects (a social media discussion in which he does not hesitate to call a participant “an idiot”).

Storify (find an invite code at TechCrunch) actually helps you discover stories. It makes it easy to combine social media streams, and in doing so you stumble upon unexpected material (such as Roubini’s angry outbursts) and can make that discovery process visible.

On Zombie Journalism, Mandy Jenkins explains ten ways journalists (and bloggers, of course) can use Storify: gathering reactions to breaking news, combining past content with newer information and social streams, showing your own searches on Twitter, Facebook etc., or organizing your own live tweets from a conference.

Robin Good is following the fast-expanding universe of real-time curation tools on his blog. He has also prepared a mindmap about all this:

Tools which help us to live in the information streams

We’re living in streams or flows of information: think status updates, tweets, texting, RSS feeds… It’s an era of niche markets, of networks rather than destinations, and what we need are tools that allow people to more easily contextualize relevant content. That is what Danah Boyd eloquently explains on Educause Review. Danah is a social scientist at Microsoft Research and a research associate at Harvard’s Berkman Center for Internet and Society.

I liked her Educause article Streams of Content, Limited Attention: The Flow of Information through Social Media because of the ‘streams’ and ‘flow’ metaphors which in my opinion are very appropriate to describe today’s social media experience.

She deals with the issues of democratization, stimulation, homophily and power in a lucid way, not only talking about how awesome social media are but explaining the awkward and even threatening issues as well.

I’m especially interested in how we can create tools to provide context and meaning. Danah says:

We need technological innovations. For example, we need tools that allow people to more easily contextualize relevant content regardless of where they are and what they are doing, and we need tools that allow people to slice and dice content so as to not reach information overload. This is not simply about aggregating or curating content to create personalized destination sites. Frankly, I don’t think this will work. Instead, the tools that consumers need are those that allow them to get in flow, that allow them to live inside information structures wherever they are and whatever they’re doing. They need tools that allow them to easily grab what they want and to stay peripherally aware without feeling overwhelmed.

This is rather abstract, which is good, because one needs a bit of higher-level reasoning to see the structural issues at stake. However, I wonder what kind of tools Danah would suggest here. Google’s Living Stories are one way to provide flexible context for breaking news, but I guess we need more innovation in order to help contextualize things wherever people are and whatever they are doing.

The other major topic is that of the business models new media will use. Danah offers some high-level ideas, but leaves it to us to propose concrete solutions:

Figuring out how to monetize sociality is a problem, and it’s not one that’s new to the Internet. Think about how we monetize sociality in physical spaces. The most common model involves second-order consumption of calories. Venues provide a space for social interaction to occur, and we are expected to consume to pay rent. Restaurants, bars, cafes—they all survive on this model. But we have yet to find the digital equivalent of alcohol.

I think virtual environments and augmented reality are interesting cases in this context. Virtual worlds are in a sense islands in the information streams, inciting people to pay attention for longer, to immerse themselves. But at the same time those worlds are internally characterized by streams: the flows of group text chats and individual chats, for instance.

Augmented reality can put layers of context on top of physical reality – layers which can consist of more or less static information such as Wikipedia entries, or of streams like nearby tweets. Of course, augmented reality, virtual worlds and physical reality can be combined in all sorts of interesting ways.
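A “nearby tweets” layer, for instance, comes down to filtering a geotagged stream by distance from the viewer. A rough sketch of that filter (my own illustration, using the haversine distance formula):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points on Earth."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def nearby(items, lat, lon, radius_km=1.0):
    """Keep only items geotagged within radius_km of the viewer."""
    return [i for i in items
            if haversine_km(lat, lon, i["lat"], i["lon"]) <= radius_km]

tweets = [{"text": "Great coffee here", "lat": 52.3730, "lon": 4.8924},
          {"text": "Stuck in traffic",  "lat": 52.0907, "lon": 5.1214}]
# Viewer standing on Dam Square, Amsterdam:
print([t["text"] for t in nearby(tweets, 52.3731, 4.8926)])
```

The hard part, as the rest of this post argues, is not this filter but presenting the results in a way that feels fast, fun and non-clunky.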

Or can they? As Danah remarks, social media tools are often clunky. Mastering them takes learning curves and a geeky attitude. It’s not very enjoyable to stare through your smartphone camera in order to see often clumsy little texts or virtual objects. Often the tools are the creations of computer scientists and engineers who’ve forgotten how ignorant, clumsy and resistant to change most people are, and it seems they’re not interested in providing tools that are fast, fun and easy to use. The Living Stories are a nice example: a fascinating Google project, which was stopped and is now available as an open source project for others to develop – but it’s not beautiful; it does not seduce the common social media consumer (the same story applies to Google Wave – made by software engineers for software engineers). Compare this to Apple (and let the engineers and true geeks howl): it’s slick, it’s beautiful, and all of a sudden the ubiquitous internet goes mainstream.

I’m convinced augmented reality and virtual environments will be important in helping us live in the streams – but we’ll need tools and objects which make us feel happy and which seduce us: fast, fun, easy and beautiful tools.