“The ideal technology is invisible and ambient”

I did not make it to the LeWeb conference in Paris, France, but I’m looking at the abundant streams of tweets and blog posts about the event. I won’t talk in this post about the Instagram-Twitter drama – it has been covered widely in so many other places. My personal interest is in the more abstract, almost philosophical side of things, so I was very happy to discover this video of the presentation by Amber Case, a cyborg anthropologist and entrepreneur.

She talks about big data, the internet of things, the quantified self and geo-location. The ideal technology, she explains, is ambient and invisible. So forget those dramatic images of machine-human cyborgs. Future cyborgs won’t appear to carry any gear at all – but of course they will; it will simply be ambient, interacting with their environment and enabling them to add more meaning to their lives rather than making them more superficial and empty.

[iframe][/iframe]

Talking about the quantified self, the Canadian artist, scientist and intellectual Ariel Garten showed an EEG (electroencephalography) headband designed to help you find inner peace! But not only that: it is very conceivable that we’ll be able to improve gaming or avatar behavior by monitoring and steering our brainwaves. Take that, Kinect! Not only will we interact using gestures, but also by being mindful of our brainwaves!

[iframe][/iframe]
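To give a feel for how brainwave input could steer a game or an avatar, here is a toy sketch (my own illustration, not Garten's actual product or algorithm): smooth a stream of per-sample "relaxation" readings with a moving average and map it to a command once it crosses a threshold.

```python
from collections import deque

def calm_commands(readings, window=4, threshold=0.6):
    """Map noisy relaxation readings (0.0-1.0) to game commands.

    A moving average over the last `window` samples suppresses jitter;
    the avatar only switches state when the smoothed value crosses
    the threshold. All numbers are illustrative.
    """
    buf = deque(maxlen=window)
    out = []
    for r in readings:
        buf.append(r)
        avg = sum(buf) / len(buf)
        out.append("calm" if avg >= threshold else "active")
    return out

# A burst of high readings only registers once the average catches up.
print(calm_commands([0.2, 0.4, 0.9, 0.9, 0.9, 0.9]))
# ['active', 'active', 'active', 'calm', 'calm', 'calm']
```

Real EEG pipelines work on frequency-band power rather than a ready-made relaxation score, but the smoothing-and-threshold step is the same idea.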

Changing the world while exiting the trough of disillusionment

I’m recovering from the second MetaMeets day, but here is my report about the second part of this two-day conference in the beautiful ’s-Hertogenbosch (the Netherlands).

This day was hands-on: we had a workshop during which we learned to use Sculptris to make a model and MeshLab to clean it up, and then had it 3D-printed at the FabLab. My own creation was less than stellar (I didn’t even have a computer mouse, so of course my equipment was to blame, not me!), but anyway, it was great fun. Chris Kautz facilitated the workshop; he has a great website packed with tutorials and resources, art-werx.com, and a YouTube series as crocodileEddie.
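One thing the cleanup step has to guarantee before printing is that the mesh is watertight: in a closed surface, every edge is shared by exactly two triangles. A minimal sketch of that check (my own illustration, not MeshLab's implementation):

```python
from collections import Counter

def is_watertight(triangles):
    """Check that every edge appears in exactly two triangles.

    `triangles` is a list of vertex-index triples. Edges are stored
    with sorted endpoints so (a, b) and (b, a) count as the same edge.
    """
    edges = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1
    return all(n == 2 for n in edges.values())

# A closed tetrahedron (4 vertices, 4 faces) is watertight;
# remove one face and the printer would reject it.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tetra))      # True
print(is_watertight(tetra[:3]))  # False
```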

Much of the conference was about escaping from the virtual or digital world into the real world via augmented reality or 3D-printing, but we also discussed how to get the physical into the virtual, using Microsoft’s motion sensing input device Kinect.

The MetaMeets organizer Jolanda Mastenbroek was thrilled to try out the Kinect – by slowly moving her body, she brought avatars in Second Life to life: they moved in sync with her movements in the physical world. This could also work for the open-source version of Second Life, OpenSim.

[iframe][/iframe]

For the techies, please consult this page about Kinect and Second Life. It’s an ongoing project, but imagine the possibilities for machinima, gaming and, inevitably, adult entertainment (always an indication of whether a technology will succeed).
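To give a feel for what such a Kinect-to-avatar bridge does: the sensor delivers 3D positions for skeleton joints, and the bridge converts those into pose data it streams into the virtual world. A minimal sketch (hypothetical, not the actual project's code) computing a joint angle from three tracked positions:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, given 3D positions of three
    connected joints (e.g. shoulder, elbow, wrist). An avatar bridge
    would stream values like this into the world as bone rotations.
    """
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# A straight arm reads 180 degrees, a right-angle bend 90.
print(round(joint_angle((0, 0, 0), (1, 0, 0), (2, 0, 0))))  # 180
print(round(joint_angle((0, 0, 0), (1, 0, 0), (1, 1, 0))))  # 90
```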

In my presentation I asked about business models. Can people earn a living in this sector of virtual worlds, augmented reality and mixed realities? Someone who combines his physical artwork with virtual work to great success is the French artist Patrick Moya. We watched this video about his work:

[iframe]

[/iframe]

In a very different style, here is a beautiful impression of the Second Life art installations of Artistide Despres, filmed and edited as handheld machinima (2012) by Marx Catteneo (aka Marc Cuppens, http://www.marccuppens.nl):

[iframe]

[/iframe]

Cuppens also showed this video about The Cube Project LEA 2012 Second Life.

The Cube Project August 2012, “Over 25 virtual artists have joined the ranks of The Cube Project, curated by Bryn Oh, to create a 20-sim exhibit in just 5 days. What’s the theme? Artists can only use two distinct virtual objects: a black cube, and a white cube.”

Bryn Oh: “We are turning away for a moment from the wonderful range of mesh or photoshopping beautiful textures to work instead on simple minimal compositions in black and white, over 20 regions. The overall idea is to create a massive harmonious environment rather than follow the standard exhibition practice of each artist having a clearly defined separate space to exhibit.”

The Cube Project is a collaborative artwork consisting of virtual artists Bryn Oh, Cajska Carlsson, Charlotte Bartlett, Dancoyote Antonelli, Giovanna Cerise, Haveit Neox, Kicca Igaly, L1Aura Loire, London Junkers, Maya Paris, Misprint Thursday, Nessuno Myoo, Oberon Onmura, PatriciaAnne Daviau, Pol Jarvinen, Rag Randt, Rowan Derryth, Sea Mizin, Secret Rage, Solkide Auer, Remington Aries, Solo Mornington, Tony Resident, Werner Kurosawa and Xineohp Guisse.

A video impression by Marx Catteneo – handheld machinima, August 2012.
Music by the artist Logical Confusion: track “Darklight” from the album Logical 3, downloaded from tribeofnoise.com.

[iframe]

[/iframe]

Virtual worlds are not dead, they just smell funny, Flufee said at the opening of the conference (see previous post). It’s a riff on Frank Zappa, who said “Jazz isn’t dead. It just smells funny.” The same applies to virtual worlds. They are somewhere on the agonizingly slow exit from the trough of disillusionment in the Gartner hype cycle, but they allow us to change the real world as we put layers of digital information on physical reality. They also allow us to change the real world by enabling artists to create new art.

Read also the first part of the MetaMeets report. I also updated my wiki mindmap about this conference.

MetaMeets! Virtuality Meets Reality

Tomorrow I’ll participate in the MetaMeets gathering in ’s-Hertogenbosch, The Netherlands. What we’ll do and talk about:

MetaMeets is a seminar/meeting about virtual worlds, augmented reality and 3D internet; this year’s topic will be The Art of Creation: Virtuality Meets Reality.

Virtual worlds and 3D internet have been developing continuously. Mobile and browser-based worlds have been created. Mesh-format uploads have brought huge progress in content creation through open-source programs like Blender and Google SketchUp.
Machinima creation has grown and improved with special interfaces and innovations in visual possibilities, making films shot in virtual worlds a professional tool for presentation to a mainstream audience.

MetaMeets has chosen this year to shine a light on this versatile digital canvas by taking its participants interactively into the Art of Creation. The programme will begin with a few lectures on the current state of virtual worlds and their new developments. Subsequently, we will have workshops exploring methods of accomplishing each of the key steps in 3D creation: creating a virtual world on your own server, creating 3D content, making (motion) pictures of it, and even printing 3D objects as real-world models.

We will also have an interactive roundtable discussion based on the movie The Singularity Is Near, which was released this summer for download and is available on DVD.

This is a mindmap I prepared. My subject is the virtual escaping into the real – or how Second Life may cater to a niche group of people while the ethos of virtual worlds spreads fast in what we once called the ‘real world’.

[iframe]

Create your own mind maps at MindMeister
[/iframe]

Apps on top of the real world

This seems to be pretty cool, but as you’ll see in the ‘read more’ section, it’s much more than just ‘cool’:

And here is how it works:

It’s built by Stockholm-based 13thlab.com and available as an iOS app.

“Using advanced computer vision, Minecraft Reality maps and tracks the world around you using the camera, and allows you to place Minecraft worlds in reality, and even save them in a specific location for others to look at.

Minecraft Reality is built on our PointCloud SDK. For more information, and examples of what people are placing, visit http://minecraftreality.com.”

Just like the Google ARG Ingress, this is yet another example of the crumbling walls between the digital world and the world formerly known as the real world.

The guys of 13thLab claim: “We think the camera will replace the GPS as the most important sensor to interpret and make sense of the world around you.”

Hat tip to Bruce Sterling on Beyond the Beyond for posting about this.

Read also:
– If the world were your platform, what apps would you build, by Janko Roettgers at GigaOM. He asks the fascinating question: “If your apps aren’t just running on a phone or a tablet anymore, but essentially on top of the real world — what kind of apps do you build?”
– The World Is Not Enough: Google and the Future of Augmented Reality by Alexis C. Madrigal at The Atlantic.
– Minecraft creations meet the real world through augmented reality iOS app by David Meyer on GigaOM.

Microsoft has its own Project Glass

“Microsoft has its own Project Glass cooking in the R&D labs.

It’s an augmented reality glasses/heads-up display that should supply you with various bits of trivia while you are watching a live event, e.g. a baseball game.”

The information is based on a patent application, so don’t expect a Microsoft Glass for Christmas. 
via Diigo http://www.unwiredview.com/2012/11/22/microsoft-has-its-own-project-glass-augmented-reality-glasseswearable-computer-combo/

Patent drawings for Microsoft Glass

Nicholas Carlson at BusinessInsider explains the differences between Microsoft Glass and Google Glass. It seems that Google Glass will be more like a tiny screen somewhere in your field of vision, while the Microsoft project is about overlaying digital information on the physical environment. However, the Microsoft project seems to be more about events, where the user stays more or less in the same spot, while Google Glass is also about users moving around, skydiving etc.

Purists would say that Google is not working on augmented reality (if the information about Google Glass is correct), as it does not really overlay digital information. In my opinion, looked at from a slightly different angle, in both cases we’re (at least in some applications) annotating the physical world.

A social network for things | Beyond The Beyond

“Thingiverse is also introducing a new “Follow” button that will connect you to the things, digital designs, designers, users, tags, categories: all the stuff you care about most. By following a Thing, you’ll get a notification when someone comments on it, makes a copy of it, or remixes it. Some new digital designs inspire a whole family of new Things, and the Follow button helps you keep track of those.  ”

As Bruce Sterling says, it’s almost a social network of things. Now imagine having this affordance in augmented reality: you point your smartphone, tablet or Google Glass at a thing, activate some app, and you get all this information. In the press release, the people at Thingiverse also explain how users have been tagging their uploads with useful descriptors, so you can now follow tags or categories to get updates in a dashboard. We’re talking here about the annotation of our physical reality: bookmarking no longer just the digital world of websites but the objects that surround us.
via Diigo http://www.wired.com/beyond_the_beyond/2012/11/developments-at-makerbot-thingiverse/
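As a sketch of how such tag-based following could work (names and API entirely hypothetical, not Thingiverse's actual implementation):

```python
from collections import defaultdict

class ThingFeed:
    """Toy model of 'follow a tag, get dashboard updates'.

    Users follow tags; when an event happens on a thing carrying a
    followed tag, it lands in each follower's dashboard once.
    """
    def __init__(self):
        self.followers = defaultdict(set)    # tag  -> set of users
        self.dashboards = defaultdict(list)  # user -> list of events

    def follow(self, user, tag):
        self.followers[tag].add(user)

    def publish(self, thing, tags, event):
        notified = set()
        for tag in tags:
            notified |= self.followers[tag]  # dedupe across tags
        for user in notified:
            self.dashboards[user].append((thing, event))

feed = ThingFeed()
feed.follow("alice", "steampunk")
feed.publish("gear-clock", ["steampunk", "clock"], "remixed")
print(feed.dashboards["alice"])  # [('gear-clock', 'remixed')]
```

The augmented-reality version would simply make the physical object itself, recognized through the camera, the key you follow.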

Pilot Your Own Robotic Sub And Explore The Ocean With AcquatiCo | Singularity Hub

Another great story from Singularity Hub. If this Kickstarter project is successful, it will enable us to explore the oceans by just using our laptop or tablet. 

Which in a way reminds me of those cute iPad robots enabling people to move around, see, hear and communicate from whatever distance. So yes indeed, let’s do this in the oceans as well!

“Eduardo Labarca wants to bring the ocean to you. Not through the kind of striking, high-definition imagery that Planet Earth brought, but through an immersive experience where you actually get to navigate the corals, chase the fish, explore the shipwreck yourself. Which is why Labarca created AcquatiCo, a web-based ocean exploration platform. A Kickstarter campaign has been launched for the startup. If successful, it will be the first step in the company’s goal of giving people unprecedented access to the ocean’s treasures using just their computers, tablets or smartphones. I got a chance to talk with the Singularity University graduate and ask him about AcquatiCo, and his vision to “democratize the ocean.””

via Diigo http://singularityhub.com/2012/10/23/pilot-your-own-robotic-sub-and-explore-the-ocean-with-acquatico/

Find out about the future by looking at Defense

In the 1960s, the computer visionary Doug Engelbart designed the NLS – the “oN-Line System” – a revolutionary computer collaboration system implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). The NLS, Wikipedia explains, was the first system to employ the practical use of hypertext links, the mouse, raster-scan video monitors, information organized by relevance, screen windowing, presentation programs, and other modern computing concepts. The project was funded by the U.S. Defense Advanced Research Projects Agency, NASA, and the U.S. Air Force.

Throughout the history of computing we see the crucial role played by the military and the intelligence community (this is just one of the many interesting discussion threads of Howard Rheingold’s course about Think-Know tools). One of these famously funded projects gave us Engelbart’s Mother of All Demos (the mouse! videoconferencing! hyperlinks!):

Maybe it’s a good idea to have a look at what they’re funding now in order to get an idea of the longer-term developments in computing. Projects that are too long-term and risky to be interesting for big corporations or even venture capitalists sometimes get support from those defense-related agencies. However, these days the capital needed for innovative projects is no longer as enormous as it used to be, and we see agencies investing in commercial start-ups not only to stimulate research that otherwise might not have been done, but also to get first-hand information about research the private sector is doing anyway.

One of the most fascinating agencies is DARPA, which has a habit of changing names. Wikipedia: “Its original name was simply Advanced Research Projects Agency (ARPA), but it was renamed to “DARPA” (for Defense) in March 1972, then renamed “ARPA” again in February 1993, and then renamed “DARPA” again in March 1996.”

DARPA of course is not only active regarding information processing. This is what Wikipedia tells us about the more recent history: “During the 1980s, the attention of the Agency was centered on information processing and aircraft-related programs, including the National Aerospace Plane (NASP) or Hypersonic Research Program. The Strategic Computing Program enabled DARPA to exploit advanced processing and networking technologies and to rebuild and strengthen relationships with universities after the Vietnam War. In addition, DARPA began to pursue new concepts for small, lightweight satellites (LIGHTSAT) and directed new programs regarding defense manufacturing, submarine technology, and armor/anti-armor.
On October 28, 2009 the agency broke ground on a new facility in Arlington, Virginia a few miles from the Pentagon.
In fall 2011, DARPA hosted the 100 Year Starship Symposium with the aim of getting the public to start thinking seriously about interstellar travel.”
Interstellar travel really sounds cool, but let me look at that another time. For now, let’s just read how the Information Innovation Office describes itself on the DARPA-site:

I2O aims to ensure U.S. technological superiority in all areas where information can provide a decisive military advantage. This includes the conventional defense mission areas where information has already driven a revolution in military affairs: intelligence, surveillance, reconnaissance, command, control, communications, computing, networking, decision-making, planning, training, mission rehearsal, and operations support.

It also includes emergent information-enabled technologies and application domains such as social science and human, social, cultural, and behavioral modeling; social networking and crowd-based development paradigms; natural language processing, knowledge management, and machine learning and reasoning; medical/bio informatics; and information assurance and cyber-security.

I2O works to ensure U.S. technological superiority in these areas by conceptualizing and executing advanced research and development (R&D) projects to develop and demonstrate interdisciplinary, crosscutting and convergent technologies derived from emerging technological and societal trends that have the potential for game-changing disruptions of the status quo.

The capabilities developed by I2O enable the warfighter to better understand the battlespace and the capabilities, intentions and activities of allies and adversaries; empower the warfighter to discover insightful and effective strategies, tactics and plans; and securely connect the warfighter to the people and resources required for mission success.

Headings on that page are “understand“, “empower” and “connect“.

One of the many fascinating programs is Social Media in Strategic Communication (SMISC). It aims to develop “a new science of social networks built on an emerging technology base. Through the program, DARPA seeks to develop tools to support the efforts of human operators to counter misinformation or deception campaigns with truthful information.”

It’s all there: analyzing narratives, experiments with role-playing games which make heavy use of social media…

In-Q-Tel

Yet another interesting organization is In-Q-Tel. Launched in 1999 as an independent, not-for-profit organization, IQT was created to bridge the gap between the technology needs of the U.S. Intelligence Community (IC) and new advances in commercial technology.

Just looking here at information and communication technologies, the site of this special kind of venture capitalist explains:

Focus areas in the ICT practice include advanced analytic tools, next generation infrastructure and computing platforms, mobile communication and wireless technologies, embedded systems and components, geospatial and visualization tools, and digital identity analytics.

For more concrete information one can simply consult the list of companies in which In-Q-Tel invests (note to self: make a Twitter list which includes these companies to get updates!). To give but two examples:
Streambase Systems, Inc., a leader in high-performance Complex Event Processing (CEP), provides software for rapidly building systems that analyze and act on real-time streaming data for instantaneous decision-making. The World Economic Forum awarded StreamBase the title of 2010 Technology Pioneer.

Cloudera Enterprise is the most cost-effective way to perform large-scale data storage and analysis, and includes the tools, platform, and support necessary to use Hadoop in a production environment. (The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models.)
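Hadoop's "simple programming model" is MapReduce: a mapper emits key-value pairs, the framework shuffles and groups them by key, and a reducer aggregates each group. A miniature word-count illustration in plain Python (the canonical example, sketched here without Hadoop itself):

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit (word, 1) for every word in the input line.
    for word in line.split():
        yield (word.lower(), 1)

def reducer(word, counts):
    # Reduce phase: aggregate all counts emitted for one key.
    return (word, sum(counts))

lines = ["big data big clusters", "data everywhere"]
# Shuffle/sort phase: Hadoop does this across the cluster;
# here a plain sort groups identical keys together.
pairs = sorted(p for line in lines for p in mapper(line))
result = dict(reducer(w, (c for _, c in grp))
              for w, grp in groupby(pairs, key=itemgetter(0)))
print(result)  # {'big': 2, 'clusters': 1, 'data': 2, 'everywhere': 1}
```

On a real cluster the same two small functions run unchanged over terabytes, which is exactly the appeal.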

Read also:
– Pentagon’s Plan X
– Bezos, CIA invest $30M in quantum computing startup
– Big Data and Cyberpunk
– Cloudera Makes Hadoop Real-Time with Impala (SiliconAngle)

3D printing: does the revolution look vintage already?

Nice overview of 3D-printing:

Hat tip to Bruce Sterling on Beyond the Beyond. I liked his comment:

Really makes one anticipate 3d printing in 2022, when all this contemporary stuff looks charmingly crude and tentative. Very “early teens.”

So does our revolution already look vintage? More about all this during the MetaMeets conference (November 30 and December 1, ’s-Hertogenbosch, The Netherlands): “The Art of Creation: Virtuality Meets Reality”. If there really is a Makers revolution going on, how can we support it and profit from it in virtual environments?