A new, possibly last edition of the famous Cooperation course by Howard Rheingold, the man who gave us the term “virtual community”, launches today. It’s not too late to participate, but be warned: this is not a Massive Open Online Course, but a Small, Intensive Online Course, which is reasonably priced.
What is it about? I quote Howard:
“a six week course using asynchronous forums, blogs, wikis, mindmaps, social bookmarks, synchronous audio, video, chat, and Twitter to introduce the fundamentals of an interdisciplinary study of cooperation: social dilemmas, institutions for collective action, the commons, evolution of cooperation, technologies of cooperation, and cooperative arrangements in biology from cells to ecosystems.”
We’ll have a small group of participants who are involved in fascinating projects and learning activities. For more information click here.
I had to explain to journalism students how to use social media (I only had one hour!). Call me old-fashioned, but I still believe the first stage is organizing your thoughts and listening. After that, participate actively in conversations, but don’t just broadcast your messages. So the sequence could be like this:
- Organize your thoughts using mindmapping. There are lots of tools; I like to use MindMeister. The added benefit is that, at a later stage, you can turn your mindmap into a collaborative map and ask others to help you. There are free alternatives for online mindmapping, but not all of them allow for collaborative mindmapping.
- Head over to social media such as Twitter and Facebook to listen. Organize relevant people in Twitter lists, search for hashtags. Other interesting places for in-depth discussions are Reddit and Quora, and for each subject you’ll find specialized forums.
- Use a good dashboard such as Tweetdeck or Hootsuite to keep stuff organized.
- I still use RSS feeds and RSS readers; in my case that’s Feedly.
- Make your own procedure for hunting and gathering important content. You can store posts in Feedly and in social bookmarking services such as Diigo. I use Diigo a lot; it makes it easy to use tags and descriptions and to work in groups.
- Have your own blog, but don’t underestimate the technical hassle. An easy solution might be Tumblr or Medium. Or you could go for a fully hosted WordPress or Drupal solution.
- Of course you also need social media to talk about your blog posts and to discuss with others. Don’t hesitate to give others credit for their posts and contributions, and engage in real conversations, not in thin excuses to promote yourself.
- This is where the circle closes: you return to the social media to tell people about your post and to reconnect. You can use the feedback to develop your mindmap even further, and then you can publish the mindmap as a collaborative document where others can add their own thoughts. Maybe this will inspire you for a new post and a new cycle.
- Chances are that you can invite people to form a small community on Slack, where you can work together – exchange bookmarks, organize channels for the different aspects of the subject the community is interested in. A videoconferencing tool such as Zoom enables you to engage with that community in a weekly or monthly meeting. Or you can invite experts for short presentations and experiment with realtime collaborative mindmapping.
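The hunting-and-gathering step above – monitoring feeds and keeping only what matches your well-defined subject – can be sketched in a few lines of code. This is a minimal Python illustration using only the standard library; the feed content and keywords are invented for the example (a real reader like Feedly would fetch live RSS feeds).

```python
import xml.etree.ElementTree as ET

# A tiny hypothetical RSS feed, standing in for what a feed reader would fetch.
RSS = """<rss version="2.0"><channel>
  <title>Example journalism feed</title>
  <item><title>AI in the newsroom</title><link>https://example.com/ai</link></item>
  <item><title>Local election results</title><link>https://example.com/vote</link></item>
  <item><title>AI ethics panel</title><link>https://example.com/ethics</link></item>
</channel></rss>"""

def filter_feed(xml_text, keywords):
    """Return (title, link) pairs whose title mentions any of the keywords."""
    root = ET.fromstring(xml_text)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        link = item.findtext("link", "")
        if any(k.lower() in title.lower() for k in keywords):
            hits.append((title, link))
    return hits

# Focus on one well-defined subject, as recommended above.
print(filter_feed(RSS, ["AI"]))
```

The point is the discipline, not the tool: a tight keyword list acts like the "well-defined subject" strategy, whatever reader or bookmarking service you plug it into.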
Social media in this classical sense costs time and effort (which I myself lack for the moment). A lot can be said about strategy – I recommended that the students focus on well-defined subjects. Specialized subjects and real-time communication also help to avoid the kind of brutal and rude social media fights one witnesses constantly these days.
Today I have to give a very brief talk about journalism and artificial intelligence here in Brussels, Belgium. I’m not an AI expert, but at the newsroom we read and talk about the AI disruption all the time.
I made a mindmap:
Some conclusions and questions: what are the implicit ethical choices we make while using AI-powered tools? We used to speak about “the people formerly known as the audience”, implying that these days the “audience” is an active participant in the news process – will AI help us explore that further, or will it just simulate interaction and debate? What about the journalists – will they simply lose their jobs? I think they’ll have to engage even more in teamwork, but this time the teams will consist not only of reporters and multimedia wizards; AI agents will join us too. The hybrid human/AI teams that are most efficient at coordinating tasks will outperform the rest.
What does philosophy tell us about virtual reality and virtual worlds? I’d like to start with some people who do not belong to the typical college overview of classical philosophers, people who started thinking about the augmentation of human intellect and human emotions, such as Vannevar Bush (As We May Think, 1945), J.C.R. Licklider (Man-Computer Symbiosis, 1960), Douglas Engelbart (Augmenting Human Intellect: A Conceptual Framework, 1962), and Theodor H. Nelson (Computer Lib / Dream Machines, 1970-1974) – all authors and texts which can be found in The New Media Reader (MIT Press).
What they have in common is the awareness that the world is becoming a complex, rapidly evolving and dangerous place. In order to keep up with the challenges, we need to augment our intellectual capacities – not only our rational reasoning skills but also our capacity for compassion and empathy. Computers in their many forms are a crucial part of that augmentation.
The texts in The New Media Reader are often decades old. They were chosen not only because the authors were able to predict the path of technological change, but also because they show us that those authors had ambitions and visions which have not been realized yet. In other words, they still inspire us to create new things which may be beneficial for the planet. I’m convinced that augmented and virtual reality offer possibilities in the context of augmenting our faculties – for instance by enabling us to build 3D information structures.
There are many more authors and texts in this hugely inspiring book, among them also more “recognized” philosophers (at least in continental Europe) such as Gilles Deleuze and Félix Guattari (A Thousand Plateaus, 1980) and Donna Haraway (A Cyborg Manifesto: Science, Technology and Socialist-Feminism in the late Twentieth Century, 1985).
I participate in a group that reads those texts and tries to develop practices and even tools in this context. The group, Metacaugs, lives primarily on Slack; if you’re interested, have a look at our Orientation Guidebook.
At the Art Gallery of Ontario, digital artist Alex Mayhew makes this very nice use of augmented reality: ReBlink.
Finally, the virtual reality platform Sansar is in open beta. I think Linden Lab, owner of Second Life, did a nice job. Will social VR reach more people now, or will it just take people and attention away from Second Life? Or will we have to wait for Facebook Spaces to succeed in order to pull Sansar along (or would Sansar be crushed by Facebook)? We just don’t know. It could very well be that for the vast majority of human beings, social VR is a solution for a problem that does not exist. Nevertheless, I love it. MixedRealities has a little presence in Sansar, called Philosopher’s Corner. Here you see an image of the construction works:
I’d love to do something similar in High Fidelity, yet another beautiful VR platform, but give me some time, as it seems far more challenging. A fascinating world it is, though – here you see a place called “Playa”:
It’s the book Neuromancer by William Gibson that really made the word “cyberspace” popular. He describes it thus:
A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.
That at least was the quote that came to my mind when I read on Singularity Hub about a new way to represent information infrastructure – in the form of a virtual city. The buildings have different shapes, heights and colors, and an operator, immersed in this city, can see in a very intuitive way whether something bad is happening. The Colorado-based startup ProtectWise built the tool.
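The core idea – mapping abstract network metrics onto visual properties of buildings – can be sketched very simply. This is a toy Python illustration of the metaphor, not ProtectWise’s actual system; the metrics, thresholds and server names are all invented for the example.

```python
# Toy sketch of the "network as virtual city" idea: each server becomes a
# building whose height tracks traffic volume and whose color flags anomalies.
# Field names and thresholds are hypothetical, chosen only for illustration.

def server_to_building(name, requests_per_min, error_rate):
    """Map one server's metrics to a building description."""
    height = max(1, requests_per_min // 100)  # one "floor" per 100 req/min
    if error_rate > 0.05:
        color = "red"     # visibly alarming: something bad is happening
    elif error_rate > 0.01:
        color = "orange"  # worth a look
    else:
        color = "gray"    # unremarkable background skyline
    return {"name": name, "height": height, "color": color}

city = [
    server_to_building("web-1", 1200, 0.002),
    server_to_building("db-1", 300, 0.08),
]
print(city)
```

An operator scanning this "skyline" doesn’t read numbers at all – a short red building next to tall gray ones stands out immediately, which is exactly the intuitive, at-a-glance quality the article describes.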
In January 2018 William Gibson will publish a new book, Agency. If you can’t wait to read a great science fiction book, Gibson is very positive about Cory Doctorow’s newest book, Walkaway.
I attended a lecture by professor Pattie Maes of MIT Media Lab. She founded and directs the Media Lab’s Fluid Interfaces research group. Some of her main talking points:
– The next phase in computing is about intelligence augmentation, by sensors, Augmented Reality and Artificial Intelligence.
– AR will “edit” our world in a smart way. For instance, it will nudge us away from sugar if that is what we want. It will predict our behavior so that we can rectify it.
– One of the systems she discussed remembers who you shook hands with.
– She sees a great future for (mental and physical) health applications of VR and AR. We’ll have tag-along therapists.
– Pattie Maes is a big believer in glasses and lenses for a later phase of Augmented Reality. Apple will probably release a smartphone specially equipped for AR, but eventually we’ll wear devices which keep our hands free.
– People tend to be too optimistic about what exists now; in reality there is still a lot of room for improvement. Accept the future: we already carry digital augmentation with us, but it can be made much better and more natural.
– Many concerns regarding smart glasses and AR can be solved with the right design choices. Which are those concerns? Privacy erosion, dependency, lack of understanding, lack of control, unequal access.
Will we be overtaken by our robotic AI overlords? Pattie Maes says it was an error to talk about Artificial Intelligence; it would be more correct to talk about Artificial Pattern Recognition. This pattern recognition works increasingly well, but only for very specific activities. This so-called AI cannot broaden its scope, generalize, or be inspired by very different domains.
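Maes’s point about narrowness can be made concrete with a tiny example. Below is a minimal nearest-centroid “pattern recognizer” in pure Python, written for this post as an illustration (the two-dimensional data points and labels are invented). It reliably separates the two clusters it was shown – and that is all it can ever do: it has no way to reason about anything outside the patterns it was trained on.

```python
# A minimal nearest-centroid pattern recognizer. It distinguishes the two
# clusters it was trained on, but cannot broaden its scope or generalize to
# different domains -- the narrowness Maes describes. Data are invented.

def centroid(points):
    """Average of a list of equal-length numeric tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(labeled):
    """labeled: {label: [points]} -> {label: centroid of those points}"""
    return {label: centroid(pts) for label, pts in labeled.items()}

def classify(model, point):
    """Return the label whose centroid is nearest to the point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], point))

model = train({
    "cat": [(1.0, 1.0), (1.2, 0.8)],
    "dog": [(5.0, 5.0), (4.8, 5.2)],
})
print(classify(model, (1.1, 0.9)))  # a point close to the "cat" cluster
```

Note that any input, however unrelated to cats or dogs, still gets forced into one of the two known labels – a crude version of why “Artificial Pattern Recognition” is the more honest name.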
These past few weeks were remarkable: Facebook people talking about full Augmented Reality glasses, and Elon Musk about his Neuralink company, which wants to develop implantable brain–computer interfaces (BCIs). The goal is not just to treat brain diseases but to enhance human beings so that they can compete with AI-powered robots. Even the direct transmission of thoughts – rather than speaking or typing – would become possible. More about this can be read in the extensive article on Wait But Why.
It somehow makes sense. If you live in an augmented reality (virtual reality being just an option of the full AR glasses), and you want constant interaction with the Machines (your virtual assistant, all the electronic affordances you can imagine), it would be very convenient to be able to do so just by thinking. Once you can do that, why not transfer thoughts from one person to another?
Once we get near-perfect telepresence – summoning people to “be” right here next to me (as a kind of hologram), able to look around as if they were actually here (which they would be, in a sense) – we get used to the Other in a spectral form, so why not dream about beating death itself? Longevity is yet another ambition of Silicon Valley, with Alphabet and others investing heavily in the struggle against disease and decay.
But wait: there are about 7.5 billion people on this planet. If we eradicate diseases, avoid using doomsday weapons and lengthen life expectancy dramatically, the population will grow dramatically, which could become very uncomfortable – so it really makes sense to explore space and establish colonies in space or on neighboring worlds such as Mars.
Politics and ethics
We could easily throw in lots of other new technologies – everything related to smart cities, food production in extreme environments, identity management (blockchain…) and so on. So let’s not be surprised that some of the most passionate debates about politics, philosophy and ethics emerge from this constellation of disruptions. A few examples: transhumanists split into a left wing and a right wing, and researchers want to expand human rights to protect us from the abuse of neurotechnology. These debates have not yet gone mainstream, but eventually they will. One can only hope that an informed debate will be possible – even though the current state of political discourse makes me feel pessimistic.
This is a fascinating presentation by Michael Abrash, Chief Scientist at Oculus/Facebook, during the F8 conference. It’s about nothing less than augmenting the capabilities of Homo sapiens. He advocates full AR glasses, and boy, they go far beyond Pokémon Go on your smartphone. Abrash refers to J.C.R. Licklider’s famous paper Man-Computer Symbiosis (1960) to underline the importance of the new developments.
The full AR glasses will give us better vision and hearing, will make us more intelligent, productive and connected. Yet they will be stylish, power efficient and socially acceptable. That will be necessary as they will be a constant part of our lives.
The glasses will know about our surroundings, history and needs. They will blend the physical and the virtual world according to our needs and desires.
Unfortunately, they do not yet exist. It will take breakthroughs in materials science, perceptual science, graphics and AI. So give it five or ten years, maybe longer – but imperfect versions will be available sooner and will then evolve, just as happened with the personal computer.
It implies things like new brain-computer interfaces allowing us to think our instructions to our tiny but powerful artificial assistant. What Abrash did not say is that there will be a divide between those owning and using the glasses efficiently and those who don’t have the glasses or don’t use them in a productive way.