The education theorists, practitioners and technologists George Siemens and Stephen Downes reunited for the course E-Learning 3.0. Stephen was in a hotel room in Toronto and George somewhere in Australia, but the wonders of YouTube brought them together (after a search for the light switches).
In my earlier posts I referred to the very first MOOC they facilitated in 2008 (the very first MOOC ever), Connectivism & Connected Knowledge (CCK08). It attracted about 2,000 participants; these days it seems harder to reach that many people. Possible reasons are the marketing power of Coursera, edX and similar big platforms, and the fact that big social media (Facebook) made blogging and RSS feeds seem less relevant.
What else changed? In the video discussion George Siemens mentions Artificial Intelligence (AI). If machine learning can learn everything humans can learn, why would we still learn? One part of the answer is that, as humans, we cannot not learn. But what is uniquely human? Is it, as he says, compassion and kindness? The ‘beingness’ of humans?
Stephen is not convinced: machine learning could develop ethics too. But maybe the way we experience the world as biological organisms is different from the way an AI can be aware. So humans could be the voice in the AI’s mind telling it that there are more ways to look at the world.
If humans cannot not learn, maybe we should think about teaching. Learning at school can be a frustrating experience, and maybe what we require students to learn is not suited to the age of AI. Stephen points out that the learner’s capacity to make decisions and choices will become even more important. That was obvious already in 2008, when the learners got an avalanche of learning materials to digest during CCK08 and were told, even then, to pick and choose. Another important aspect, which universities do not measure, is the ability to contemplate our place in the universe and in the community.
Also in the video: an interesting conversation about fake news and blockchain. Attention: the real conversation starts after about 8 minutes.
This week I learned about information philosophy, most notably the work of Luciano Floridi. In this day and age of data and digitalization, he develops an ontology and ethics based on reality-as-information. Virtual worlds geeks will appreciate how the professor also refers to Second Life (and other virtual environments), for instance in his book The Fourth Revolution. He writes about “the infosphere”, “inforgs or information organisms”, being “onlife”.
When I think about Second Life or similar worlds, I consider them a kind of cyberspace, but the interesting thing about the notion of the infosphere is that cyberspace is just a part of it. Even so, virtual worlds are revealing, as they are environments where people actually live. These days the digital and virtual blend more and more with physical reality, inspiring information philosophers to develop an object-oriented approach based on “information”: humans, organizations, animals, plants and objects can all be studied as information objects with specific functions or methods.
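To make that object-oriented intuition concrete, here is a toy sketch in Python. This is my own illustration, not Floridi’s formalism: the class names (`InfoObject`, `Inforg`) and the idea of an interaction as an information flow are my assumptions, loosely inspired by his vocabulary of inforgs living onlife in the infosphere.

```python
# A toy sketch (my own illustration, not Floridi's formalism) of the
# object-oriented view: humans, organizations and artifacts modeled
# as information objects with state and methods.
from dataclasses import dataclass, field

@dataclass
class InfoObject:
    """A generic information object in the infosphere."""
    name: str
    state: dict = field(default_factory=dict)

    def interact(self, other: "InfoObject", message: str) -> str:
        # An interaction is itself an information flow between objects.
        return f"{self.name} -> {other.name}: {message}"

@dataclass
class Inforg(InfoObject):
    """An 'inforg': an information organism living onlife,
    possibly via avatars in virtual worlds."""
    avatars: list = field(default_factory=list)

alice = Inforg(name="Alice", avatars=["alice_in_secondlife"])
library = InfoObject(name="City Library")
print(alice.interact(library, "query: philosophy of information"))
```

The point of the sketch is only that, once everything is described informationally, a person with a Second Life avatar and a library can be treated with the same vocabulary of objects, state and interactions.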
I’m reading now The Fourth Revolution and The Routledge Handbook of Philosophy of Information.
Floridi is also a must-follow thinker on Twitter.
Fascinating: IBM trained an algorithm to debate humans. There’s still some way to go, but the results were pretty impressive. I don’t know much about IBM’s Project Debater, but there is an interesting history of philosophical research into argumentation, which inspired practices such as argument mapping. Like mind maps and concept maps, argument maps can become pretty complicated. I could imagine 3D argument maps being interesting, but so far I have not found software enabling 3D or VR argument maps. Maybe I should give it a try in virtual environments such as Second Life or High Fidelity, but it would be even nicer to build browser-based tools or apps. Just imagine the possibilities of live group sessions using immersive argument mapping.
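Underneath any such tool, an argument map is just a tree (or graph) of claims linked by “supports” and “attacks” relations. Here is a minimal sketch of that data structure in Python; it is my own toy model, not the API of any existing argument-mapping tool, and the example claims echo the Siemens–Downes discussion above. A structure like this could later be rendered in 2D, 3D or VR.

```python
# A minimal sketch (my own toy model, not an existing tool's API) of an
# argument map: claims linked by "supports" (+) and "attacks" (-) edges.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    supports: list = field(default_factory=list)  # claims backing this one
    attacks: list = field(default_factory=list)   # claims objecting to it

    def add_support(self, text: str) -> "Claim":
        node = Claim(text)
        self.supports.append(node)
        return node

    def add_attack(self, text: str) -> "Claim":
        node = Claim(text)
        self.attacks.append(node)
        return node

    def outline(self, depth: int = 0, mark: str = "") -> list:
        """Flatten the map into indented lines for a quick text view."""
        lines = ["  " * depth + mark + self.text]
        for c in self.supports:
            lines += c.outline(depth + 1, "+ ")
        for c in self.attacks:
            lines += c.outline(depth + 1, "- ")
        return lines

root = Claim("AI can develop ethics")
root.add_support("Ethics can be framed as learnable rules")
root.add_attack("Biological experience may be irreducible")
print("\n".join(root.outline()))
```

An immersive version would essentially walk the same tree, placing each claim as an object in space instead of an indented line of text.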
(‘Philosophy and tech’ is a series of posts in which I discuss very briefly philosophical issues I encounter while reading about technology.)
Last week Google published ethical principles guiding its AI development and research. Richard Waters of the Financial Times quotes Stuart Russell, AI professor at the University of California, Berkeley, who says that Google has to think of the output of its algorithms as a kind of ‘speech act’. What he means is that when people use its AI-enabled tools, such as searching texts or images, the responses generated influence the way people look at the world and ultimately change their behavior and convictions. It’s not ‘mere talking’ but doing stuff in the real world. The Stanford Encyclopedia of Philosophy has a lot about speech acts. An interesting take on Google’s new AI ethics can be found at the Electronic Frontier Foundation. Also watch professor Russell’s talk at TED2017.
Which philosophers are particularly relevant when studying and using “new” technologies? Here’s my list based on my readings these last few weeks.
Rosi Braidotti, Metamorphoses: Towards a Materialist Theory of Becoming
Andy Clark and David Chalmers, authors of The Extended Mind. Andy Clark also wrote Natural-Born Cyborgs.
Mark Coeckelbergh, author of New Romantic Cyborgs.
Gilles Deleuze, Felix Guattari, A Thousand Plateaus
Michel Foucault, Surveiller et punir (Discipline and Punish) and, regarding the Panopticon, Jeremy Bentham
Donna Haraway, writer of A Cyborg Manifesto.
John Searle, on the Chinese Room thought experiment (and commentators on his ideas). This is one of the topics in a Philosophy of Mind course I’m following.
Quite a lot of material about cyborgs and ‘monsters’ upsetting the classical oppositions human-machine, man-woman, human-animal, real-unreal. It’s a very incomplete and arbitrary list, but one based on what is happening in technology, society and culture.