This week I learned about information philosophy, most notably the work of Luciano Floridi. In this age of data and digitalization, he develops an ontology and an ethics based on reality-as-information. Virtual worlds geeks will appreciate that the professor also refers to Second Life (and other virtual environments), for instance in his book The Fourth Revolution. He writes about “the infosphere”, “inforgs” (information organisms), and being “onlife”.
When I think about Second Life or similar worlds, I consider them a kind of cyberspace, but the interesting thing about the notion of the infosphere is that cyberspace is just a part of it. Even so, virtual worlds are revealing, as they are environments where people actually live. These days the digital and the virtual blend more and more with physical reality, inspiring information philosophers to develop an object-oriented approach on the basis of “information”: humans, organizations, animals, plants and objects can all be studied as information objects with specific functions or methods.
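To make the object-oriented analogy concrete, here is a toy sketch of my own (not Floridi's formalism, and all the names are made up for illustration): very different entities are treated uniformly as “information objects” that expose state and behavior, much as objects in programming do.

```typescript
// Toy model: every denizen of the infosphere is an information object
// that can report on its own state.
interface InfoObject {
  kind: string;            // e.g. "human", "plant", "organization", "object"
  describe(): string;      // a uniform way of querying the object
}

// A human as an "inforg": an information organism that can be onlife.
class Inforg implements InfoObject {
  kind = "human";
  constructor(private name: string, private online: boolean) {}
  describe(): string {
    return `${this.name} is ${this.online ? "onlife" : "offline"}`;
  }
}

// An artifact sits in the same infosphere and answers the same query.
class Sensor implements InfoObject {
  kind = "object";
  constructor(private reading: number) {}
  describe(): string {
    return `sensor reads ${this.reading}`;
  }
}

// Humans and things can be studied alike, through the same interface.
const infosphere: InfoObject[] = [new Inforg("Alice", true), new Sensor(21)];
const report = infosphere.map((o) => o.describe());
```

The point of the sketch is only the uniformity: the philosophical move is to study people, organizations and things through one informational interface, not to claim they are literally software objects.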
I’m now reading The Fourth Revolution and The Routledge Handbook of Philosophy of Information.
Floridi is also a must-follow thinker on Twitter.
Fascinating: IBM trained an algorithm to debate humans. There’s still some way to go, but the results were pretty impressive. I don’t know the details of IBM’s Project Debater, but there is an interesting history of philosophical research into argumentation, which inspired practices such as argument mapping. Like mind maps and concept maps, argument maps can become pretty complicated. I imagine 3D argument maps could be interesting, but so far I haven’t found software enabling 3D or VR argument maps. Maybe I should give it a try in a virtual environment such as Second Life or High Fidelity, but it would be even nicer to build browser-based tools or apps. Just imagine the possibilities of live group sessions using immersive argument mapping.
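Under the hood, an argument map is just a directed graph: claims are nodes, and edges mark whether one claim supports or objects to another. Here is a minimal sketch of my own design (not the API of any existing mapping tool, and the example claims are mine) of the data structure such a browser-based tool might start from:

```typescript
// A claim is a node in the argument map.
interface Claim {
  id: string;
  text: string;
}

// An edge says that one claim supports or objects to another.
type Relation = "supports" | "objects";

interface Edge {
  from: string;        // id of the supporting/objecting claim
  to: string;          // id of the claim being argued about
  relation: Relation;
}

class ArgumentMap {
  private claims = new Map<string, Claim>();
  private edges: Edge[] = [];

  addClaim(id: string, text: string): void {
    this.claims.set(id, { id, text });
  }

  link(from: string, to: string, relation: Relation): void {
    this.edges.push({ from, to, relation });
  }

  // All claims bearing on a given claim, filtered by relation —
  // exactly the query a 2D or 3D renderer would need for layout.
  argumentsOn(to: string, relation: Relation): Claim[] {
    return this.edges
      .filter((e) => e.to === to && e.relation === relation)
      .map((e) => this.claims.get(e.from)!);
  }
}

const map = new ArgumentMap();
map.addClaim("c1", "Machines can debate humans");
map.addClaim("c2", "Project Debater held its own on stage");
map.addClaim("c3", "It still misreads context and nuance");
map.link("c2", "c1", "supports");
map.link("c3", "c1", "objects");
```

A 3D or VR version would add layout and rendering on top, but the graph itself stays this simple, which is why the idea feels so buildable.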
(‘Philosophy and tech’ is a series of posts in which I briefly discuss philosophical issues I encounter while reading about technology)
Last week Google published ethical principles guiding its AI development and research. Richard Waters of the Financial Times quotes AI professor Stuart Russell of the University of California, Berkeley, who says that Google has to think about the output of its algorithms as a kind of ‘speech act’. What he means is that when people use Google’s AI-enabled tools, for instance to search texts or images, the responses generated influence the way people look at the world and ultimately change their behavior and convictions. It’s not ‘mere talking’ but doing stuff in the real world. The Stanford Encyclopedia of Philosophy has a lot on speech acts. An interesting take on Google’s new AI ethics can be found at the Electronic Frontier Foundation. Also watch professor Russell’s talk at TED2017.
Which philosophers are particularly relevant when studying and using “new” technologies? Here’s my list based on my readings these last few weeks.
Rosi Braidotti, Metamorphoses: Towards a Materialist Theory of Becoming
Andy Clark and David Chalmers, authors of The Extended Mind. Andy Clark also wrote Natural-Born Cyborgs.
Mark Coeckelbergh, author of New Romantic Cyborgs.
Gilles Deleuze, Felix Guattari, A Thousand Plateaus
Michel Foucault, Surveiller et punir (Discipline and Punish) and, regarding the Panopticon, Jeremy Bentham
Donna Haraway, writer of A Cyborg Manifesto.
John Searle, on the Chinese Room thought experiment (and commentators on his ideas). This is one of the topics in a Philosophy of Mind course I’m reading and watching.
There is quite a lot here about cyborgs and ‘monsters’ upsetting the classical oppositions human-machine, man-woman, human-animal, real-unreal. It’s a very incomplete and somewhat arbitrary list, but one based on what is happening in technology, society and culture.