(‘Philosophy and tech’ is a series of posts in which I briefly discuss philosophical issues I encounter while reading about technology)
Last week Google published ethical principles guiding its AI development and research. Richard Waters of the Financial Times quotes AI professor Stuart Russell of the University of California, Berkeley, who says that Google has to think about the output of its algorithms as a kind of ‘speech act’. What he means is that when people use AI-enabled tools, such as text or image search, the responses generated influence the way they look at the world and ultimately change their behavior and convictions. It’s not ‘mere talking’ but doing things in the real world. The Stanford Encyclopedia of Philosophy has an extensive entry on speech acts. An interesting take on Google’s new AI ethics can be found at the Electronic Frontier Foundation. Also watch professor Russell’s talk at TED2017.
Today I have to give a very brief talk about journalism and artificial intelligence here in Brussels, Belgium. I’m not an AI expert, but at the newsroom we read and talk about AI disruption all the time.
I made a mindmap:
Some conclusions and questions: what are the implicit ethical choices we make while using AI-powered tools? We used to speak of ‘the people formerly known as the audience’, implying that these days the ‘audience’ is an active participant in the news process – will AI help us explore that further, or will it merely simulate interaction and debate? And what about the journalists – will they simply lose their jobs? I think they’ll have to engage even more in teamwork, but this time the teams will consist not only of reporters and multimedia wizards; AI agents will join us as well. The hybrid human/AI teams that are most efficient at coordinating tasks will outperform the rest.
Bruce Sterling is a tremendously inspiring science fiction author, futurist, design thinker and cultural critic. Menno Grootveld and Koert van Mensvoort did a great interview with him on nextnature.net about artificial intelligence, the technological singularity, and all things convergence of humans and machines.
In this interview, Sterling explains in very clear terms how we become prisoners of metaphysical ideas when we believe in ‘thinking machines’ and entities that would become artificial superhumans.
Of course, technologists are seduced by these ideas, but in practice their projects get unbundled into separate products and services.
It’s not that Sterling wants to keep us from being ambitious about computers and algorithms. On the contrary, he explains that by thinking about our technological futures in anthropocentric terms, we actually limit what is possible.
Whether or not you are obsessed with the Singularity, this is a must-read interview.