Towards a global code of ethics for artificial intelligence research
There have been spectacular advances in the field of artificial intelligence (AI) in recent years, leading to inventions we had never thought possible. Computers and robots now have the capacity to learn how to improve their own work, and even to make decisions – through algorithms, of course, and without any consciousness of their own. All the same, we must not fail to ask some questions. Can a machine think? What is an AI capable of at this stage of its evolution? To what degree is it autonomous? Where does that leave human decision-making?
More than ushering in a Fourth Industrial Revolution, AI is provoking a cultural revolution. It is undeniably destined to transform our future, but we do not yet know exactly how. This is why it inspires both fascination and fear.
In this issue, the Courier presents its investigation to the reader, elaborating on several aspects of this cutting-edge technology at the frontiers of computer science, engineering and philosophy. It sets the record straight on a number of points along the way. Because, let’s be clear – as things stand, AI cannot think. And we are very far from being able to download all the components of a human being into a computer! A robot obeys a set of routines that allows it to interact with us humans, but outside the very precise framework within which it is supposed to operate, it cannot forge a genuine social relationship.
Even so, some of AI’s applications are already questionable – data collection that intrudes on privacy, facial recognition algorithms that are supposed to identify hostile behaviour or are imbued with racial prejudice, military drones and autonomous lethal weapons, etc. The ethical problems that AI raises – and will undoubtedly continue to raise tomorrow, with greater gravity – are numerous.
While research is moving full speed ahead on the technical side of AI, not much headway has been made on the ethical front. Though many researchers have expressed concern about this, and some countries are starting to give it serious thought, there is still no global legal framework to guide the ethics of future research.
“It is our responsibility to lead a universal and enlightened debate in order to enter this new era with our eyes wide open, without sacrificing our values, and to make it possible to establish a common global foundation of ethical principles,” UNESCO Director-General Audrey Azoulay says of the Organization’s role, in this issue of the Courier.
An international regulatory instrument is essential for the responsible development of AI, a task that UNESCO is in the process of undertaking. The Courier lends this initiative its support, by exploring different avenues of thought on the subject.
Artificial Intelligence: Between myth and reality | Jean-Gabriel Ganascia
A bionic hand that sees | Chen Xiaorong
Of robots and humans | Vanessa Evers
Chef Giuseppe heralds a new culinary era | Beatriz Juez
Miguel Benasayag: Humans, not machines, create meaning | Interview by Régis Meyran
Yoshua Bengio: Countering the monopolization of research | Interview by Jasmina Šopova
Moustapha Cissé: Democratizing AI in Africa | Interview by Katerina Markelova
Yang Qiang: The Fourth Revolution | Interview by Wang Chao
The threat of killer robots | Vasily Sychev
Working for, not against, humanity | Tee Wee Ang and Dafna Feinholz
Marc-Antoine Dilhac: The ethical risks of AI | Interview by Régis Meyran
Karl Schroeder: Is it really all for the best? | Interview by Marie Christine Pinault Desmoulins
Learning to live in the time of AI | Leslie Loble
Audrey Azoulay: Making the most of artificial intelligence | Interview by Jasmina Šopova