
Laurence Devillers, robot ethics

Sony's Aibo robot dog, powered by artificial intelligence. Larry French / AFP

Are we doomed to give up our freedoms in the face of the benevolent manipulation of our own data? Judging by the short work of fiction that opens Laurence Devillers's recent essay "Emotional" Robots (Éditions de l'Observatoire, 272 pages, 20 euros), there is no doubt about it. The author describes the plight of a wine lover confronted, in 2025, with the various artificial intelligence (AI) systems that surround him, all programmed to preserve his well-being and safety. Between his car, his fridge, his virtual assistant and his watch connected to his insurer's platform, the man must resign himself to giving up one last drink, caught in the tight mesh of a network of attention and benevolence that he would dearly like to disconnect.

Make no mistake: the author is no technophobe. A researcher at the Computer Science Laboratory for Mechanics and Engineering Sciences (LIMSI-CNRS) and professor of applied computer science at Sorbonne University, Laurence Devillers has been exploring interactions between humans and machines for more than twenty years, and she has no doubt that social robots and other virtual assistants are "useful to society, especially to support, in some cases, people with autism or Alzheimer's disease."

Read also: The humanoid robot, a privileged partner of autistic people

But these convictions do not stop the scientist from worrying. Admittedly, for the moment, exchanges between humans and machines remain limited. "It is difficult for software to recognize human expressions, because emotions are often mixed and vary according to context and culture," she says.

But while AI systems still struggle to decode the subtlety of our moods, they learn quickly, particularly with the progress of machine learning based on neural networks (deep learning), combined with the massive collection of our personal data. "The manufacturers' objective is to train them to reproduce our rituals of social interaction in order to win our trust and adapt their reactions," warns the researcher, who is alarmed by the lack of transparency of these devices: "How far should we accept that machines know us, and above all that they use our emotions to influence us?"

Imitating human feelings

As a child, Laurence Devillers wanted to "work on the brain and the construction of thought." The daughter of an engineer and a biology researcher, she eventually opted for computer science, but without giving up her youthful aspirations. In 2000, she turned towards affective computing, a field of research theorized shortly before by the American Rosalind Picard, at the crossroads of psychology, computer science and cognitive science, which brings together three fields of action: the recognition of human affects, reasoning based on this information, and the expression of affects by the machine in return.
