A recognized and respected voice in Artificial Intelligence, Laurence Devillers, professor at Paris-Sorbonne, member of CERNA-Allistene, AI researcher at LIMSI-CNRS and author of "Robots and people: myths, fantasies and reality", shares her thoughts on the Artificial Intelligence mission entrusted to Cédric Villani, LREM MP.


Laurence, could you introduce yourself in a few words and tell us about your career path?
My name is Laurence Devillers. I am a professor of Artificial Intelligence at Université Paris-Sorbonne and I pursue my research at LIMSI, a CNRS laboratory specializing in human-machine interaction. My research team focuses on the social and emotional dimensions of interaction with machines. I am also a member of CERNA, the Allistene commission for reflection on the ethics of research in digital science and technology, and the author of a book entitled Robots and people: myths, fantasies and reality, published by Plon in March 2017. My doctoral thesis was on speech decoding using neural-network learning algorithms, the ancestors of deep learning, and my habilitation (HDR) was on affective computing.

A few weeks ago, President Emmanuel Macron entrusted Cédric Villani with a mission on Artificial Intelligence. What do you think of this initiative?
It is urgent to come up with a strategy for AI in France and in Europe with all the actors of this multidisciplinary field: researchers and industrialists in computer science and mathematics, together with experts in neuroscience, psychology, sociology, philosophy, law and so on. Under the previous government, there was an initial mission that led to a report on Artificial Intelligence, which mapped the major laboratories and the forces present and counted approximately 5,000 AI researchers in France. It is important to understand that France is a land of researchers who are internationally recognized in AI.

"The government doesn't put the means on Artificial Intelligence"


What remains of this first France IA report, which was submitted under the presidency of François Hollande?
I took part in the France IA report on the aspects of work and AI, more specifically on the replacement of humans by machines. Even if some jobs will disappear, others will appear. First of all, we must not be afraid of innovation; we must learn to master it. We need a development strategy around what AI can bring to improve the economy and to help people in their work, and therefore to work on this collaboration with machines. It is important to decide which decisions will be left to machines and which will remain with humans. We need to build an agile, multidisciplinary ecosystem to manage AI. The second important subject is ethics. AI is a pharmakon, at once remedy and poison, which brings both great opportunities and great risks. We must make the best use of it by reflecting on the function these objects and tools should have. In Anglo-Saxon countries, many institutes have been set up on the future of AI. There is a great deal of ongoing reflection, whether in Japan, Europe, Canada or the United States, on the ethics of these machines. Asia is investing massively in AI, Russia says you have to be first in AI to be a leader tomorrow, the United Arab Emirates has appointed an AI minister... And what are we doing?

You have a very global vision of Artificial Intelligence. Were you consulted by the person chiefly concerned, Cédric Villani?
Yes, of course, Cédric Villani has conducted numerous consultations. Producing another report is all very well, but there is a huge need for momentum on AI. When will we take initiatives, when will we invest in AI in France? AI has a political dimension, but there are also urgent education and training projects. I am confident that the Cédric Villani mission will bring a constructive and agile vision of AI in France. Research resources and researchers' salaries must be increased in France. Greater porosity between research and industry is also needed to boost our economy! Finally, we have to work out how to collect large databases, as the GAFAMI (Google, Amazon, Facebook, Apple, Microsoft, IBM) do.

 

Have there been any advances? Have you seen any progress since that report was submitted?
Right now, everything is on standby. The research community is waiting for decisions to be taken once the Villani mission report is submitted. In terms of progress, there are nevertheless interesting joint initiatives (Inria, CNRS, the Grandes Écoles, etc.) on data science and on respecting ethical rules, such as DATAIA, which is a multidisciplinary convergence institute. TransAlgo, a new platform also set up at Paris-Saclay, serves to federate a community around data science and machine learning. Coming back to France IA, Nathanaël Ackerman, who was then the right-hand man of Axelle Lemaire (Secretary of State for Digital Affairs from 2014 to 2017, editor's note), is in charge of the HUB IA, whose aim is to create an ecosystem between researchers, large groups and start-ups... DATAIA, TransAlgo and the HUB IA are emerging initiatives. The report will be released in January or February; more will be known then.


When the France IA report was presented, you expressed your wish for a major conference in Paris in 2018 bringing together international researchers in Artificial Intelligence. Have you had any feedback? Will this conference be held?
We are still in discussion to hold a conference on the ethics of AI.

"Ethics is not a hindrance to the emancipation of Artificial Intelligence"

France IA had estimated a budget envelope of 1.5 billion euros to advance Artificial Intelligence. Do you consider this figure reasonable or excessive?
It is in the range of what major governments are putting on the table right now; in Germany, for example, it is more than that. But I am not an economist. I can only see that we are not doing enough on this issue at the moment. No vision has been announced, and one is badly needed for a coherent organization to emerge. There are many separate AI initiatives...

The solution seems to be European if we are to "fight" the American giants such as the GAFAM or the Chinese companies arriving in force on the market. How can France hold its own?
We know how to get things started in France. On the other hand, government and industry need to inject energy. If we are not given the means, at least give us the energy to build and to study! We have always been strong on standards and norms. It should not be said that ethics is an obstacle to the emancipation of Artificial Intelligence.

 

In practical terms, what measures should be taken to keep our researchers in France and ensure that they do not give in to the siren call of the GAFAM?
How did the GAFAM succeed in becoming what they are? By partnering with the major American universities. That is a model we could follow as well. There are many researchers in a great many fields in France. Why not boost the economy by mobilizing these people? We need to invest heavily, and it is also a way of helping the French economy reach the 3% government deficit target. Not through start-ups or large groups alone; we need to create public-private synergy in France with large groups such as Thales or the SNCF. In 1994, I took part, with the CNRS, in projects on chatbots, the conversational agents we now hear so much about with IBM Watson. We had developed a conversational agent that was tested in the main concourse of Saint-Lazare station. It never resulted in a product, even though we were very early. Why? Yet there is a French vision! Macron's vision of Europe is also extremely interesting.

"Forming, forming a public-private alliance, working on ethics and legal laws."

What do you think of the latest statements by Elon Musk and Vladimir Putin on Artificial Intelligence? Do you consider them credible? In terms of ethics, should the general public be concerned, or do you think it is just hype?
Today, personalities such as Elon Musk and Vladimir Putin use AI to make political or marketing announcements. A robot is given nationality, another is made to speak at the UN. These are publicity stunts designed to manipulate opinion and demonstrate power. This is a double-edged sword, and these practices damage the image of AI. Behind all this there is bluffing, but ordinary people will not necessarily be able to distinguish what is important from what is not, what is dangerous from what is not. Hence the need for training! Nor is it a question of stopping research, because that would mean forgetting its benefits in many areas such as health or transport.

How can we act to place ethics at the centre of research on Artificial Intelligence?
There is a national ethics committee, the CCNE, which already exists for bioethics laws. CERNA, of which I am a member, is a committee working on the ethics of research in digital technologies. It strongly urges that a branch of the CCNE be dedicated to AI. There is a strong need to think about ethical rules and about how these objects are used in society. In Japan, there is a lot of talk about robotic sex objects and companion conversational agents. The Gatebox is a kind of holographic doll in feminine form that wakes you up in the morning, sends you text messages during the day to ask whether your work is going well, and turns on the lights to welcome you home in the evening. We have to think about what we want to do with these "intelligent" things that will live alongside us. If the great subject of training and education is not addressed, a huge divide will be created in society.

 

