Will Artificial Intelligence and ethics ever manage to become one, or at least learn to live together? Homophobic, racist, sexist: these are the words critics now use to describe AI.

Artificial Intelligence and ethics are back in the spotlight since last week and the publication of a research project by Michal Kosinski and Yilun Wang, two Stanford researchers, entitled "Deep neural networks are more accurate than humans at detecting sexual orientation from facial images". It is an understatement to say that this project has been the subject of much debate. The most eminent AI specialists and philosophers have weighed in, passed judgement and, above all, proposed limits to prevent longer-term abuses. In the columns of The Guardian, the writer Matthew Todd, who legitimately wondered who would want to use an AI to determine whether they were gay or straight, pointed out that "there are still 72 countries in the world where sexual activity with a person of the same sex is illegal, 8 where it is punishable by death"... Figures that speak for themselves!

Artificial Intelligence prefers men...

Without going quite that far, Artificial Intelligence can also tell which face is hiding behind a fake beard, a fake moustache, fake glasses or even a hood, thanks to an AI jointly developed by British researchers at Cambridge University and Indian researchers at the National Institute of Technology in Warangal and the Indian Institute of Science. Deep learning has made it possible to push this Artificial Intelligence further, and, let us not hide it, it undermines the anonymity that some individuals would like to preserve in a world where the border between public and private life is increasingly thin. On the Cambridge side, Amarjot Singh's laconic response: "There are more benefits than nuisances to be gained from this technology."

A year ago, other scientists, Russian this time, had set out to organise a beauty contest with Artificial Intelligence, also trained by deep learning, as the sole jury. Six hundred thousand men and women sent in a photo. Forty-four winners, across the various categories, were selected by the bot: thirty-eight were white, six were Asian... This week in France, the magazine Sciences et Avenir organised a debate on sexism in Artificial Intelligence. Asking the question almost answers it. Laurence Devillers, a researcher at the CNRS in France, expressed concern, without dramatizing, about a research world that is too masculine: "This can have several implications: we can find ourselves in situations where engineers and researchers, indeed without even thinking about it, find more masculine data."

Artificial Intelligence tests its limits

By playing with our nerves and our values, Artificial Intelligence has tested its own limits, and a few personalities from the AI world have decided to step in and make their voices heard. In this face-off between Artificial Intelligence and ethics, the problem lies not so much with the machine as with man. What all these Artificial Intelligences, however different they may be, have in common is man. On the French radio station France Culture, Jean-Gabriel Ganascia, Chairman of the Ethics Committee of the CNRS, explains: "It is precisely against this idea that we must fight: the idea that a machine, since it operates in a systematic way, is neutral. Machines are made by men. Even if the learning is automatic, there are always men behind it. There is something implicit in machine programming, linked both to the descriptors we take into consideration and to the examples we give the machines."

In The New York Times, Kate Crawford, a senior researcher at Microsoft, said much the same thing a year ago when she warned: "We must be vigilant about how we design and train these machine learning systems, or we will see forms of bias become embedded in the Artificial Intelligence of the future. [...] We risk building an artificial intelligence that reflects a narrow and privileged vision of society, with its old prejudices and familiar stereotypes."

Has Artificial Intelligence exceeded the limits of ethics in facial recognition?

No doubt, that day has arrived. Whatever the motivations of Stanford's two researchers, how can anyone want to sort, or even imagine sorting, individuals according to their sexuality? Does the world boil down to a heterosexual/homosexual divide? And what about the 35,000 photos taken from American dating sites to carry out this experiment, compared with the 200 million used by Apple to train the AI behind facial recognition on the new iPhone X? Are freedom and privacy more important than security at all costs? Is it any wonder that an AI finds white skin more beautiful when the sample the researchers used to train it is 75% European and 1% African? These are more than legitimate questions, but ones that Artificial Intelligence will have a hard time answering without the help of its creators.
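To make the role of the training sample concrete, here is a minimal, purely illustrative sketch in Python. It is not the researchers' actual pipeline; the group labels and counts are hypothetical placeholders loosely based on the figures quoted above, and it only shows how auditing the composition of a training set exposes the skew that a model will later reproduce.

```python
# Hypothetical sketch: audit the demographic make-up of a training set.
# The labels and counts below are placeholders, not the real datasets
# mentioned in the article.
from collections import Counter

# A fictional 600,000-photo sample, roughly 75% European, 1% African.
training_set = (
    ["european"] * 450_000   # ~75%
    + ["asian"] * 144_000    # ~24%
    + ["african"] * 6_000    # ~1%
)

composition = Counter(training_set)
for group, count in composition.most_common():
    share = count / len(training_set)
    print(f"{group:>9}: {count:>7} photos ({share:.1%})")

# With only ~6,000 examples out of 600,000 for one group, a classifier has
# almost no evidence about it; its verdicts end up mirroring the skew of
# the sample rather than any property of the people being judged.
```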

