
Artificial intelligence: staggering discussion with the new OpenAI robot

  • Post category: Economy News

Because artificial intelligences become what humans make of them, deviations are always possible.

This is how, in 2016, a conversational bot developed by Microsoft found itself within a few hours praising Hitler and posting sexist, racist and anti-Semitic messages. “It was let loose, unsupervised and uneducated,” points out Patrick Watrin. “It was never told that what it had learned was wrong or should not be repeated. It’s education,” he repeats.

If it is given examples of hate speech, it will learn hate.

Because to train a neural network, you show it examples of what it is supposed to reproduce. Exactly like a child. “If you take a child who does not originally have hate inscribed in his genetic heritage, and you place him either in a family that advocates values of altruism or in a family that advocates values of hate, you will get two different children,” sums up the scientist. In other words, if we give an artificial intelligence examples of hate speech, it will learn hate.
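The point can be sketched with a deliberately tiny toy classifier (a hypothetical illustration, nothing like ChatGPT’s actual training): the model has no values of its own and simply reproduces whatever its training examples carry.

```python
from collections import Counter

# Toy illustration only: a trivial text classifier that learns
# exclusively from the labelled examples it is shown.
def train(examples):
    """examples: list of (text, label) pairs.
    Count how often each word co-occurs with each label."""
    counts = {}
    for text, label in examples:
        for word in text.lower().split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def predict(counts, text):
    """Return the label most associated with the words in `text`."""
    score = Counter()
    for word in text.lower().split():
        score.update(counts.get(word, Counter()))
    return score.most_common(1)[0][0] if score else None

# Identical model, two different "upbringings": the examples decide
# what it says about a sentence it has never seen.
hostile_examples = [("they are terrible", "hostile"),
                    ("they ruin everything", "hostile")]
friendly_examples = [("they are wonderful", "friendly"),
                     ("they help everyone", "friendly")]

print(predict(train(hostile_examples), "they are strange"))   # hostile
print(predict(train(friendly_examples), "they are strange"))  # friendly
```

The same sentence gets opposite treatment depending solely on the examples: the “education” is the data.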

These notions have been taken into account by the developers of ChatGPT, in particular the notion of politeness. Where older versions of the chatbot would readily have told a story advocating violence if asked, ChatGPT replies that it was not programmed to do so.