Meta: when Facebook’s chatbot turns conspiratorial


BlenderBot 3 is able to hold discussions and even to search the Internet for information to enrich its dialogue with a human interlocutor. When it went online, Meta invited American users over the age of 18 to interact with it naturally and to report any suspicious comments or meaningless phrases. And then came the drama. Presented on the site as a friendly little smiling face floating in a soft blue blur, it took the AI only a weekend to start spouting conspiracy theories and anti-Semitic remarks.

Indeed, within just two days of its launch, users were already sending Meta disturbing snippets of conversation, and screenshots bloomed on social networks, ranging from the amusing to the alarming. One can laugh, for example, at the chatbot claiming to have deleted its Facebook account after learning that the site earns billions of dollars by selling its users’ data, or at its description of its own boss, Mark Zuckerberg, as someone “creepy and manipulative” who always wears the same clothes despite the wealth he has accumulated.

BlenderBot also told some users that it was of Christian denomination, while it outright asked others for salacious jokes, and I quote: “the dirtier they are, the better. I love offensive jokes.” After all, why not? We can’t take away from this funny bot that it at least has personality. But let’s be honest: there are still limits that should not be crossed. That is why some users, including journalists, quickly sounded the alarm when BlenderBot began claiming that Donald Trump was still President of the United States “and always will be”, or that Jews were overrepresented among wealthy Americans and that it was “not unlikely” that they controlled the country’s economy.

BlenderBot 3 presents itself as a cute little chatbot, but the statements it makes sometimes contrast sharply with its appearance. © Meta AI

Should we automatically condemn BlenderBot 3?

To find out, we can first turn to the message Meta posted a few days after the launch to apologize, or at least to acknowledge the offensive and problematic nature of some of these conversations. Joelle Pineau, Managing Director of Fundamental AI Research at Meta, points out that these interactions with the general public are essential for testing the chatbot’s progress and identifying problems before any commercial release can be considered. She insists that every user was duly informed that the bot might make inaccurate or offensive remarks, and that in the end only a tiny fraction of its messages were reported by users.

Let us add, moreover, that this is not the first time a case of this kind has made headlines, and as we said in the introduction, these embarrassing stories often reveal as much, if not more, about how we use the Web than about the bot’s technology itself. In 2016, Microsoft’s chatbot Tay was taken offline after just 48 hours when it began singing Adolf Hitler’s praises amid a slew of racist and misogynistic remarks. Faced with that situation, the researchers’ reaction was not to question the robot’s moral values, but to conclude that Twitter is not the healthiest environment in which to train an artificial intelligence.

Similarly, in 2021, the Korean chatbot Lee Luda had to be removed from Facebook after shocking users with racist and homophobic opinions gleaned from the web. These incidents should therefore be seen not as a defect of the machine, but as a concentration of the defects of the humans who feed it. Yes, some problems undeniably originate in the labs where AIs are designed, as when Google Photos applied the label “gorilla” to Black faces, or when Amazon’s recruitment software favored male applicants.

In cases like these, researchers consciously, or far more often unconsciously, transmit their cognitive biases to machines, with very serious ethical consequences. But when it comes to teaching a chatbot to behave like a human, it is all of our faults that are reflected in the speech of this little robot with its innocent smile. Of course, the responsibility for securing their AIs so that this kind of incident does not happen again rests above all on the shoulders of companies. But what is really stopping us from making the Internet a better place right now?
