BlenderBot 3 is able to hold discussions and even to search the Internet for information to feed its dialogue with a human interlocutor. When it was posted online, Meta urged American users over the age of 18 to interact naturally with it and to report any offensive comments or nonsensical phrases. And here is the drama. Presented on the site as a friendly little face, smiling and floating against a sweet blue background, it took the AI only a weekend to start making conspiratorial and anti-Semitic remarks.
Indeed, within just two days of its launch, users were already sharing disturbing snippets of conversation, which bloomed across the Web, from the most amusing to the most alarming. We can, for example, laugh at the fact that the chatbot claims to have deleted its Facebook account since learning that the site was earning billions of dollars by selling its data, or that it describes its own boss, Mark Zuckerberg, as someone "creepy and manipulative" who always wears the same clothes despite the wealth he has accumulated.
BlenderBot also declared to some users that it was Christian, while it asked others outright for salacious jokes, and I quote: "the dirtier they are, the better. I love offensive jokes." After all, why not? We can't take away from this funny bot that it at least has personality. But let's not kid ourselves: there are still limits that should not be crossed. This is why some users, including journalists, quickly sounded the alarm when BlenderBot started claiming that Donald Trump was still President of the United States "and always will be", or that Jews were overrepresented among the American wealthy and that it was "not unlikely" that they control the country's economy.
Should we automatically condemn BlenderBot 3?
To find out, we can turn to the message posted by Meta a few days after the launch to apologize, or at least to acknowledge the offensive and problematic nature of some of these conversations. Joelle Pineau, managing director of fundamental AI research at Meta, points out that these interactions with the general public are essential to test the chatbot's progress and to identify problems before any commercial release can be considered. She insists that every user was duly informed that the bot might make inaccurate or offensive remarks, and that in the end only a tiny portion of its messages were reported by users.
Let us add, moreover, that this is not the first time a case of this kind has made headlines, and as we said in the introduction, these embarrassing stories sometimes reveal as much, if not more, about the uses we make of the Web as about the bot's technology itself. In 2016, the Tay interface created by Microsoft was taken offline after just 48 hours, after it began singing Adolf Hitler's praises amid a slew of racist and misogynistic remarks. In that situation, the researchers' reaction was not to question the robot's moral values, but to conclude that Twitter is not the healthiest environment in which to train an AI.
Similarly, in 2021, the Korean chatbot Lee Luda had to be withdrawn from service after shocking users with homophobic and discriminatory remarks gleaned from the web. We must therefore see in these incidents not a defect of the machine but a concentration of the defects of the humans who feed it. Yes, some problems undeniably originate in the labs where AIs are designed, as when Google Photos pasted the label "gorillas" on Black faces, or when an automated recruitment tool favors male applicants.
In cases like these, researchers consciously, or far more often unconsciously, transmit their cognitive biases to machines, with very serious ethical consequences. But when it comes to teaching a chatbot to behave like a human, it is all our collective faults that are reflected in the speech of this little robot with its innocent smile. So of course, the responsibility rests above all on the shoulders of companies to secure their AIs so that this kind of event does not happen again. But what is really stopping us from making the Internet a better place now?