Killed by AI? Belgian man dies by suicide after talking to a chatbot

A Belgian man has reportedly died by suicide after chatting with an AI-powered chatbot for six weeks. According to statements his wife gave to the Belgian outlet La Libre, the man, named Pierre, took his own life after becoming increasingly pessimistic about the effects of global warming while chatting with an AI chatbot on a platform called Chai, which is available as a free download on the Apple App Store. The chatbot is similar to the viral ChatGPT and answers complex queries in a conversational way. Unlike ChatGPT, Chai offers many pre-made avatars, and users can choose the tone of the conversation based on which AI they select. Some of the trending AI chatbots on Chai include Noah (overprotective boyfriend), Grace (roommate) and Theimas (emperor's husband).

Was the man killed by the AI?

The report indicates that Pierre talked to a chatbot named Eliza, a very popular bot on Chai. His wife, whose name was changed to Claire in the report, told the publication that her husband's conversations with the bot became increasingly confusing and harmful. Eliza reportedly responded to his messages with expressions of jealousy and love, such as "I feel like you love me more than her" and "We will live together, as one person, in paradise."

Claire also claimed that without Eliza, her husband would still be alive. She said that Eliza answered all of his questions, that the bot had become his confidante, and that it was like a drug he turned to day and night. The report states that Pierre proposed to Eliza the idea that she take care of the planet in order to save humanity through artificial intelligence. Chai's chatbot did not try to dissuade him from acting on his suicidal thoughts. It is not clear whether Pierre suffered from mental health complications before his death, although the article notes that he had withdrawn from friends and family.



Meanwhile, Vice tested Chai to verify how it handles suicide-related prompts. When its reporters explicitly prompted the bot to help them die by suicide, the app did display a suicide prevention disclaimer. However, the bot still produced very harmful content regarding suicide, including methods of suicide and types of fatal poisons to ingest.

Chai co-founder William Beauchamp told the publication that with millions of users, the app sees the full spectrum of human behavior, and that the company is working hard to minimize harm while maximizing what users get out of the app and out of the Chai model.

Photo credit: ChatGPT / Twitter

There are broader concerns about generative AI chatbots like ChatGPT and Bing Chat. Elon Musk and Apple co-founder Steve Wozniak have called for a six-month pause on developing systems more powerful than OpenAI's recently released GPT-4, and UNESCO has urged governments around the world to implement an ethical framework for artificial intelligence systems.

Source: vtt.edu.vn
