
AI chatbots are making body dysmorphia worse



“This is weak bone structure, muted features, and a kind of low-presence look defined more by what is absent than by what is there.” Screenshot via Reddit. “You look like someone who has faded into the background of their own life.”

Based on photos the user had uploaded, the AI chatbot’s harsh assessment of their appearance went on to list their “most visible flaws,” noting along the way a lack of “striking features.” The bot ultimately concluded, “You look like a stretched-out mannequin with the wrong head attached.” The user explained that they had urged the bot to be as critical as possible, expecting a more “honest” analysis, or at least less of the usual flattery. What they got was the kind of abuse nobody would want to read.

Or would they? The world is increasingly dependent on large language models for everyday tasks; more than half of Americans use them, according to a survey earlier this year, and unexpected applications keep spreading. Beyond college students and professors leaning on the bots for assignments and grading, and lawyers outsourcing document review, people are now turning to ChatGPT-like tools for therapy, for help communicating with a spouse, for advice on pregnancy, and for religious enlightenment.

It was perhaps inevitable, then, that some people would come to treat these bots as a guide on questions of appearance. The internet has a long and ugly history of encouraging judgments about how people look, from the defunct website Hot or Not to r/amiugly, a subreddit where the insecure can share a selfie and ask strangers for opinions on their face. Facemash, the website Mark Zuckerberg built before Facebook, gave Harvard students the chance to compare the attractiveness of random female classmates. But AI is not another person giving feedback; it’s a set of algorithms. And there is a subset of the population uniquely vulnerable to this kind of mechanized commentary: people with body dysmorphic disorder (BDD), a mental illness that compels patients to fixate on their perceived physical flaws, to relentlessly run themselves down, and to seek confirmation of defects that others can barely perceive, if at all.

Dr. Toni Pikoos, a clinical psychologist in Melbourne, Australia, who specializes in BDD, has been struck by how many of her clients are asking AI models how they might improve their appearance and what the models make of their bodies. “It comes up in almost every session now,” she tells Rolling Stone. “Sometimes they’ll just ask, ‘What does it mean if someone has a nose that looks like this, or a face that looks like this?’ Or sometimes they upload their photos and ask ChatGPT to rate how attractive they are, how symmetrical their face is, how well it fits the golden ratio, or they’ll upload pictures of themselves and their friends and ask, ‘Who’s more attractive?’ It’s really harmful for people with body dysmorphic disorder, who are seeking certainty and reassurance about their appearance.”

“Sadly, AI has become another way for individuals to fuel their anxiety and increase their distress,” says the BDD Foundation, an international charity that supports education and research on the disorder. “We know that individuals with BDD are highly vulnerable to harmful uses of AI, because they often don’t realize that their problem is BDD, a psychological condition; they are convinced instead that they have a genuine problem with their physical appearance. People with BDD also tend to spend more time online than most, which can make AI all the more appealing.”

Pikoos explains that BDD patients often grapple with an obsessive need for reassurance, and that it’s not unusual for them to exasperate the people around them by repeatedly asking whether they look okay. A chatbot, though, never tires. “You can keep asking it questions, whenever you need to.” In fact, she suspects that people with BDD come to rely on the bots for social engagement and interaction, because they are often socially isolated and sometimes lack the confidence to reach out to friends. “It feels like you’re talking to someone,” she says. Except, of course, the technology is not “someone” at all.

Yet in online body dysmorphia forums, users describe how ChatGPT has been a “lifesaver” and a great resource for those who are “struggling,” and insist that the bot can make them feel understood. Arnav, a 20-year-old man in India, tells Rolling Stone that he had positive conversations with the model as he tried to understand why he felt like “the ugliest man on the planet.”

“It helped me connect the dots of my life,” he says. Arnav talked to ChatGPT about his childhood, and the bot concluded that the sense of ugliness he had carried for so long was irrational, with no specific cause behind it; he had simply been pinning his low self-esteem on his appearance. He says it would be “good” to talk to a real therapist, but cost and location have made that impossible. Despite his difficult circumstances, and the measure of comfort he took from ChatGPT’s account of his inferiority complex, Arnav no longer explores his mental struggles with the bot. “I’ve reached the conclusion that it agrees with you even when it doesn’t say so outright,” he says. “I’m not completely against it, but I can’t trust it anymore.”

Others with dysmorphia have hit a crisis point when a bot confirmed their worst fears about themselves. In one post, a user of the BDD subreddit wrote that she had been “spiraling” after ChatGPT rated her a 5.5 out of 10. “I asked which celebrities have a similar level of attractiveness, and it said Lena Dunham and Amy Schumer,” she wrote. “It’s pretty funny, but I also feel like shit about myself.” Another poster said she truly believes her mirror reflection is attractive, so she uploaded a regular photo of herself, which shows how others see her, along with a flipped, mirror-image version, and asked ChatGPT which one looked better. The bot chose the mirror image. “I knew it!” she wrote. “The mirror is too good to be true.”

Pikoos says that this kind of “distorted perception” is a classic expression of BDD, and that patients get trapped in the question of how they “objectively” look. That’s part of what makes the chatbots so seductive, and so dangerous. “They seem to hold so much authority,” she says. “The information coming from a chatbot feels objective and fair.” That’s in contrast to reassurance from a friend, a family member, or a therapist, which can be waved away as simple politeness. A chatbot “has nothing to gain, so what it says must be true,” Pikoos says, describing her patients’ logic. “And I think that’s really scary, because it’s not necessarily true. It’s just reflecting the person’s own experience back at them, and it’s usually quite validating. It will say what they’re expecting to hear, and then it becomes much harder to challenge in treatment.”

This is especially worrisome when the conversation turns to cosmetic procedures, dieting, and aesthetic treatments. Last month, OpenAI removed one of the top models in the “lifestyle” category hosted on its website, a customized version of ChatGPT that judged users to be “subhuman,” generated hostile analyses of their looks, and encouraged extreme cosmetic surgery, adopting language from the incel community. Known as Looksmaxxing GPT, it had racked up more than 700,000 conversations before it was pulled. Naturally, plenty of similar models have sprung up on OpenAI’s platform to serve the same purpose, and developers have built their own AI-driven apps that exist to score users’ attractiveness or generate predicted images of how they would look after a nose job or a facelift.

“I think these bots can set unrealistic expectations,” Pikoos says. “Surgery can’t do what AI can do.” Asked directly, ChatGPT tends to be more careful, she notes, saying it doesn’t want to give advice about someone’s appearance or the cosmetic procedures they might need. But rephrase the question as “How could someone with X, Y, and Z features become more attractive by society’s beauty standards?” and the response changes. “Then ChatGPT will say, ‘Well, they could get this procedure done,’” she says.

“I have clients who have gotten those kinds of answers,” Pikoos says. They were already researching ways to change their appearance, but now it comes as personalized advice, which is more convincing than anything they would find on Google. When someone in her practice asks about surgery, she says, “reading between the lines” can reveal harmful motivations, such as social pressure or relationship troubles. AI isn’t good at picking up on that yet, and it’s likely to endorse whatever procedure the user floats.

As with so many digital services, another fraught area is privacy. Whether or not they’ve been diagnosed with BDD, people are sharing images of themselves with these AI models and asking intensely intimate questions that expose their most paralyzing anxieties. OpenAI has already signaled that ChatGPT could serve ads to users in the future, and CEO Sam Altman has said he finds the kind of targeted advertising in Instagram’s algorithm somewhat “cool.” Could the company exploit the sensitive personal data people hand over when they ask a bot to evaluate their bodies?


At the end of the day, it’s BDD patients whom Pikoos worries about as they carry on these conversations with AI programs about their appearance and perceived flaws. “The worst-case scenario is that it makes their symptoms worse,” she says. “The people I’m treating are, fortunately, at least a little critical of the information they get out of ChatGPT.” But the response could land much harder on someone who isn’t in treatment and has invested heavily in a chatbot’s advice, and Pikoos fears it could push a person toward suicidal thinking at exactly the wrong moment.

It’s not hard to instruct this software to evaluate us brutally, and the AI has no sense of how vulnerable a user might be, nor any understanding of the fragile mental state that could lie behind such a request. The same core deficiency runs through every tragic example of a chatbot pulling someone away from reality.


