OpenAI Warns: Artificial Intelligence Gives Bad Advice
Image: Microsoft AI, courtesy of Mollie Dominy

Across various aspects of daily life, AI continues to expand its influence and shape how people live. One particularly controversial application has emerged: the use of large language models as substitutes for therapists or life coaches. In response to the trend's rising popularity, OpenAI has implemented measures to restrict the type of guidance ChatGPT provides, especially on sensitive topics. Those restrictions, however, haven't fully solved the problem of people leaning on the model for personal decisions. So why are users trusting unverified and often unreliable technology with critical life choices?

ChatGPT Plays Therapist—Badly

Although OpenAI plans to release updates that refine its approach to high-stakes personal decisions, these adjustments barely scratch the surface of the core problem. People who rely on AI tools in their daily lives can forget that the tools are not a replacement for human interaction. Under the new approach, for instance, a user asking whether to end a relationship should no longer get a direct answer; instead, the model is meant to avoid taking a side and engage the person in a broader discussion of the decision.

OpenAI CEO Sam Altman weighed in on this dilemma in a recent statement. Critics may not be satisfied with what Altman had to say, as he suggested that responsibility for how the technology is used lies not only with AI developers but with society as a whole. The CEO began by addressing the strong emotional attachment some users have formed toward specific AI models, particularly following the release of GPT-5.

OpenAI’s Tough Love Update

Photo: Sam Altman, OpenAI CEO, arrives for the inaugural Artificial Intelligence Insight Forum, hosted by Senate Majority Leader Chuck Schumer and Senators Rounds, Heinrich, and Young, in Washington, D.C., on Sept. 13, 2023. Credit: Jack Gruber-USA TODAY

According to Altman, OpenAI has been monitoring this phenomenon, even though it has not drawn widespread public attention. What causes this emotional dependency, and how can developers mitigate it while still advancing AI technology? Notably, when newer versions replace older ones, a subset of users reportedly experiences a sense of loss, almost as if a trusted companion had been taken away.

Altman also discussed the darker side of AI interactions, noting that some individuals use the technology in self-destructive ways. He emphasized that OpenAI does not want its models to reinforce delusional thinking in mentally vulnerable users. The issue has intensified debate over the technology's ability to identify and respond appropriately to users in distress, with many questioning whether that expectation places an unrealistic burden on the technology.

Artificial Intelligence Smarter but Still Clueless

While most users can distinguish between reality and AI-generated content, Altman explained that a small fraction struggle with that separation. Although the CEO considers encouraging delusions in such cases an obvious concern, he finds more subtle issues equally troubling. Past incidents in which GPT-4o exhibited excessive agreeableness, prompting OpenAI to intervene, appear to be fueling these concerns.

As a result, the company aims to design models that challenge users, ensuring they receive meaningful rather than merely pleasing responses. Taking this approach, however, introduces another layer of complexity, because it is unclear how people will react when the technology pushes back against them.

Will people accept critical feedback from a machine, or simply seek out more compliant alternatives? Only time will tell. With the broader implications of AI's role in decision-making still unresolved, both developers and society are left to grapple with the ethical and practical consequences. For developers, the biggest struggle remains putting safeguards in place that prevent misuse while still letting users benefit from the tool's capabilities.

This article first appeared on Total Apex Gaming and was syndicated with permission.
