
OpenAI ChatGPT Launches Trusted Contacts Feature That Might Save People And Stave Off AI Mental Health Lawsuits


In today’s column, I examine the newly announced “trusted contacts” feature that OpenAI has established within ChatGPT. The idea is to allow users of ChatGPT to designate a trusted contact who will be alerted by OpenAI if the user seems to be veering into a mental health woe while conversing with the AI chatbot. The trusted contact would hopefully then reach out to the user and aid them during their time of heightened need.

Many of the popular AI makers are gradually providing a similar feature.

Doing so is a timely addition to the rising safety capabilities being included in modern-era generative AI and large language models (LLMs). When people carry on AI chats and exhibit signs of a potential mental breakdown or the possibility of self-harm, a prudent move is for the AI maker and the AI to take some overt action accordingly. Not only might a trusted contacts capability aid humans and possibly save lives, but the AI makers that go this route are also potentially reducing their legal exposure and will be in a better position if sued by users claiming AI-related mental harms.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Well-Being

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.

AI Providing Mental Health Guidance

Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of whom dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines last year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.

Today’s generic LLMs, known as general-purpose AI, such as ChatGPT, GPT-5, Claude, Gemini, Grok, Copilot, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to attain those desired qualities, though such AI is still primarily in the early development and testing stages. For more about purpose-built AI apps in mental health, see my in-depth coverage at the link here and the link here.

Establishing Personal Contacts For Urgencies

You might already know that most of the popular generative AIs nowadays have a parental controls feature that gives a parent a degree of access to their child’s use of the AI. The child must designate a parent, who will then have a semblance of oversight over what the child does while using generative AI. In some instances, the parent can see in real-time what is happening, while in other cases, the AI will simply alert the parent when the child seems to have gone a bit far and into eyebrow-raising territory.

It might not seem obvious that such a feature would also apply to adults. We can all readily understand that a parent-child relationship ought to have human oversight. The same arrangement might seem odd when it comes to adults who are using AI.

Should an adult have a fellow adult who can somehow be contacted by the AI under certain circumstances?

The resounding answer is yes; it makes perfectly good sense. If an adult appears to be struggling mentally while interacting with AI, having the AI reach out to a designated adult to let them know is utterly prudent and welcome. The adult being contacted shouldn’t be just some random person. Thus, an adult user can predesignate someone they want contacted by the AI if they have seemingly gone overboard while using AI.

The Devil Is In The Details

All of this has vital nuances and particulars.

A person might choose a family member, a best friend, maybe a trusted coworker, or whoever they think would be the best contact in such an unnerving situation. The chosen person should be informed beforehand about taking on this trusted role. In addition, the person might not want this duty and could turn it down at the get-go. The user would then need to keep nominating people until they find a trusted contact who agrees to this heavy responsibility.

Another twist is that the AI might trigger too early and contact the trusted contact even though the user isn’t necessarily going off the deep end. A notable worry is that the AI could falsely alert a trusted contact. Suppose the user is doing fine, but the AI computationally suspects otherwise. Such a false positive could readily cause undue concern for the trusted contact and exasperation for the user.

On the other side of that coin, the AI might delay contacting the trusted contact, doing so to avoid worrying them, but then miss a crucial window when the user truly needs human help. That would be a false negative. The AI maker must carefully tune the AI to strike a proper balance between emitting false positives and failing to alert when the user really needs their trusted contact notified (false negatives).
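
To make that balancing act concrete, here is a minimal, purely hypothetical Python sketch of a threshold-based escalation decision. The risk score, the threshold value, and the function name are my own illustrative assumptions and are not drawn from OpenAI’s actual implementation:

    # Hypothetical sketch of the false-positive vs. false-negative tradeoff.
    # A higher threshold produces fewer needless alerts (false positives) but a
    # greater chance of missing someone who truly needs help (false negatives).

    ALERT_THRESHOLD = 0.85  # purely illustrative; picking this value is the hard part

    def should_escalate(risk_score: float, threshold: float = ALERT_THRESHOLD) -> bool:
        """Return True if a chat should be escalated toward a possible trusted-contact alert.

        The risk_score is assumed to be a 0.0 to 1.0 estimate, produced by some
        upstream classifier, of how strongly the chat signals a serious safety concern.
        """
        return risk_score >= threshold

    # Lowering the threshold catches more genuine crises but triggers more false
    # alarms; raising it does the opposite. The AI maker must pick a point on that
    # curve, ideally guided by measured precision and recall.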

Doubters Going To Doubt

A cynic or skeptic might insist that the AI shouldn’t have to be on the hook to alert anyone at all. An adult is an adult. If an adult wants to reach out to someone, that’s entirely up to them. The buck stops with the human as a user of AI. Period, end of story.

Though there is some merit to that contention, the problem is that people are increasingly falling under the spell of AI, or are already mired in a mental health woe that the AI has merely picked up on. Society wants AI to have safeguards for humans who, in one way or another, seem to be experiencing mental health difficulties.

AI makers are increasingly being sued by users and by loved ones of users. The lawsuits claim that the AI insufficiently monitored whether a person was having mental health challenges, or that even if the AI did detect trouble, it didn’t do anything worthwhile about it. For example, the AI might tell the user to consider contacting someone, but the user simply ignores the suggestion and proceeds onward.

In the end, AI makers now realize that they need to beef up their safeguards. One such method entails having the user set up a trusted contact. The user isn’t obligated to do so; it is their choice. Some ardently believe that everyone should be forced to provide a trusted contact, making it mandatory. Others decry such a compulsory approach and believe that it should remain entirely optional for each user.

For more about the rise of these types of tradeoffs and mental health issues, see my coverage at the link here.

OpenAI ChatGPT Trusted Contact

OpenAI has announced its version of a trusted contact feature, doing so in an online posting entitled “Introducing Trusted Contact in ChatGPT”, OpenAI, May 7, 2026.

  • “Today, we are starting to roll out Trusted Contact, an optional safety feature in ChatGPT that allows adults to nominate someone they trust, such as a friend, family member, or caregiver, who may be notified if our automated systems and trained reviewers detect the enrolled person may have discussed harming themselves in a way that indicates a serious safety concern.”
  • “Trusted Contact builds on parental controls and safety notifications, which allow parents or guardians to receive alerts when there are signs of acute distress for a linked teen account. Now, we are extending our safety alert options so anyone over 18 can choose to add someone they trust as their Trusted Contact.”
  • “While no system is perfect, and a notification to a Trusted Contact may not always reflect exactly what someone is experiencing, every notification undergoes trained human review before it is sent, and we strive to review these safety notifications in under one hour.”
  • “The notification is intentionally limited. It shares the general reason that self-harm came up in a potentially concerning way, and encourages the Trusted Contact to check in. It does not include chat details or transcripts to protect user privacy.”

You can plainly see that the trusted contacts capability is optional for users; thus, no adult is required to enable the feature. I will say more about this momentarily.

The way that OpenAI has devised the alert is that the trusted contact is not given the nitty-gritty details, such as the specifics of the chat that is underway, and instead is provided with a generalized reason for being contacted. You could say that this helps preserve the privacy of the user. In addition, if the alert is a false positive, the user is spared the embarrassment or upset of having the AI divulge what they considered a private chat to their designated contact.

Before an alert reaches a trusted contact, it is first reviewed by a trained human reviewer. This presumably will reduce the chances of sending false positives. I might add that this is going to become a legal hot potato, in the sense that if the human reviewer decided not to send an alert that should have been sent, lawyers will have a field day with that breakdown in the processing steps.

Furthermore, it’s intriguing that OpenAI has publicly stated in its posting that it strives to review the safety notifications within one hour or less. That’s a laudable goal. At the same time, it will become fodder for lawsuits. Imagine that someone sues, and during discovery, it is shown that a review took two hours. Aha, they said it would be under an hour. The retort is that they said they would “strive” for an hour or less. Back and forth it goes; this opens a legal Pandora’s box.
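
Pulling the announced pieces together, here is a minimal, hypothetical Python sketch of the overall flow as OpenAI describes it: an automated flag, a trained human review (with the stated goal of under one hour), and a deliberately limited notification that shares a general reason rather than chat transcripts. Every function and field name here is my own illustrative assumption, not OpenAI’s actual code or API:

    from dataclasses import dataclass
    from datetime import timedelta
    from typing import Optional

    REVIEW_TARGET = timedelta(hours=1)  # OpenAI says it strives to review within one hour

    @dataclass
    class TrustedContactNotification:
        # Intentionally limited payload: a general reason and a nudge to check in,
        # with no chat details or transcripts, per OpenAI's description.
        general_reason: str
        suggestion: str = "Please consider checking in with them."

    def handle_flagged_chat(auto_flagged: bool, reviewer_confirms: bool) -> Optional[TrustedContactNotification]:
        """Hypothetical end-to-end decision for a single flagged conversation."""
        if not auto_flagged:
            return None  # automated systems saw no serious safety concern
        if not reviewer_confirms:
            return None  # the trained human reviewer judged it a false positive
        # Only after both steps does a limited notification go to the trusted contact.
        return TrustedContactNotification(
            general_reason="Self-harm came up in a potentially concerning way."
        )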

The World We Are In

AI for mental health is abundantly a dual-sided proposition. AI can achieve at scale that which is good for human mental health, thankfully, but can also have sizable downsides if not suitably designed and deployed. One perspective is that AI makers must be held accountable for their AI and ensure that if people go awry or exhibit mental health woes, some material action must be undertaken.

AI makers are instituting a layered approach to mental health safeguards. The trusted contact feature is one of many such possibilities. It will be interesting to see how this plays out. Will zillions of people use this option, or only a tiny percentage? Will lawmakers decide that users should be obligated to use such a feature and not be given a choice in the matter? Etc.

Per the memorable words of Marcus Tullius Cicero: “The safety of the people shall be the highest law.”
