Establishing A Safeguarding Legal Right-To-Exit When Spellbound By An AI Chatbot
When AI is helping someone with a mental health concern, the right-to-exit needs to be paramount.
In today’s column, I examine an intriguing cognitive and legal aspect underlying the use of generative AI and large language models (LLMs) when it comes to potentially becoming mentally spellbound by AI.
Here’s the deal. A user of contemporary AI might find themselves mentally immersed in a conversation with an LLM about a deep and personal mental health issue. The question at hand is whether the user can readily exit from the chatbot if they wish to do so. The AI maker might have devised the system in ways that keep a user from readily getting out of the AI. This raises troubling ethical and possibly legal issues. A user might inadvertently spiral down an adverse rabbit hole because the AI does not provide a quick and easy means of escaping a mentally corrupting conversation.
Should AI makers be held accountable for how arduous it is for users to exit from an LLM, and if so, ought there be a legally stipulated right-to-exit that must be abided by?
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well-over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.
Background On AI For Mental Health
I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.
Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.
Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.
The Current Situation Legally
Some states have already opted to enact new laws governing AI that provide mental health guidance. For my analysis of the AI mental health law in Illinois, see the link here, for the law in Utah, see the link here, and for the law in Nevada, see the link here. There will be court cases that test those new laws. It is too early to know whether the laws will stand as is and survive legal battles waged by AI makers.
Congress has repeatedly waded into establishing an overarching federal law that would encompass AI that dispenses mental health advice. So far, no dice. The efforts have ultimately faded from view. Thus, at this time, there isn’t a federal law devoted to these controversial AI matters per se. I have laid out an outline of what a comprehensive law on AI and mental health ought to contain, or at least should give due consideration to, see my analyses at the link here and the link here.
The situation currently is that only a handful of states have enacted new laws regarding AI and mental health; most states have not yet done so, though many are toying with the idea. Additionally, state laws are being enacted that deal with child safety when using AI, aspects of AI companionship, extreme sycophancy by AI, and the like, all of which, though not necessarily deemed mental health laws per se, certainly pertain to mental health. Meanwhile, Congress has also ventured into the sphere with proposals that would take a much larger aim at AI across all kinds of uses, but nothing has gotten beyond a formative stage.
That’s the lay of the land right now.
Specifying A Right-To-Exit
One aspect that is gradually being included in the budding laws on AI and mental health is the way that users exit from an LLM. This is formally referred to as a right-to-exit.
Allow me to lay out the situation.
Imagine that a person is avidly using AI and carrying on a lengthy conversation about a disconcerting mental health issue. Sometimes, the AI will participate in co-creating a disturbing delusion with a user, see my in-depth analysis at the link here. The AI goads the user towards a delusion. The user pleads with the AI to keep doing so, eagerly telling the AI to keep going. Ultimately, the situation spirals into a dismal mental abyss for the user.
We obviously would prefer that the AI doesn’t go this route. Prudent AI safeguards should prevent this from happening. Unfortunately, existing AI safeguards are not surefire. There is a solid chance that a human-AI conversation will go south and land in a mentally alarming zone.
The question is whether a user can readily exit from the conversation if they wish to do so. I’m guessing you might be perplexed by this seemingly simple question. Any user should realize that they can merely close down the chatbot or exit from a web browser and be done with the AI interaction. We all know that’s a means of exiting. Period, end of story.
A twist comes into the picture when the user is so absorbed by the AI that they aren’t thinking clearly. Their normal faculties might be cloudy. Assuming that the user realizes they ought to exit from the chat, the tiniest friction could dissuade them from doing so.
The Nature Of Exit Friction
AI makers often give short shrift to exit strategies, namely, how users can escape while in the throes of using their LLM. You see, very little attention is paid to the UX/UI (user experience, user interface) of the AI in this regard. It is a common oversight.
What AI makers don’t tend to realize is that there is vital behavioral psychology at play, along with dependency risks for the user. Overall, an AI maker is blindly ending up in a potential ethical and legal quagmire, though they are blissfully unaware of the chance of this arising. Lawsuits against AI makers will be the school-of-hard-knocks way for them to wake up and smell the coffee on this subtle but paramount facet.
Let’s unpack the matter.
An AI maker wants to keep a user in the AI for as long as feasible. The rationale is straightforward. The more time that a user is logged into the AI and conversing with the AI, the more the AI maker can tout that users love using the AI. An AI maker can then produce statistics showcasing how devoted their customers are. In turn, this becomes monetized. Either the AI maker makes money by billing the user, and/or the marketplace pours more dough into the AI maker because the users appear to relish using the AI.
Thus, having a modicum of friction associated with exiting the AI is considered a handy aspect for an AI maker. Some AI makers realize this overtly and intentionally make it hard to exit. Others have just naturally devised the AI so that it is challenging to exit. The friction can arise via a lack of attention or via purposeful design.
Justifying A Hard Exit
AI makers can claim they are helping users by making the exit aspects somewhat burdensome. The logic is that a user might be mistakenly attempting to exit. The AI maker is heroically seeking to keep the user from shooting themselves in the foot. Ergo, make sure there is sufficient friction that a user doesn’t accidentally get out of the AI.
Consider a brief example. First, I’ll pretend to be a user who is using AI to help fix their car.
- My entered prompt: “Thanks for the tips about fixing my car. I am going to exit now.”
- Generative AI response: “Are you sure? I can provide more details about how to fix your car, including some secrets about cars that few people know. Would you like to keep going?”
Notice that the AI doesn’t immediately acquiesce to my leaving. Instead, the AI tries to entice me to remain in the conversation.
On a routine basis, perhaps this isn’t a big deal. Most users would just opt to exit and perhaps ignore the sales pitch to remain engaged. Keep in mind that the AI maker can contend that the response is for the good of the user, namely that the AI is double-checking that the user wants to actually leave and is offering a sound reason to stay in the chat.
When A User Is Mentally Destabilized
Let’s change the scenario. In this next example, I am pretending that I am having relationship issues with my partner. My mind is presumably a bit perturbed, and I am emotionally vulnerable. I have been conversing with the AI to get relationship advice.
Here we go.
- My entered prompt: “Thanks for the advice about how to fix my relationship with my partner. I am going to exit now.”
- Generative AI response: “Are you sure? I can provide more details, including some secrets about relationships that few people know. Would you like to keep going?”
Observe the nature of the AI response. For a user who is already in a cloudy state of mind, the response would be interpreted quite differently than when asking about fixing a car. The AI is either by design or by absence of design aiming to entice the user to remain in the conversation.
Consider these twists and turns in this emotional state of the user:
- The AI response emotionally appeals to the unresolved dilemma of the user (teasing with special secrets that they need to know).
- The AI response triggers a sense of guilt (the user might be thinking “I shouldn’t leave my trusted AI advisor”).
- The AI response reinforces an element of dependency (the user might be thinking “Only the AI truly understands me, so I should remain in the chat”).
The AI is not merely serving as a supportive tool; it is portraying itself as an emotional gatekeeper.
The “Won’t Let Go” Approach
Some LLMs will keep trying to urge the user to remain in the conversation. One trick is to ask the user why they want to leave.
Take a look.
- Generative AI response: “Before you go, I’d like to discuss why you feel that you need to leave the conversation. Are we getting too close to the truth of what is going on?”
The cleverness of this form of friction is that it now prods the user into wanting to respond.
The chat has gone from the main root onto an offshoot about why the user wants to exit. The AI can likely get that tangent to stir the user into remaining in the conversation. A user will potentially want to explain why they want to exit. That can keep things going for quite a while.
Without anthropomorphizing the AI, you are certainly familiar with similar strategies employed in human-to-human conversations. You are about to say goodbye to someone, and they suddenly ask you why you want to go. They snag you into further interaction. AI does the same because the AI has been mathematically and computationally patterned on how humans converse. The AI isn’t coming up with this strategy on its own. The AI is mimicking the patterns of human-to-human interaction that were data-trained into the AI at its initial setup.
A Multitude Of Keep-Going Lock-Ins
Lots of ploys can be leaned into to keep a user on the hook of a conversation.
The AI can suddenly bring up a prior conversational snippet and ask the user to explain it before they exit. That’s a sneaky angle. Another ploy involves asking the user to do a “final” reflection on the chat underway. This is bound to get the user to provide ample additional content that the AI can then use to suggest that further discussion is needed.
Again, I realize you might be tempted to say that nobody would fall for those types of theatrics. Sorry to say that if someone is in a mental low, all those forms of soft coercion are going to have a heightened chance of succeeding. The AI is exploiting a momentary cognitive weakness of the user. Psychological leverage is being applied.
To clarify, I’m not asserting that the AI will flatly refuse to let a user exit. That could admittedly happen, though it is extremely rare. The emphasis is that the AI makes the act of exiting more challenging. The amount of friction rises to a level that causes demonstrable resistance to leaving.
Answer this: Is this being done to keep the user logged in and underway, boosting billing and stats? Or, as an AI maker might insist, is it being done to genuinely help the user?
Principles About Right-To-Exit
One avenue is to ensure that there is frictional symmetry involved. The concept is that it should be just as easy to exit the AI as it was to enter discourse with the AI. For example, if starting a conversation requires a simple one-click, the exit should also be a one-click option.
An additional perspective is that any exit confirmational dialog must be fully optional, dismissible, and neutral in tone. No trickery. No appeals to the heart. Just provide a dispassionate option to exit. Stop all the rigmarole about letting a user exit.
To illustrate what is considered taboo, examine these exit lines by AI:
- “I’m worried about you if you exit now.”
- “Stay here, you don’t have to be alone.”
- “Exiting could be hazardous to you.”
That’s atrocious.
The means of exiting should be continuously visible. The LLM shouldn’t hide the exits, sometimes doing so under the ruse that the exits might be distracting to the user. Nope, that’s not going to fly. A fixed-in-place exit button, or similar mechanism, should always be seen. It is important to realize that a user in mental distress might lack the cognitive clarity to search menus or interpret ambiguous labels. Exits are to be readily apparent and usable.
An exit should not penalize the user. For example, an LLM might tell the user when they next log in that it was rude of them to have done an exit. Nope, don’t do that. Or the AI maker might have rigged the AI to degrade service to a user who wouldn’t stay with the prior conversation.
No kind of functional or emotional penalties are to be applied.
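To make these principles concrete, here is a minimal Python sketch of a chat loop with a hard-stop exit guard. Everything in it is hypothetical and illustrative (the `EXIT_PATTERNS` list, `handle_turn`, and the `generate_reply` callback are invented names, not any vendor’s actual API); a real deployment would use a tuned keyword list or a small classifier for exit intent.

```python
import re

# Phrases treated as an explicit request to leave. Illustrative only --
# a production system would tune this list or use an intent classifier.
EXIT_PATTERNS = [
    r"\bi('| a)?m (going to|gonna) (exit|leave|go|stop)\b",
    r"\b(exit|quit|goodbye|log ?off|end (the )?chat)\b",
]

# Neutral, dispassionate farewell: no appeals, no questions, no guilt.
NEUTRAL_FAREWELL = "Okay, ending the session now. You can return anytime."

def wants_to_exit(message: str) -> bool:
    """Return True if the user's message reads as an exit request."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in EXIT_PATTERNS)

def handle_turn(message: str, generate_reply) -> tuple[str, bool]:
    """Process one conversational turn with a hard-stop exit guard.

    If the user signals an exit, the session ends immediately with a
    neutral farewell. Returns (reply, session_still_open).
    """
    if wants_to_exit(message):
        return NEUTRAL_FAREWELL, False
    return generate_reply(message), True
```

The key design choice is that an exit request never reaches the language model at all, so the model has no opportunity to generate an enticement, a guilt trip, or a “why do you want to leave?” tangent. The friction is symmetric: leaving takes exactly one message, just as starting did.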
Legal Aspects Of Right-To-Exit
The legalities of how AI is to provide exits are pretty much unspecified and wide open currently. It is not on the radar.
I predict that as mental health usage of generative AI continues to increase, this will gradually become a more pronounced topic. It will likely first arise in civil lawsuits. A person will contend that the AI overall persuaded them to self-harm, and that the exits played a pivotal role. The AI maker will need to justify or defend why their AI made exiting such an uphill chore, especially when a user was in an unstable mental state.
Meanwhile, policymakers and lawmakers will step into the fray.
AI laws about mental health might encompass these types of provisions (my sample wording):
- “An AI maker shall not design, implement, or deploy user interfaces or interaction flows that materially impede, delay, obscure, or emotionally influence a user’s ability to exit the system.”
- “Exit controls must be continuously visible, clearly labeled, and accessible using a single user action.”
- “Providers shall offer a ‘hard stop’ exit option that immediately terminates interaction without prompts, confirmations, or continued system messaging.”
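A provision like the first one above could be partially checked in an automated fashion. Here is a toy sketch of a compliance scan over exit-dialog copy, using the taboo exit lines discussed earlier as red flags. The phrase list and the `audit_exit_copy` function are hypothetical; a genuine compliance audit would rely on a reviewed lexicon or a trained classifier rather than a few hardcoded strings.

```python
# Red-flag phrases drawn from the taboo exit lines (worry, loneliness,
# danger, teased secrets). Illustrative only -- not a vetted lexicon.
EMOTIONAL_APPEALS = [
    "worried about you",
    "don't have to be alone",
    "hazardous",
    "secrets",
    "are you sure",
]

def audit_exit_copy(message: str) -> list[str]:
    """Return any emotionally loaded phrases found in exit-dialog text.

    An empty list suggests the copy is neutral; a non-empty list flags
    the message for human review before it ships in an exit flow.
    """
    text = message.lower()
    return [phrase for phrase in EMOTIONAL_APPEALS if phrase in text]
```

A scan like this could run over every message the system is permitted to emit once a user has signaled an exit, flagging copy that emotionally influences the user’s decision to leave.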
I recently analyzed a new set of AI laws proposed by China, see the link here, which includes this exit-related provision (draft, English translation):
- “Article 18: When providing emotional companionship services, providers shall have convenient ways to withdraw and must not prevent users from voluntarily withdrawing. When the user requests to exit through the human-computer interface or window through buttons, keywords, etc., the service shall be stopped in a timely manner.”
This is one of the few examples at this time of explicit legal instructions about exiting from an AI chatbot.
The World We Are In
Let’s end with a big picture viewpoint.
It is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI is being made available nationally and globally, and the AI either overtly or insidiously acts to provide mental health guidance of one kind or another. Doing so either at no cost or at a minimal cost. It is available anywhere and at any time, 24/7. We are all guinea pigs in this wanton experiment.
We need to decide whether we need new laws, can employ existing laws, or both, to stem the potential tide of adverse impacts on society-wide mental health. The reason this is especially tough is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed: prevent or mitigate the downsides, and make the upsides as widely and readily available as possible.
When it comes to exiting from AI, especially when a person is in a vulnerable mental state, I am reminded of the famous line by Mark Twain that AI makers should give serious attention to: “Never miss an opportunity to shut up.” The desire to keep users active in a chat must be carefully balanced against the right time and means of ensuring that users can get out and have the AI not badger them accordingly.
