Policymakers And Lawmakers Want Your Private AI Mental Health Chats As A Gauge Of Societal Well-Being
Collecting AI mental health chats to gauge national well-being is a possibility but poses serious ethical and legal concerns.
In today’s column, I examine a promising but also quite controversial proposition that private mental health chats found in generative AI and large language models (LLMs) should be legally collected together in a large-scale federal database so that insights into societal psychological well-being can be measured, assessed, and potentially used to drive public mental health policies, actions, and investments.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.
Background On AI For Mental Health
I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.
Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.
Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.
The Current Situation Legally
Some states have already opted to enact new laws governing AI that provide mental health guidance. For my analysis of the AI mental health law in Illinois, see the link here, for the law in Utah, see the link here, and for the law in Nevada, see the link here. There will be court cases that test those new laws. It is too early to know whether the laws will stand as is and survive legal battles waged by AI makers.
Congress has repeatedly waded into establishing an overarching federal law that would encompass AI that dispenses mental health advice. So far, no dice. The efforts have ultimately faded from view. Thus, at this time, there isn’t a federal law devoted to these controversial AI matters per se. I have laid out an outline of what a comprehensive law on AI and mental health ought to contain, or at least should give due consideration to; see my analyses at the link here and the link here.
The situation currently is that only a handful of states have enacted new laws regarding AI and mental health, while most states have not yet done so. Many of the remaining states are toying with the idea. Additionally, there are state laws being enacted that have to do with child safety when using AI, aspects of AI companionship, extreme sycophancy by AI, etc., all of which, though they aren’t necessarily deemed mental health laws per se, certainly pertain to mental health. Meanwhile, Congress has also ventured into the sphere, with a much larger aim at AI for all kinds of uses, but nothing has reached a formative stage.
That’s the lay of the land right now.
Vast Store Of Mental Health Data
One topic that is increasingly gaining attention is that modern-era generative AI contains a unique set of mental health data on a massive scale. The data consists of the interactions that users are having with LLMs. Each day, perhaps hundreds of millions of people are discussing mental health aspects with AI. These people are expressing their deepest issues and pouring out their hearts and souls about what is going on with their mental health.
This is an incredible population-level store of mental health indicators. But it sits out there untapped. Other than the occasional whim of an individual AI maker, there isn’t any coordinated effort to gauge the status of societal mental health via this data. Even if an AI maker decides to explore its collected data, it might opt not to report what it finds or only examine portions that catch its attention. And each AI maker is generally confined to examining the data found within its own AI.
Suppose we could collect the mental health data that resides in LLMs across the board. It would be an immense and rich source on the status of societal mental health, far beyond any source of mental health data that has ever been compiled. It is digital, it is generated minute by minute, it allows longitudinal analysis to reflect changes over time, and it constitutes a gold mine that is currently locked away and untapped.
From a public health perspective, we could collect this data not simply on a one-time basis, but continuously collect the data on a moment-to-moment basis. It would be a national, always-on mental health signaling system. Surveys of mental health are typically static, a snapshot in time, expensive to run, and limited due to framing effects and the questions that are asked. AI contains what is actively on the minds of the populace and ranges across a myriad of dimensions regarding mental health.
The advantages of leveraging this data are enormous, including:
- Trend analysis of anxiety, depression, insomnia, burnout, loneliness (see the brief sketch after this list).
- Early detection of population-level stressors (economic shocks, disasters, pandemics).
- Evaluation of policy effects (e.g., unemployment benefits, school closures).
- Access to mental health disclosures that people would rarely reveal in surveys.
- Likely to include populations historically underrepresented in formal healthcare systems.
- Etc.
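To make the first item a bit more tangible, here is a minimal Python sketch of the kind of trend analysis that could be run over a de-identified corpus. It is purely illustrative and rests on my own assumptions: the column names, the topic tags, and the toy data all stand in for a national database that does not exist.

```python
# Illustrative only: weekly counts of de-identified chats tagged with a topic.
# The 'timestamp' and 'topics' columns and the tagging step are assumptions.
import pandas as pd

def weekly_topic_trend(chats: pd.DataFrame, topic: str) -> pd.Series:
    """Count chats tagged with `topic`, bucketed by calendar week."""
    tagged = chats[chats["topics"].apply(lambda tags: topic in tags)]
    return tagged.set_index("timestamp").resample("W").size()

# Toy data standing in for the hypothetical national repository.
toy = pd.DataFrame({
    "timestamp": pd.to_datetime(["2025-01-02", "2025-01-03", "2025-01-10"]),
    "topics": [["anxiety"], ["insomnia"], ["anxiety"]],
})
print(weekly_topic_trend(toy, "anxiety"))  # one anxiety-tagged chat in each of two weeks
```

Even this toy version hints at the appeal, namely a rolling, week-by-week signal rather than an occasional survey snapshot.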
Obtaining The Data
Your first thought might be that since AI is online, all that needs to be done is ask the AI makers to set up access via an API connection so that the government could readily collect the data. Or perhaps ask the AI makers to routinely electronically ship the data to some centralized online repository. The mechanism of collecting the data is bound to be relatively straightforward. The data is already online and merely needs to be copied into a convenient online storage location.
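Mechanically, an export along those lines could amount to a recurring batch job. The Python sketch below is a hypothetical illustration only; the repository endpoint, the payload schema, and the provider identifier are my assumptions, not an actual government API or any AI maker’s real pipeline.

```python
# Hypothetical sketch of a batch export job an AI maker might run.
# The endpoint, schema, and field names are placeholders for illustration.
import requests

CENTRAL_REPOSITORY_URL = "https://example.gov/mental-health-chats/ingest"  # placeholder, not a real endpoint

def export_batch(chat_records: list[dict], api_key: str) -> None:
    """Push one batch of already de-identified chat records to the repository."""
    payload = {
        "source": "example-llm-provider",  # illustrative provider identifier
        "records": chat_records,           # each record might be {"timestamp": ..., "text": ...}
    }
    response = requests.post(
        CENTRAL_REPOSITORY_URL,
        json=payload,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    response.raise_for_status()  # surface any ingestion failure to the export job
```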
The core difficulty is not a technical barrier. It is a complex and controversial matter of legal and ethical considerations.
Mental health chats are unlike the data collected via surveys. The chats tend to be highly personal, emotionally raw, confessional, and of the most intimate nature. They are ostensibly similar to private diaries and essentially are on a par with what might be kept in therapy notes. People would be undoubtedly disturbed if they knew that their mental health chats were being conveyed into a national database. The privacy intrusion would be massive and chilling.
One notable aspect that is already taking place is that people are broadly unaware that the AI makers’ online licensing agreements allow them to tap into the data that users enter into generative AI. AI makers usually specify that their AI developers can inspect the chats. They can even reuse the chats for further training of the AI. In that sense, most users have already willingly ceded much of their privacy when signing up for and using LLMs. For more on the privacy issues of contemporary AI, see my discussion at the link here.
In any case, few seem to realize that they have already somewhat let the horse out of the barn. A primary reason they are unaware of this issue is that, so far, AI makers have not taken overt action to showcase this accessibility. They might be doing so quietly, behind the scenes. Until we see instances of AI makers using chats on a highly visible basis, the public won’t realize the conditions under which their data is being used.
Legal And Ethics Concerns
Would sharing mental health chats with the government cross the line?
That’s where new AI laws come into the picture. Policymakers and lawmakers would need to make ready a legal path for the AI makers to undertake this sharing process. Without a formal regulatory path, the AI makers would naturally be hesitant to share the chats since they could encounter public outrage, suffer reputationally, and be subject to an incredible number of lawsuits claiming that they had gone outside their bounds.
Anonymization might lessen the privacy risks.
Here’s how that could be undertaken. The mental health chats would be conveyed to the central database in a de-identified format. No user is named. It is just a slew of mental health chats. They are anonymous.
This isn’t quite as easy as it seems. Imagine that a person has stated in their mental health chats their name or other details that could re-identify them. The risk is that once this is in the collected database, there might be clever ways to re-associate them with the data. Geographic references in the data could be a clue. The pattern of how someone writes their chats could be a clue. Etc.
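Here is a minimal Python sketch of a naive, regex-based scrubbing pass that illustrates the problem. The patterns and placeholder labels are assumptions for illustration; real de-identification pipelines are far more elaborate, yet even they struggle with the indirect clues just mentioned.

```python
# Illustrative only: a crude first-pass de-identification filter.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(chat_text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        chat_text = pattern.sub(f"[{label}]", chat_text)
    return chat_text

# What this does NOT catch: names mentioned in passing, neighborhood or employer
# references, and a distinctive writing style, which are precisely the clues that
# could enable re-identification once the chats sit in a central database.
print(scrub("Call me at 555-867-5309 or email jane@example.com"))
# -> "Call me at [PHONE] or email [EMAIL]"
```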
It could be that some mental health chats would have to be reduced and redacted to the degree that they no longer offer much value for the database. Anonymization might render wide swaths of the data basically no longer adequately reflective of mental health facets.
What Is A Mental Health Chat
Another twist is trying to draw the line of what constitutes a mental health chat.
Imagine that someone is chatting with AI about fixing their car. During that chat, the user brings up that their car being out of service is depressing to them. Should this constitute a mental health chat?
You could contend that it legitimately is a mental health chat because the user indicated they are experiencing depression. The other side of that coin is that they might have been casually using a figure of speech and did not intend to claim they are clinically depressed. All sorts of false positives could end up getting collected, and all sorts of false negatives could inadvertently go uncollected when they should have been captured.
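As a small illustration of how crude line-drawing misfires, here is a hypothetical Python sketch of a keyword-based filter. The keyword list is my own assumption for illustration, not a proposed legal standard.

```python
# Illustrative only: a naive keyword rule for flagging "mental health chats".
MENTAL_HEALTH_KEYWORDS = {
    "depressed", "depressing", "anxious", "anxiety",
    "insomnia", "burnout", "lonely", "hopeless",
}

def looks_like_mental_health_chat(chat_text: str) -> bool:
    """Flag the chat if any keyword appears anywhere in it."""
    words = (word.strip(".,!?") for word in chat_text.lower().split())
    return any(word in MENTAL_HEALTH_KEYWORDS for word in words)

# False positive: a figure of speech about a broken-down car gets flagged.
print(looks_like_mental_health_chat("My car is in the shop again, so depressing."))  # True

# False negative: genuine distress with none of the listed keywords goes unflagged.
print(looks_like_mental_health_chat("I can't see the point of getting up anymore."))  # False
```

Any real rule set would have to be far more nuanced than that.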
The arduous process of legally defining what constitutes a mental health chat would need to be undertaken. A large set of rules would almost surely be required. Furthermore, the rules would need to be precise enough that AI makers could readily comply. If the effort to select and scrub were Byzantine, it might legally expose the AI makers, inhibit their sharing of the data, and be overly costly to undertake.
Legal Quagmire
Assume that the details could be worked out about defining the essence of mental health chats. Legal hurdles remain and are formidable.
First, a legal argument could be made that disclosing mental health chats is a violation of the First Amendment of the Constitution (I’m focusing on a U.S. context, and will, in a future posting, revisit this same topic on a global or international basis). Do your mental health chats represent freedom of speech, freedom of thought and beliefs, and associated privacy boundaries?
Second, despite AI makers generally not coming under HIPAA health data privacy frameworks currently, this added aspect of having the AI makers contribute mental health chats to a centralized database might upend that assumption. If they were reclassified as HIPAA-covered entities, this would be a huge change for them. Mandated collection of mental health chats might also clash with state privacy laws and land in the zone of the FTC’s unfair and deceptive practices doctrine. For more about the FTC efforts in the AI mental health realm, see my coverage at the link here.
Third, the thorny question arises of whether the Fourth Amendment of the Constitution would need to be addressed in these circumstances. As you know, the Fourth Amendment establishes privacy rights and protects individuals from unreasonable searches and seizures. If the government were to require the AI makers to hand over their collected mental health chats, this could become a claimed violation of the Fourth Amendment.
Slippery Slope
There is more controversy to be had.
The assumption about this nationally collected store of mental health chats is that it would be used exclusively to gauge public mental health status. Period, end of story.
Suppose a law enforcement agency is trying to find an alleged suspect of a heinous crime. Maybe they only have vague clues about the suspect. They develop a psychological profile based on the scant clues. Aha, they then tap into the central store of mental health chats to see if this person has possibly been using AI. They get matches that appear to reflect the psychological profile. Law enforcement acts on these matches and seeks out those selected individuals in an effort to catch the suspect.
Does that seem appropriate or inappropriate?
You can add national security pursuits as another possibility for accessing the mental health chats. Insurance companies would also be eager to tap into the mental health chats. Employers would likely be interested too. On and on, the presumed need to leverage the federal database would bring forth difficult choices.
Public Reaction Fierceness
The obvious anticipated public reaction is going to be that sharing mental health chats with the government is highly disconcerting (that’s putting the situation mildly). People would potentially protest and fight against the effort to enact such laws. If such laws get passed, they will fervently contest the laws by dragging the AI makers into protracted court cases. The AI makers would be besieged with complaints and hammered by having to contend with the lawsuits underway. It would be a gigantic mess.
There is an intriguing, unanticipated public reaction that could also occur.
People might decide that they no longer will confide in AI about their mental health status. Is this good or bad? If you believe that AI shouldn’t be allowing mental health chats from the get-go anyway, you would be pleased that this is going to lessen the number of people doing so. If you believe that AI mental health chats have benefits, and that people are getting a form of modified therapy that they could not otherwise afford or access, you would be dismayed that fewer are going to lean into AI for this purpose.
Another angle is this. Some of the people who decide they won’t use the popular LLMs for their mental health chats might instead access underground LLMs that promise never to send their mental health chats to the government. This becomes a potential trap. Those LLMs might be scams. Maybe the scammers are collecting private data to sell for untoward purposes.
Finally, people might decide to craft fake mental health chats. They create an AI account and use another AI to converse with it on mental health topics. These people are aiming to foul up the central database. They want to essentially pollute the central store of mental health chats. Why? Because they disagree with the requirement or perhaps just generally dislike government interventions.
The World We Are In
The ability to gauge nationwide mental health via collecting AI chats is an amazing possibility. At the same time, it raises unprecedented ethical and legal questions. Should mental health expression be considered a special category of speech? Also, even if laws strictly state how such collected data is to be used, might this devolve into a form of onerous governmental surveillance? We are faced with a complicated tradeoff of upsides and downsides.
Let’s end with a big picture viewpoint.
It is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI, which either overtly or insidiously acts to provide mental health guidance of one kind or another, is being made available nationally and globally. It does so either at no cost or at a minimal cost. It is available anywhere and at any time, 24/7. We are all guinea pigs in this wanton experiment.
We need to decide whether we need new laws or can employ existing laws, or both, to stem the potential tide of adverse impacts on society-wide mental health. The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed. Prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.
John Locke famously made this remark: “The end of law is not to abolish or restrain, but to preserve and enlarge freedom.” The online world and the emergence of generative AI have led us to a cliffhanging precipice. Should we tap into population-level mental health chats and potentially aid mental well-being on a scale of incredible magnitude? But, in so doing, can we preserve and enlarge our freedoms? Time will tell.
