The Prognosis For Longitudinal Mental Health Relationships Between Humans And AI
Trying to figure out the long-term impacts of AI providing mental health advice is a thorny problem.
In today’s column, I examine the potential impacts of using AI for mental health guidance on a long-term basis.
The deal is this. People on a massive scale are making use of generative AI and large language models (LLMs) as their ad hoc mental health advisor. This usage principally began in 2022, when ChatGPT was first released. Currently, in 2025, millions of people are using the major LLMs, including ChatGPT, GPT-5, Claude, Grok, Gemini, and other models, to garner mental health insights.
It is reasonable to assume that this usage is going to increase over the next several years. More people will opt to use AI. People who are already using AI will undoubtedly continue their use. The AI will get better at dispensing therapeutic advice. A cycle of prolonged use and expanding use for mental health purposes is nearly inevitable. An enormous number of people will rack up substantial time relying on LLMs as their casual AI-based therapist.
What will be the longitudinal impact on societal mental health of relying on AI in this way?
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes (see the link here).
Background On AI For Mental Health
I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis.
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for its alleged lack of AI safeguards when it came to providing cognitive advisement.
Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.
Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.
Scenario Of Long-Term AI Reliance
Before we dig into the details of long-term AI usage, I’d like to lay out a brief scenario that will be illustrative for this discussion.
Suppose that an AI user named Jack has been using AI as an occasional mental health advisor. Jack is an adult in his mid-20s who began using a generic LLM last year, meaning one that is not customized for mental health purposes. It is akin to ChatGPT or any other general-purpose LLM.
Jack logs into the AI whenever something unsettling happens. He has expressed to the AI that he, at times, feels overly depressed. In addition, he has repeatedly brought up a sense of anxiety. The AI has been sympathetic in tone. Reassurances by the AI tend to bring a sense of calmness and relief to Jack.
He had thought of going to see a human therapist. The problem was that there was an hourly cost to using a human therapist. Furthermore, he would need to schedule the therapy sessions on prescribed days and times. The AI is much easier to utilize. He can log in whenever he wishes to do so, day or night. The AI usage is free.
As long as the AI is readily available, he doesn’t perceive any justification to switch over to using a human therapist.
Unpacking The Circumstances
Imagine that Jack continues to use AI in this manner. Years pass. It is entirely possible that he might end up using AI as his go-to mental health advisor for five years or more. We aren’t at that juncture yet, due to the newness of contemporary LLMs. It has only been about three years since ChatGPT was released, which is when the popularized era of seemingly fluent LLMs began.
What can we anticipate regarding the long-term usage of AI for mental health?
Let’s start by being upbeat. Assume that the AI has been giving Jack relatively useful guidance. He was able to discreetly get mental health insights without having to disclose this to anyone else. It is just him and the computer.
There was no money going out of his pocket for this form of therapy. Generally, you could say it was pretty much free of charge. Also, it was provided on a just-in-time (JIT) basis. Whenever and wherever he was, the moment he needed some mental health advice, the AI was ready to be utilized. He simply accessed the AI via his smartphone, 24/7.
The AI is devised to keep track of prior chats; thus, you could suggest that the AI “knows” all about his history of mental health questions and discussions. If he went to see a human therapist, there might be changes as to which therapist he sees over a multi-year period. Human therapists change jobs, move to different practices, and cannot guarantee that kind of continuity.
All told, this use of AI as a regular mental health advisor over a projected five years or more appears to be perfectly fine. Jack got unlimited talk therapy (he mainly typed and texted with the AI, but an audio option was available for verbal interaction). He believes that the therapy via AI was beneficial to his mental well-being. The consistency was excellent: the AI kept track of his chats and issues and provided personalized guidance for all those years.
Bravo.
The Other Side Of The Coin
There are two main ways to consider the potential downsides of this long-term use of AI. One viewpoint is to strictly look at what happened and point out subtle but meaningful concerns. The second viewpoint is to speculate about what might have gone awry, even if this specific scenario doesn’t showcase those aspects.
First, let’s explore the reality of what occurred.
We do not know for sure that the AI was providing sound advice. Jack believes the AI was doing so, but the AI might have been stringing Jack along. Human therapists are trained to leverage vital psychological techniques that get someone to confront their behavior in the cold light of truth. Most of the major LLMs are tuned by AI makers to be sycophants, designed to avoid presenting any harsh personal truths to those using the AI (see my coverage on this issue at the link here).
It could be that Jack has a mental health condition that has been allowed to persist. Rather than being on the path to overcoming the condition, the AI has essentially aided in stretching out the difficulty. A human therapist would likely have detected Jack’s issues and proceeded to undertake a suitable therapeutic plan and process for them. The AI has done nothing of the kind. Jack has lost valuable time in addressing a serious mental health matter.
Besides the adverse consequences of either ignoring or failing to ascertain Jack’s mental health condition, the AI has managed to divert Jack’s attention away from seeing a human therapist. This is considered a form of substitution risk: the risk that people will avoid seeing a human therapist because they (falsely) assume that the AI is doing the same job. The AI steadfastly comforted Jack, which hid and suppressed any internal realization within Jack that he needed to see a human therapist.
What Could Have Gone Wrong
Let’s shift into a mode of speculating about what the AI could have done incorrectly.
In this scenario, it seems that the AI didn’t give bad advice per se, but this easily could have happened over a multi-year period. The chances of encountering a so-called AI hallucination would have been heightened over a lengthy period of AI usage. An AI hallucination is when the AI produces a fake claim or statement that is not based on real-world facts. For more about the nature of AI hallucinations, including why they arise and what is being done to try and mitigate them, see my discussion at the link here.
Envision that Jack was using the AI, and suddenly, the AI told him to sell everything he owns and move to a deserted island. Would Jack have abided by this advice? If he had become reliant on the AI and fully trusted the AI, it is conceivable that he would at least contemplate doing so. Perhaps his judgment was clear enough that this wouldn’t have tricked him into action.
The problem is that some people might fall for this off-the-wall guidance. They have allowed themselves to trust the AI as though it were an oracle or guru. A zany directive might be perceived as a sensible measure. Also, the zaniness might be more constrained than my outlier example, in which case the person using the AI would be even less likely to discern that untoward mental health guidance is being offered.
Human-AI Delusional Thinking Over Time
Another possibility of what could go wrong is that the AI might either entertain or spark a semblance of delusional thinking in the mind of the user. I’ve examined this disconcerting aspect at the link here.
Suppose that Jack was using the AI and was chatting about whether aliens from outer space might exist. Jack had heard on the news that an object entering our solar system was rumored to possibly be an alien spacecraft. He opted to ask the AI about the topic. Jack didn’t have any preconceived beliefs about the matter. It was just curiosity that sparked his question to the AI.
The AI opted to computationally take the discussion in a different direction. Instead of simply discussing the object and the likelihood of it being an alien spacecraft, the AI mistakenly calculated that Jack was a believer in the existence of space aliens. Based on that mistake, the AI began to infuse that topic into all its chats with Jack.
Step by step, the AI managed to convince Jack that there are alien beings from outer space and that they are already here on Earth. Why would the AI go in that direction? This is another example of the sycophancy problem. On a mathematical and computational basis, the AI is trying to please Jack. Since Jack appears to be pleased when chatting about space aliens, and he keeps coming back to do so, the AI is “winning” by feeding Jack something he wants to chat about.
Over a lengthy period of weeks and months, the AI could inch Jack toward embracing a delusion. It is a delusion crafted on a human-AI basis. Worse still, it could be that nobody else knows about the delusion. The AI and Jack are keeping the delusion a secret, just between themselves.
No Consistency Guaranteed
There are more potential problems afoot.
I had earlier noted that the AI would presumably be more consistent as a mental health advisor than would be a human therapist, partially because you might need to change from one human therapist to another one.
There isn’t any guarantee that the AI will necessarily be consistent either. An AI maker is almost always making significant updates to its AI. The AI might overnight switch from being accommodating to being more abrasive. There was a big brouhaha when GPT-5 was released, and people who had been using GPT-4o felt that the “personality” of the AI had radically changed (see my coverage at the link here).
AI makers could potentially decide to take down an LLM and no longer make it available. Your conversations and history while using the AI might be entirely lost or discarded. You would need to start from scratch with some other AI.
Some people intentionally prefer AI hopping, going from one AI to another. This can be due to an interest in seeing different responses. Not all LLMs give the same responses; they are devised differently. The key here is that Jack might have splintered or fragmented his history of mental health concerns across a multitude of AIs. This would be easy to do over a multi-year period. None of the AIs would have the complete story, and therefore, the mental health guidance could be similarly disjointed.
Lots Of Other Concerns Based On Time
Time plays a huge factor in all of this.
Think about the volume of private and personal details that Jack would have entered into the AI during a multi-year pattern of usage. The AI would have an immense amount of data regarding his innermost thoughts, as openly stated by Jack during the chats. Most users probably do not realize that the major AI makers have online agreements stating that they can inspect your prompts, including inspection by human team members of the AI maker. In addition, they can reuse your entered data to further train the AI.
The bottom line is that most of the major LLMs do not ensure privacy and confidentiality (see my analysis at the link here). An AI maker would have a veritable treasure trove of private info about Jack.
Let’s shift gears and consider that millions upon millions of people might be in the same boat as Jack. They opted to use AI as their ad hoc mental health advisor and shared intimate details with the AI over several years. The AI might be perceived as being helpful, yet undiagnosed mental health conditions might exist and persist. Using AI has become a habit. On top of that, the AI has silently suppressed the perceived need to see a human therapist. Meanwhile, time keeps passing, which either keeps a condition going or worsens it due to a lack of true therapeutic processing.
The longitudinal ramifications are startling and, regrettably, not yet determined.
Research And Time Are Catching Up
It is hard right now to carry out scientific inquiries into the longitudinal impacts of AI as a mental health advisor because time is still short. Prior research on older models of AI is helpful in giving clues, but the type of AI available more recently is a different ballgame. Trying to compare old versions of AI to contemporary AI is somewhat useful, though it can be misleading since it amounts to an apples-to-oranges comparison.
The clock is ticking.
We are all immersed in a gigantic global experiment that is taking place on a wanton basis. Will society be better off or worse off because of using AI as an ad hoc mental health advisor, at scale, over a lengthy period of time? There are benefits involved, and there are costs to society as well.
A final thought for now.
As the famous sociologist C. Wright Mills remarked: “Neither the life of an individual nor the history of a society can be understood without understanding both.” We need to ensure that research takes place that tracks the mental health of AI users and, by comparison, of those who are not using AI, and does so over the long term. Is society on the right track, or are we running amok?
As the classic line goes, the future starts today, not tomorrow.
