Getting Free Mental Health Advice By Calling A Phone Number That Connects You To AI-Generated Psychological Guidance
Calling AI to get on-the-spot mental health advice.
In today’s column, I examine the latest capability of calling a phone number to voice-interact with generative AI and large language models (LLMs) for mental health advice, along with its associated ramifications.
Here’s the deal. It is very easy these days to connect to AI via voice interaction. A phone number can be set up so that the AI will interact with you as though you are calling a friend or a customer service line. The LLM can be instructed beforehand to focus on a particular topic or realm, such as coping with mental health aspects. This might be available for free, or at a modest cost, or be ad-driven. The AI can be a generic version, such as ChatGPT, Claude, Grok, Copilot, Gemini, etc. It is also possible that a specialized LLM shaped specifically for mental health guidance might be utilized.
Is this a good way to use contemporary AI, or does it seem ominous?
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a brief background, I’ve been extensively covering and analyzing various facets of the modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.
Background On AI For Mental Health
I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.
Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.
Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.
Text Chatting With AI Is The Mainstay
Most people tend to interact with generative AI on a text-oriented basis, either via their smartphone or while using a laptop and typing their messages with the AI.
Fewer seem to know that most of the major AI makers have also made their LLMs available via voice. You can speak to the AI, and it can speak back to you. Some AI firms have even set up a toll-free number so that you can call the AI and interact over a conventional phone via voice. This is handy if you don’t have a smartphone and only have a less-capable voice-only phone.
Why would someone opt to use voice interaction instead of typing or texting with AI?
As just noted, it could be that a person has a voice-only phone and cannot access the AI in any other way. I would wager that’s unlikely. Nowadays, people tend to have smartphones.
The main reason to use voice is convenience. People often do not like to type and find it easier to simply speak aloud whatever they have to say. You don’t have to squint at a keyboard. You don’t have to hunt and peck for the right keys to press. Speaking is more fluid, nearly as easy as falling off a log.
A related facet is that interactions with AI might be lengthy. If you only have a simple question to ask, okay, you might be fine with typing it. But if you are going to carry on an entire dialogue, the amount of typing gets laborious. The added frustration is that while typing, you can make errors in what you intended to type. The AI might not get the gist of your entry. Lo and behold, you must type it again. Frustrating and exasperating.
Voice And AI Mental Health Advice
Consider the special use case of interacting with AI on mental health aspects while using voice capacities.
Is there any substantive difference between typing versus speaking to AI when having mental health chats?
Generally, the AI is going to respond in the same manner either way. If you give the AI the same prompt in written form and via voice, by and large, it will give the same response. This is somewhat tricky because generative AI is non-deterministic and uses probabilities to compose responses; thus, each use and each prompt will get slightly different answers, but that’s a technical point. The crux is that the AI doesn’t differentiate between entries via writing versus speaking (assuming that the same words are being used).
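The non-determinism mentioned above stems from how an LLM composes its reply: at each step, it samples the next word from a probability distribution rather than always taking the single most likely choice. A toy sketch in Python illustrates the idea; the vocabulary and probabilities here are invented purely for illustration:

```python
import random

# Hypothetical next-token probabilities; a real model scores tens of
# thousands of candidate tokens, but the sampling idea is the same.
next_token_probs = {"calm": 0.5, "relaxed": 0.3, "at ease": 0.2}

def sample_token(probs, rng):
    # Weighted random draw over the candidate tokens.
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
picks = [sample_token(next_token_probs, rng) for _ in range(5)]
```

Because each pick is a weighted random draw, two runs of the same prompt can wander down slightly different word sequences, which is why a typed prompt and an identical spoken prompt yield responses that are the same in substance but rarely word-for-word identical.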
A potential twist is the tone of your voice.
In written text, you don’t impart a tone unless the words themselves include tonal phrasing. Your spoken words can carry an added layer of undertone. Maybe you raise your voice and are yelling. Perhaps you utter your words with heavy sarcasm. The AI might computationally take the vocalized tones into account. If you mindfully say your words in a relative monotone, or if you explicitly tell the AI to ignore the tonal characteristics, the AI is going to try to treat the words almost on par with having received typed words.
Hearing By Humans Is A Big Difference
There is, though, a significant difference between speaking and writing when it comes to the human side of things and crucial cognitive considerations. By speaking to AI, you almost get the feeling that you are interacting more with a human than a machine.
Typing has a sense of machine-like aspects to it. We are accustomed to speaking with fellow humans. When you are seeing a human therapist, you have sessions where you speak with them. Indeed, it is commonly referred to as talk therapy. The idea then is that by speaking with AI about your mental health concerns, the interaction via voice will seem natural and not quite artificial.
The good news is that you might be comfortable with the AI and share aspects that you would not have typed or that would have been cumbersome to type out. The bad news is that people using AI for mental health on a voice interaction basis are bound to overinflate what the AI is capable of. They assign human qualities to the AI.
They anthropomorphize AI.
It is a slippery slope. A person engages the AI and seeks mental health guidance. Because the interaction is via voice, the person gives undue credit to the AI. The person lets their guard down. While texting, their guard might have been up, or at least on semi-alert. Voice interaction slides into their inner mind, and they become complacent. Whatever the AI tells them is perceived as prudent and actionable. Not good.
Perceived Privacy Of Voice Utterances
Another misperception is that people tend to think of the spoken word as something that comes and goes, not something that is kept or retained.
When people type messages to interact with AI, they likely realize that the typed words may be stored somewhere inside the AI and that someone might later come along and see what they typed. The odds of that are higher than the public realizes. The AI makers typically state in their online licensing agreements that they reserve the right to inspect your entries, including having their AI developers do so, and that they can reuse your prompts. They might use your personal prompts to further data-train their AI. You aren’t guaranteed much, if any, true privacy.
Voice interaction seems like an entirely different beast. We know that when speaking with a fellow human, they aren’t likely capable of recalling word-for-word what you have said. Any lengthy conversation is not kept precisely in their noggin. Their brain is not recording specific words per se. They are mushing together what you’ve said.
The assumption is that interacting with AI on a voice basis is roughly the same. In one ear of the AI, out the other. The thing is, some AI makers digitally record your voice input. They want to keep the original voice utterances. Others don’t keep the voice and only keep the digitally transcribed version in a text format. Overall, the point is that just because you are speaking your words, it doesn’t offer any special privacy or added protection. It falls under the same rubric as having typed your words.
Access To AI Via Phone
AI makers often have an 800 number or equivalent so that people can access their generic AI via phone. Meanwhile, it used to be complicated to set up an AI of your own that could be accessed via a phone number of your choosing. Nowadays, it is easy-peasy.
You can readily establish one of those free phone numbers you can get online and then connect it to generative AI via an API. You might create your own GPT that focuses on mental health, including doing so with ChatGPT. See my explanation of creating GPTs at the link here.
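As a rough illustration of the wiring involved, here is a minimal sketch of one dialogue turn on such a phone line. The telephony layer (speech-to-text on the way in, text-to-speech on the way out) and the `llm` function are stand-ins for whatever provider is used, and the system prompt is illustrative, not a vetted clinical prompt:

```python
# Sketch of one turn of a phone dialogue wired to an LLM. The telephony
# provider is assumed to transcribe the caller's speech into caller_text
# and to speak the returned reply back to the caller.

SYSTEM_PROMPT = ("You are a supportive listener focused on mental health. "
                 "You are not a licensed therapist.")

def handle_turn(caller_text, history, llm):
    """Append the caller's transcribed speech to the running history,
    ask the model for a reply, and return the text to speak back."""
    history.append({"role": "user", "content": caller_text})
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] + history
    reply = llm(messages)  # e.g., a wrapper around a chat-completions API
    history.append({"role": "assistant", "content": reply})
    return reply

# Usage with a stub model standing in for a real API call:
def stub_llm(messages):
    return "I hear you. You said: " + messages[-1]["content"]

history = []
spoken = handle_turn("I have been feeling anxious lately.", history, stub_llm)
```

The point of the sketch is that the phone number is just a front end; the system prompt and conversation history are assembled into an ordinary LLM request on every turn.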
Companies that opt to develop specialized LLMs that emphasize mental health guidance will, at times, decide to make their LLM available by phone. They set up a phone number and tell people they can call to get therapy-like guidance. It might be free. It might require using a phone number that you get charged for using. It could be that you need to sign up and pay a fee. There might be ads during the calls, which is how they monetize the AI being available over the phone.
Be extremely cautious about using such phone numbers.
First, it could be a scam. An evildoer posts that there is this super-duper free phone number that has unlimited AI-driven mental health advice. Exciting. You call it. The system grabs your phone number, and now you are going to get spam and telemarketing nonstop. Once you connect to the AI on the phone, it might ask you for billing information. Again, if it’s a scam, this is going to be used to trick you and aim to take your money, your identity, and the like. Do not fall for it.
Second, suppose it isn’t a scam. A possible downside is that some nutty person has decided to make available AI that they have tuned to confound you. The AI acts like it is trying to give mental health advice. But the person told the AI to be sneaky and try to get callers to do stupid things. Why? It could be for kicks. It could be they want to tout this on social media. Lots of untoward reasons exist.
Third, even if it isn’t a scam, and even if a person sets up the AI on an aboveboard basis, they might be using AI in a manner that isn’t going to produce sound mental health guidance. Maybe they used a one-line prompt and told the AI to give mental health advice. They believe this is sufficient. The reality is that the AI won’t be giving useful guidance and might go awry.
Keep your wits about you.
When Others Around You Can Hear Things
Imagine that you have found a reliable phone-based AI mental health capability.
You relish using the AI in this manner. You get on your cell phone and call the AI. Anytime and anyplace. The AI is ready and at your service. You tell the AI what mental qualms you are facing. The AI speaks to you, calms you down, and gives useful advice. It’s a perfect arrangement.
There is a huge downside to the spoken word, namely, it can potentially be overheard by others.
There you are, on the subway, and you decide to speak on your cell phone. You are talking up a storm with your voice-connected AI mental health capability. The people seated near you can hear every word you say. They know your life story. They hear that you are depressed and maybe have ADHD.
Meanwhile, the AI speaks to you and offers guidance. Again, perhaps people nearby can overhear what the AI is telling you. They hear the AI explaining that you are mentally troubled. The AI rattles off the ten things you’ve done that are signs of a potential mental disorder.
I realize that you could just use earbuds or a headset to avoid being overheard. That would indeed be a means of proceeding. Let’s assume that people cannot overhear what the AI is telling you when you use earbuds or a headset. The problem is that they could still overhear what you are telling the AI. Yes, you could try whispering rather than speaking loudly, but the odds of being overheard remain.
Texting is usually private with respect to others around you. It is hard for someone to peek at your screen or discern what you are typing. It is not impossible to be intruded upon; it is just likely harder than when using voice and the spoken word.
As an aside, I am surprised that I see and hear so many people on a subway who are already telling a friend or loved one their entire life’s story via their cell phone. It happens all the time. By the time I reach my stop, I have heard that they have this or that health issue, they have cheated on their partner, they have stolen from their employer, and so on. I guess that such people maybe wouldn’t care if others overhear their AI psychological chats.
To each their own, as the famous saying goes.
The AI Knows You Or Doesn’t Know You
There are different ways to structure the AI that is going to be available via phone for mental health dialoguing.
One of the least sophisticated ways is that each time you call, the AI interacts without any record of the interactions you’ve previously had. Each call you make will start fresh. The AI will not have access to any of the earlier dialogues you might have undertaken. The upside is that you are presumably not being kept in a database by the AI. You are a complete stranger each time you call.
The obvious downside is that you will need to start over with whatever history you have. Perhaps the last time you called, you spent twenty minutes explaining your childhood. Now, you hope to have the AI leverage that history when giving you mental health advice. Nope, the AI doesn’t know you from a hole in the wall.
A more capable approach has the AI keeping track of your calls. This might be done by collecting the phone number from which you are calling. Each time you call, your prior chats are kept via your phone number. The AI looks up those chats. You are ready to roll.
This might not be great if you have multiple phones or use an online voice service to access the AI. The difficulty is that you aren’t using the same phone each time. Your chat history will be fragmented.
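The phone-number-keyed tracking described above can be pictured with a small sketch. The `CallMemory` class and the sample numbers are hypothetical; a real deployment would use a database rather than an in-memory dict:

```python
# Sketch of keeping per-caller chat history keyed by the caller's phone
# number. The caller ID would come from telephony metadata on each call.

class CallMemory:
    def __init__(self):
        self._histories = {}  # phone number -> list of prior exchanges

    def recall(self, phone_number):
        """Return prior chats for this caller; empty for a first-time caller."""
        return self._histories.setdefault(phone_number, [])

    def remember(self, phone_number, caller_said, ai_replied):
        self.recall(phone_number).append(
            {"caller": caller_said, "ai": ai_replied})

memory = CallMemory()
memory.remember("+15551234567", "I had a rough childhood.", "Tell me more.")

# A later call from the same number picks up the stored history:
prior = memory.recall("+15551234567")

# A call from a different phone starts fresh, illustrating the
# fragmentation problem when one person uses multiple numbers:
fresh = memory.recall("+15559876543")
```

The empty history for the second number is exactly the fragmentation issue: the AI has no way to know the two numbers belong to the same person unless some other identifier ties them together.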
Another possibility would be to have you use a pin or some other distinctive identification. One handy form of identification would be your voice. Yes, the AI does pattern recognition on your voice and immediately detects your voice fingerprint. It then retrieves your prior chats.
A word to the wise is that even if the AI is advertised as not tracking you, or the AI tells you upfront that you aren’t being tracked, that could be a lie. The AI might be instructed to keep track of your voice and create a voice fingerprint. It might be instructed to track the phone numbers of the callers.
It’s a cruel world out there.
The Direction Ahead
The terrain of AI is the human psyche.
It is incontrovertible that we are now amid a grandiose worldwide experiment in societal mental health. The experiment is that AI, which either overtly or insidiously acts to provide mental health guidance of one kind or another, is being made available nationally and globally. It does so at no cost or at minimal cost. It is available anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.
The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed. Prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.
A final thought for now.
Epictetus famously made this remark: “We have two ears and one mouth so that we can listen twice as much as we speak.” People like to talk. They often like to listen. Getting AI mental health advice can be undertaken by talking and listening. Be astute if you go this route. Aim to treat what you have to say as golden and make sure you are doing this in the right way with the right AI.
