Talking Sensibly With Highly Emotional People By Using AI Such As ChatGPT To Translate Logic Into Emotional Language They Will Understand
Consider using AI to act as a logic-to-emotion translator when a touchy moment requires some help.
In today’s column, I examine the use of generative AI and large language models (LLMs) to aid in communicating with people who are predominantly emotionally based and not conventionally amenable to logic.
Here’s the deal. It seems that there are increasingly large swaths of society that are primarily operating on an unbridled emotional basis. Attempts to use logic with them as a means of communication are fraught with great difficulty, tremendous frustration, and outright hardship. The more you try using logic, the worse things seem to become. They are only attuned to emotions and emotional language. Period, end of story.
What you need is a helpful real-time translator. The aim is to translate logical ideas and statements into emotional forms of conveyance. Generative AI can do this. You can then use the generated emotional language as a means of engaging in a dialogue with the emotionally based person. It’s fine to convey the generated verbiage in your own words. You don’t need to strictly abide by the AI-produced wording. The goal is simply to get you into the ballpark of what will resonate with the emotionally charged receiver.
This use of AI can be extremely handy, though it isn’t a cure-all and won’t magically bring you eye-to-eye or mind-to-mind with someone who seems impervious to logical reasoning. As they say, sometimes something is better than nothing. Give it a whirl.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.
Background On AI For Mental Health
I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI over its alleged lack of AI safeguards when providing mental health advisement.
Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.
Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.
Emotions Gaining Mental Popularity
Shifting gears, let’s discuss the topic of emotions, emotional thinking, and emotional language. I will use a two-person contrast scenario to illustrate the communication difficulties between a logic-based person and an emotionally predominant person.
Suppose we have a person who tends to be logic-based. Their utterances are relatively structured and analytical, making use of stepwise reasoning. Meanwhile, a second person comes along who leans into emotions and emotional language. They are principally focused on affective processing, identity, belonging, and the perceived intent of others. This is how they talk, and, notably, this is also how they listen to others.
The logic-based person attempts to communicate with the emotion-based person. If the discussion involves any matter of substance, the odds are that the communication between the two will get mired in misunderstanding and become messy. The logic-based person tries to invoke logically cogent assertions based on facts and reason. The emotion-based person is likely to hear deceptive intent, including the perception that they are being inappropriately bombarded with obtuse argumentation.
In the mind of the emotion-based person, there is a conjecture that they are being told “you are empty-minded” and “I’m smarter than you”. This might not be the case. It is an interpretation of the words being spoken and how they are delivered. The point is that even if the logic-based person is trying to walk on tiptoes, the other discussant is going to dramatically perceive the discussion in ways that might not seem apparent or necessarily warranted.
To clarify, that’s not to say that there aren’t times when a logic-based person completely overplays their hand. This does happen. A logic-based person might act high-and-mighty. They might be condescending. Those are factors that quite obviously worsen communication. But do keep in mind that a logic-based person might not be doing any of those adverse actions, yet still be faced with immense challenges in trying to engage in a genuine dialogue with an emotionally laden person.
Emotional Bubbles
An interesting sidenote is that people of an emotionally focused tendency are likely to spend more time with other emotionally laden people. It is the birds of a feather flock together circumstance. They often eschew interacting with logic-based people and keep those people out of the flock. In a sense, you could say that they prefer interacting within their own bubble. An emotional bubble.
In a research article entitled “Personal AI, Deception, And The Problem Of Emotional Bubbles” by Philip Maxwell Mlonyeni, AI & Society, 2025, these salient points were made about emotional bubbles (excerpts):
- “Emotional bubble is a paraphrase of the more well-known phenomenon of an ‘epistemic bubble’ — a social condition where our interactions are exclusively with others who share the same emotional attitudes as we do.”
- “There are at least two problems with emotional bubbles.”
- “First, emotional bubbles are likely to stunt emotional growth and cripple our ability to form diverse social relationships.”
- “If our main experience of emotional connection is with someone who is identical to ourselves, we will be wholly unprepared to meet and negotiate with people who do not share our emotional attitudes.”
- “Second, there are those who argue that shared emotions are constitutive of shared values. Taking part in a shared pattern of emotional attitudes, on this view, gives our personal values interpersonal validation, thus elevating them to the status of shared values. Emotional bubbles can, however, only give the appearance of external validation.”
As pointed out, the aspect of remaining cocooned within an emotional bubble has unfortunate consequences. Those people gradually lose the ability to converse with logic-based people. This is due to not having to interact in that mode. If you don’t use it, you lose it, as they say. Another downside is that the emotional bubble reinforces their emotional cadence. It seems perfectly normal and accepted. Thus, when going outside the emotional bubble, it is hard to adjust. You become anchored in your ways of communicating solely or predominantly on an emotional basis.
Generative AI As A Translator
Modern-era generative AI, such as ChatGPT, can help by serving as a logic-to-emotion translator.
Here’s how it works. The LLM can take as input a logic-oriented statement or commentary and provide as output an emotionally framed wording of the same points. Likewise, the AI can receive emotional statements or commentary and provide as output a logic-based framing. This is a two-way deal. The AI is an emotion translation layer that acts as a bridge between two starkly different cognitive styles.
Contemporary generative AI can do this rather effortlessly. Nearly all the popular LLMs have already scanned content on the Internet during their initial training that details how people communicate. From that content, the AI has identified patterned ways to deal with doing these types of translations.
The beauty of using AI for this quest is that the AI can be a real-time cognitive support tool. At any time of the day or night, you can tap into generative AI to get a translation. You might be immersed in a human-to-human conversation and realize that the existing interaction is not making progress. Thus, you opt to bring up the AI on your smartphone and ask it to help with doing this type of translation.
You type in what you are trying to convey and can get a translated version. Either use that version when speaking to the person or refine the wording to fit your preferences. Think of this like going to a different country and having a natural language translator in your pocket. In this case, the translation is between logic-based natural language and emotion-based natural language. It is a similar problem to translating from one native natural language to some other foreign natural language.
Template For Logic-To-Emotion Translation
To demonstrate what a popular generative AI can do, I logged into ChatGPT and pretended to need a logic-to-emotion translator. I’d suggest that you explicitly tell the AI that you are seeking this type of translation. If you merely start a chat and don’t clue in the AI, there is a strong chance the AI is going to wander afield or give you responses that don’t seem on target. A short, instructive prompt to set the stage is a sensible way to proceed.
Here’s a templated prompt that you can use:
- Templated prompt for logic-to-emotion translation: “I want you to go into a logic-to-emotion translation mode. I communicate in a logical, analytical, evidence-based way, but I am speaking with someone who responds primarily through emotions, values, and relational cues. Your role is to help me translate my reasoning into language that emotionally resonates while preserving the factual substance. Help me recast my messages so that each one begins with validation, signals shared goals, reduces defensiveness, and invites collaboration rather than correction. Your primary aim is to enhance empathy and clarity. I will also potentially provide you with the responses of the emotionally based person and ask that you translate those remarks into a logic-based formulation.”
You are welcome to use and/or refine the templated prompt.
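For readers who would rather wire this up programmatically than paste the template into a chat window, here is a minimal sketch, assuming the OpenAI Python client and an OPENAI_API_KEY environment variable. The model name, the TRANSLATOR_INSTRUCTIONS constant, and the translate helper are illustrative choices of mine, not fixtures of any particular product, and any capable chat model could be swapped in.

```python
# Minimal sketch: set the templated prompt as a persistent system instruction
# and send each message you want translated as a user turn.
# Assumes the OpenAI Python client (pip install openai) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

# Condensed version of the templated prompt; adjust to taste.
TRANSLATOR_INSTRUCTIONS = (
    "Go into a logic-to-emotion translation mode. I communicate in a logical, "
    "analytical, evidence-based way, but I am speaking with someone who responds "
    "primarily through emotions, values, and relational cues. Translate my reasoning "
    "into language that emotionally resonates while preserving the factual substance: "
    "begin with validation, signal shared goals, reduce defensiveness, and invite "
    "collaboration rather than correction. If I paste the other person's remarks, "
    "translate those into a logic-based formulation instead."
)

def translate(message: str) -> str:
    """Send one message through the logic-to-emotion translator and return the rewrite."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": TRANSLATOR_INSTRUCTIONS},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(translate(
        "I think we should verify what actually happened before confronting Sam. "
        "Acting without confirming the facts could lead to misunderstandings."
    ))
```

Putting the template into the system role, rather than repeating it in every message, keeps the translation instructions in force across the whole session, which mirrors how you would set the stage once at the top of a chat.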
A logic-based person might not need the AI to do the emotion-to-logic translation, since they can often grasp what the other person is saying without help. In any case, I’ve included that direction in the templated prompt so that the AI has a heads-up on the possibility.
Another worthwhile point is to realize that your body language makes a huge difference, too. The AI is mainly going to give you a set of words that you can use. If the way you say the words is abrasive or out-of-tune, those words are not going to land well. The same with your posture and overall demeanor. Not even Shakespearean wording will get around the fact that your lips are saying one thing, but your body is expressing something else entirely.
One other vital aspect to keep in mind is that, despite whatever prompt you use, generative AI is like a box of chocolates – you never know what responses you might get. The AI could do a wonderful job with the translations, and then suddenly falter here or there. The AI could do a lousy job translating, and you might decide to abandon using it for that particular moment. Make sure to keep your wits about you and not allow the aura of modern AI to sweep you into becoming mindless.
Example Of AI Logic-To-Emotion Translation
After using the templated prompt, I decided to engage in a chat with the AI and see how it might handle a relatively common circumstance. The scenario is perhaps one that you’ve seen or experienced directly.
You are conversing with a coworker. The matter has to do with a worker in another division who has made a mistake, and the mistake impacts you and your coworker. You want to proceed cautiously, first verifying that it was that person’s fault, and you prefer to handle things privately. Your coworker is steamed about the mistake and believes that swift and widely broadcast attribution of fault is the immediate course to take: don’t beat around the bush, just call out the person with gusto.
Here is a snippet of the human-to-human conversation taking place in this scenario.
- Logic-based person: “I think we should verify what actually happened before confronting Sam. Acting without confirming the facts could lead to misunderstandings and might come across as unfairly accusatory.”
- Emotion-based coworker: “So you just want to sit around and do nothing?”
- Logic-based person: “No. I’m saying it’s rational to gather information first. We might not have the full picture.”
- Emotion-based coworker: “That sounds like avoiding the issue.”
The logic-based person is not making headway. The emotion-based person perceives a lack of moral courage and is reacting strongly to the attempt at logical sequencing. Furthermore, the logic-based person’s insistence on being “rational” is bound to trigger the emotion-based person. It is as though the logic-based person is name-calling and dismissing the emotion-based person out of hand.
Not good.
Using AI As The Translator
I provided the human-to-human dialogue to generative AI and asked the AI to do a translation for me.
Here’s what happened.
- Generative AI response: “Your coworker is telling you that integrity and moral courage are paramount to them. Instead of you leading with procedure, you should consider leading with shared values. You want to show that your caution is about doing justice correctly with integrity and moral courage, not avoiding it. Frame your wording to protect fairness. Would you like me to offer you a sample of what might be said?”
- User entered prompt: “Yes, please provide such wording.”
- Generative AI response: “Make sure to express this in your own words and with the appropriate mannerisms and tone: ‘I care about standing up for what’s right too. That’s exactly why I want to be sure we’re acting on solid ground. I don’t want us to risk hurting someone unfairly when our goal is to do the right thing.’”
This approach attempts to anchor your voiced logic into underlying human values that are driving the emotion. It might work.
A caution to keep in mind is that there is a chance that the coworker is not going to react well to the use of generative AI in this situation. They might get upset that you are turning to AI. It could be seen as disrespectful and outright insulting. Imagine a response from the coworker saying, “Do you let AI do all your thinking for you?” and the rest of the conversation now spins down a rabbit hole that has little to do with the direct matter at hand. You’ve turned one ongoing argument into two fiery arguments.
Core Insights To Be Had
Emotionally oriented communication often revolves around these four keystones:
- (1) Identity — “What kind of people are we?”
- (2) Loyalty — “Are we standing up for what’s right?”
- (3) Respect — “Are you taking me seriously?”
- (4) Moral alignment — “Does this align with my values?”
Logic-oriented communication typically revolves around these four factors:
- (1) Consistency
- (2) Valid premises
- (3) Avoiding errors
- (4) Proper sequencing
When logic ignores the identity or value layer, it feels cold. And when emotion ignores the reasoning layer, it feels reckless. Generative AI can serve as a bridge by asking this question: “What value is the emotion protecting?” It can then reframe the logic as a catalyst for that value.
More Vital Factors
There are additional factors that come into play when using AI in this manner.
Suppose the AI provides an answer that seems convincing but gives you a translation that turns out to be unwise. That can happen when the AI produces a so-called AI hallucination, see my explanation at the link here. The AI can confabulate, producing a plausible-looking answer that is factually incorrect. It looks right, but is misleading or inappropriate.
Privacy issues also enter into the picture.
Most people assume that their use of AI is considered private and confidential. Nope, that’s rarely the case. The AI makers typically stipulate in their online licensing agreements that any chat you have with the AI can be inspected by their developers. Furthermore, your chat can be used to further train the AI. Bottom line is that your privacy and confidentiality are not guaranteed, and potentially, you are opening yourself to privacy intrusions.
The World We Are In
It is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI, made available nationally and globally, is either overtly or insidiously providing mental health guidance of one kind or another, doing so either at no cost or at a minimal cost. It is available anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.
The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed. Prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.
Plato famously made this remark: “Human behavior flows from three main sources: desire, emotion, and knowledge.” I bring this up because sometimes a logic-based person will seek to excise emotion from a conversation, doing so under the righteous assumption that emotion doesn’t merit attention. A different angle is to acknowledge, as Plato did, that emotion is part and parcel of the human equation. Go ahead and use modern AI as a translator and cope with emotion rather than blindly pretending that it can summarily be expunged.
