
Emergence Of AI Personas As Simulated Therapists And Synthetic Patients For Psychotherapy Training And Research


In today’s column, I examine the use of AI personas to simulate the roles of therapists and patients in the realm of psychotherapy and mental health, which is increasingly being undertaken for training purposes and to conduct important scientific research.

The use of AI personas is one of the least leveraged and yet amazingly powerful intrinsic features of modern-era generative AI and large language models (LLMs). An AI persona is relatively simple to invoke. You enter an instructive prompt into an LLM and tell the AI to pretend to be a person of some kind or another. It could be that you want the AI to mimic a known celebrity or historical figure, or that you merely want the AI to pretend to be a particular personality or type of person. Voila, the AI will engage you in a dialogue as though you are interacting with that person.
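To give you a tangible sense of just how simple this is, here is a minimal sketch in Python. It assumes the OpenAI Python SDK and a placeholder model name purely for illustration; the same pattern applies to any LLM that accepts an instructive system-style prompt.

```python
# Minimal sketch of invoking an AI persona (assumes the OpenAI Python SDK;
# swap in whichever LLM provider and model you actually use).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The persona is nothing more than an instructive prompt telling the AI who to pretend to be.
persona_instruction = (
    "Pretend to be a patient, encouraging high-school physics teacher. "
    "Stay in character and answer every question in that voice."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name for illustration
    messages=[
        {"role": "system", "content": persona_instruction},
        {"role": "user", "content": "Why does the sky look blue?"},
    ],
)

print(response.choices[0].message.content)
```

That single system-style instruction is the entire trick; everything else is an ordinary back-and-forth chat.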

The mental health domain has gradually been adopting and adapting the use of AI personas for a variety of notable purposes. One especially helpful purpose consists of training budding therapists by having them interact with AI personas. This is a safe space for them to try out their emerging skills as psychotherapists. Another vital use entails performing foundational research about the human mind in a simulated environment and performing crucial experiments on theories about psychological tendencies and reasoning.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well-over one hundred analyses and postings, see the link here and the link here.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes (see the link here).

Background On AI Personas

All the popular LLMs, such as ChatGPT, GPT-5, Claude, Gemini, Llama, Grok, and Copilot, contain a highly valuable piece of functionality known as AI personas. There has been a gradual and steady realization that AI personas are easy to invoke, can be used for lighthearted fun or for quite serious purposes, and offer immense educational utility.

Consider a viable and popular educational use for AI personas. A teacher might ask their students to tell ChatGPT to pretend to be President Abraham Lincoln. The AI will proceed to interact with each student as though they are directly conversing with Honest Abe.

How does the AI pull off this trickery?

The AI taps into the pattern-matching on data that occurred at initial setup, which likely encompassed biographies of Lincoln, his writings, and other materials about his storied life and times. ChatGPT and other LLMs can convincingly mimic what Lincoln might say, based on the patterns in his historical records.

If you ask AI to undertake a persona of someone for whom there was sparse training data at the setup stage, the persona is likely to be limited and unconvincing. You can augment the AI by providing additional data about the person, using an approach such as RAG (retrieval-augmented generation, see my discussion at the link here).
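As a rough illustration of the RAG idea, the sketch below folds a few retrieved background passages into the persona instruction. The naive word-overlap retriever, the hypothetical person, and the sample passages are stand-ins purely for illustration; a production setup would typically use vector embeddings and a retrieval index.

```python
# Highly simplified RAG-style augmentation for a persona with sparse training data.
# Assumes you have gathered source passages about the person; a real setup would
# use vector embeddings and a retrieval index rather than word overlap.

def retrieve(passages: list[str], query: str, k: int = 2) -> list[str]:
    """Pick the k passages that share the most words with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(query_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_persona_prompt(person: str, passages: list[str], query: str) -> str:
    """Fold the retrieved background material into the persona instruction."""
    context = "\n".join(retrieve(passages, query))
    return (
        f"Pretend to be {person}. Ground your answers in the background "
        f"material below and stay in character.\n\nBackground:\n{context}"
    )

# Example usage with purely hypothetical source material.
docs = [
    "Letter from 1891 describing her early schooling and apprenticeship.",
    "Diary entry recounting the founding of her clinic.",
    "Newspaper profile covering her later research efforts.",
]
prompt = build_persona_prompt(
    "Dr. Jane Doe (a hypothetical historical figure)",
    docs,
    "Tell me about founding your clinic.",
)
print(prompt)
```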

Personas are quick and easy to invoke. You just tell the AI to pretend to be this or that person. If you want to invoke a type of person, you will need to specify sufficient characteristics so that the AI will get the drift of what you intend. For prompting strategies on invoking AI personas, see my suggested steps at the link here.

Pretending To Be A Type Of Person

Invoking a type of person via an AI persona can be quite handy.

For example, I am a strident advocate of training therapists and mental health professionals via the use of AI personas (see my coverage on this useful approach, at the link here). Things go like this. A budding therapist might not yet be comfortable dealing with someone who has delusions. The therapist could practice on a person pretending to have delusions, though this is likely costly and logistically complicated to arrange.

A viable alternative is to invoke an AI persona of someone who is experiencing delusions. The therapist can practice and hone their therapy skills while interacting with the AI persona. Furthermore, the therapist can ramp up or down the magnitude of the delusions. All in all, a therapist can do this for as long as they wish, doing so at any time of the day and anywhere they might be.

A bonus is that the AI can afterward play back the interaction with another AI persona engaged; namely, the therapist could tell the AI to pretend to be a seasoned therapist. The therapist-pretending AI then analyzes what the budding therapist said and provides commentary on how well or poorly the newbie therapist did.
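A rough sketch of that kind of practice loop appears below. The 1-to-5 severity dial, the prompts, and the ask_llm helper are illustrative assumptions; plug in whichever LLM client you actually use.

```python
# Sketch of a practice session: a simulated client with adjustable delusion
# severity, followed by a "seasoned therapist" persona that debriefs the trainee.
# The ask_llm helper is a stand-in for whatever LLM API you actually call.

def ask_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for a real LLM call (e.g., via your provider's SDK)."""
    return f"[LLM reply given persona: {system_prompt[:40]}...]"

def client_persona(severity: int) -> str:
    # Severity is a made-up 1-to-5 dial for how pronounced the delusions are.
    return (
        f"Pretend to be a therapy client experiencing persecutory delusions at "
        f"severity {severity} on a 1-to-5 scale. Respond as that client would, "
        f"without breaking character."
    )

SUPERVISOR_PERSONA = (
    "Pretend to be a seasoned therapist supervising a trainee. Review the "
    "transcript provided and comment on what the trainee did well or poorly."
)

# Practice exchange with the simulated client.
trainee_remark = "It sounds like you feel watched. Can you tell me more about that?"
client_reply = ask_llm(client_persona(severity=3), trainee_remark)

# Afterward, play the exchange back to a supervisor persona for a debrief.
transcript = f"Trainee: {trainee_remark}\nClient: {client_reply}"
feedback = ask_llm(SUPERVISOR_PERSONA, f"Here is the transcript:\n{transcript}")
print(feedback)
```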

To clarify, I am not suggesting that a therapist would entirely do all their needed training using AI personas. Nope, that’s not sufficient. A therapist must also learn by interacting with actual humans. The use of AI personas would be an added tool. It does not entirely replace human-to-human learning processes. There are many potential downsides to relying too much on AI personas; see my cautions at the link here.

Going In-Depth On AI Personas

If the topic of AI personas interests you, I’d suggest you consider exploring my extensive and in-depth coverage of AI personas. As readers know, I have been examining and discussing AI personas since the early days of ChatGPT. New uses are continually being devised. Discoveries about the underlying technical mechanisms within LLMs are revealing more about how AI personas arise under the hood.

And the application of AI personas to the field of mental health is burgeoning. We are just entering into the initial stages of leaning into AI personas to aid the field of psychology. Lots more will arise as more researchers and practitioners realize that AI personas provide a wealth of riches when it comes to mental health training and conducting ground-breaking research.

Here is a selected set of my pieces on AI personas that you might wish to explore:

  • Prompt engineering techniques for invoking multiple AI personas, see my discussion at the link here.
  • Role of mega-personas consisting of millions or billions of AI personas at once, see my analysis at the link here.
  • Invoking AI personas that are subject matter experts (SMEs) in a selected or depicted domain of expertise, see my coverage at the link here.
  • Crafting an AI persona that is a simulated digital twin of yourself or someone else that you know or can describe, see my explanation at the link here.
  • Smartly tapping into massive-sized AI persona datasets to pick an AI persona suitable for your needs, see my indication at the link here.
  • Using multiple AI persona “therapists” to diagnose mental health disorders, see my discussion at the link here.
  • Toxic AI personas are revealed to produce psychological and physiological impacts on AI users, see my analysis at the link here.
  • Upsides and downsides of using AI personas to simulate the psychoanalytic acumen of Sigmund Freud, see my examples at the link here.
  • Getting AI personas to simulate human personality disorders, see my elaboration at the link here.
  • AI persona vectors are the secret sauce that can tilt AI emotionally, see my coverage at the link here.
  • Doing vibe coding by leaning into AI personas that have a particular software programming slant or skew, see my analysis at the link here.
  • Use of AI personas for role-playing in a mental health care context, see my discussion at the link here.
  • AI personas and the use of Socratic dialogues as a mental health technique, see my insights at the link here.
  • Leaning into multiple AI personas to create your own set of fake online adoring fans, see my coverage at the link here.
  • How AI personas can be used to simulate human emotional states for psychological study and insight, see my analysis at the link here.

Those cited pieces can rapidly get you up-to-speed. I am continually covering the latest uses and trends in AI personas, so be on the watch for my latest postings.

Stanford CREATE Webinar On AI Personas

The topic of AI personas in mental health was superbly articulated in a webinar on December 10, 2025, by Dr. Torrey Creed as part of the CREATE center at Stanford University. This talk was provocatively entitled “Simulated Humans in Psychotherapy Research: Where Did All the Humans Go?” and you can see a recorded video of the webinar at the CREATE website, see the link here.

CREATE is the Center for Responsible and Effective AI Technology Enhancement of PTSD Treatments. The group is funded by the National Institute of Mental Health (NIMH/NIH) and is a multi-disciplinary ALACRITY center that develops and evaluates LLM-based tools to support evidence-based mental health treatment implementation and quality.

The recently launched CREATE is co-directed by Stanford’s esteemed Dr. Shannon Wiltsey-Stirman, a professor in the Stanford School of Medicine’s Department of Psychiatry and Behavioral Sciences, and enterprising Dr. Johannes Eichstaedt, a Stanford faculty fellow at the Institute for Human-Centered AI (HAI) and assistant professor (research) of psychology in the School of Humanities and Sciences.

For those of you who might be interested in the exciting and innovative research underway at CREATE, you can visit their website at the link here. Handily, there are ongoing webinars featuring renowned experts who showcase their notable efforts to build, evaluate, and implement effective, ethical LLM-based tools to improve mental health treatment.

For my prior coverage of CREATE, take a look at the use of agentic AI for mental healthcare that was addressed in a CREATE webinar on November 5, 2025, see my discussion and analysis at the link here.

Webinar On AI Personas In Psychotherapy

In the webinar of December 10, 2025, Dr. Creed addressed the transformative shift that is arising as AI is increasingly being incorporated into psychotherapy training and research.

Per her bio, Dr. Torrey Creed is an Associate Professor at the University of Pennsylvania’s Perelman School of Medicine and founder of the Penn Collaborative for CBT and Implementation Science. Her work focuses on pragmatic, sustainable strategies to increase access to high-quality mental healthcare in low-resource contexts. Dr. Creed’s research addresses the fundamental challenge of scaling through the development and implementation of AI-based tools designed to facilitate therapist skill development, monitor practice fidelity, and enable efficient supervision. Her research also incorporates behavioral economics and natural language processing to enhance telehealth and culturally responsive care, reflecting her commitment to pragmatic and sustainable solutions in low-resource settings.

The talk by Dr. Creed emphasized the application of AI in training human therapists, evaluating their skillfulness in therapy sessions, and examining the vital need to evaluate AI-delivered treatment. Key challenges were surfaced, including benchmarking the potential harm and safety concerns of LLMs when interacting with vulnerable clients and assessing the quality of AI’s conversational behaviors.

An important cautionary note was that anyone seriously opting to use AI personas in this realm must give due consideration to prioritizing goals to avoid the “rabbit hole” of trying to simulate everything.

Highlights Of Imperative Points

I will briefly highlight some of the many insightful points made during the talk.

When aiming to use AI personas for psychotherapy training, the AI can be used to simulate variations, such as (per the talk):

  • Different therapist interventions
  • Different client reactions
  • Low base rate events
  • Different therapist styles or orientations
  • Different supervisor feedback
  • Entire treatment course alterations

Those are great points.

As I’ve noted in my writings, you can leverage AI personas in a variety of roles and scenarios when training therapists. One approach entails having the human therapist interact with an AI persona that acts as a client or patient. You tell the AI persona which type of client or patient is to be simulated.

Another approach involves having the therapist pretend to be a client while an AI persona acts as the therapist. This allows the therapist to see what it is like when the shoe is on the other foot. Doing so can be extremely enlightening for a nascent therapist.

Those are the core approaches, but there are many more.

For example, you can have an AI persona that serves as a therapeutic supervisor, providing tips and remarks to a human therapist who is engaged in doing therapy with an AI persona acting as a client. That’s two AI personas going on at once. The AI persona that is the supervisor can be acting in real-time to advise the human therapist, and/or the AI-based supervisorial role can be a debriefing mechanism after the practice therapy session is concluded.

You can mix and match in many ways. There might be an AI persona as a therapist, client, family members of the fake client, a supervisor, an evaluator, and so on. Invoking multiple personas at a time is perfectly fine. One big caution that I have repeatedly made is that when you use multiple AI personas in a therapy simulation, be aware that there is a solid chance the AI will assess its own roles in a biased, self-flattering manner.

Here’s what I mean. An LLM is often shaped to make itself look good. The computational underpinnings are tilted by the AI makers in that direction. If you ask an AI persona serving as a supervisor to critique a therapist AI persona running on the same underlying model, the odds are that the AI supervisor will play softball and not hardball. I often suggest that such setups use a separate LLM that is unlikely to carry that built-in bias. For example, if you use ChatGPT for the therapist AI persona and use, say, Claude for the supervisor AI persona, there is usually a better chance of getting a straight-ahead assessment.

To clarify, using disparate AIs can be helpful, but it also introduces other potential complications. One such complication is that if the other AI is informed or detects that another AI is being used, that too can bring out a bias (usually against the other AI). Be mindful of the approach you opt to use.
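For those who want to see what the cross-model setup might look like, here is a brief sketch that runs the therapist persona on one vendor’s model and the supervisor persona on another’s. It assumes the OpenAI and Anthropic Python SDKs with API keys set in the environment, and the model names are placeholders for illustration.

```python
# Sketch of cross-model evaluation: one LLM plays the therapist persona, a
# different vendor's LLM plays the supervisor, reducing the chance of
# self-flattering assessments.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()

def therapist_turn(client_message: str) -> str:
    """Therapist persona hosted on one model."""
    resp = openai_client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "Pretend to be a trainee CBT therapist."},
            {"role": "user", "content": client_message},
        ],
    )
    return resp.choices[0].message.content

def supervisor_review(transcript: str) -> str:
    """Supervisor persona hosted on a different vendor's model."""
    resp = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=500,
        system="Pretend to be a seasoned clinical supervisor. Critique the transcript candidly.",
        messages=[{"role": "user", "content": transcript}],
    )
    return resp.content[0].text

client_message = "I feel like everyone at work is against me."
therapist_reply = therapist_turn(client_message)
print(supervisor_review(f"Client: {client_message}\nTherapist: {therapist_reply}"))
```

The design choice is simply to keep the evaluator and the evaluated on different underlying models, which tends to blunt the self-flattering tilt noted above.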

The Fidelity Considerations

Dr. Creed was the lead author on a notable research study that explored the fidelity associated with using AI in these ways.

In a research paper entitled “Knowledge and Attitudes toward an Artificial Intelligence-Based Fidelity Measurement in Cognitive Behavioral Therapy Supervision” by Torrey A. Creed, Patty B. Kuo, Rebecca Oziel, Danielle Reich, Margaret Thomas, Sydne O’Connor, Zac E. Imel, Tad Hirsch, Shrikanth Narayanan, David C. Atkins, Administration and Policy in Mental Health and Mental Health Services Research, May 2022, these salient points were made (excerpts):

  • “Advances in artificial intelligence (AI), including natural language processing and machine learning, offer methods for recognizing patterns in spoken language that predict indicators of fidelity in therapy session recordings, without the rate-limiting factors associated with reliance on humans to review sessions.”
  • “The goal of this study was to understand how community mental health therapists and clinical leadership perceive AI-based automated evaluation and assessment as a supervision tool to provide feedback on CBT fidelity.”
  • “In sum, feedback suggested that community providers in this large public mental health system perceive an AI-based supervision platform for CBT to be acceptable, appropriate, and feasible, and that they have the infrastructure in place to use such a system.”
  • “While perceptions of the tool were overall positive, participants raised questions and concerns that should guide future tool development and strategies for its implementation.”

A rule of thumb that I have is that whatever is undertaken with AI personas in a mental health setting, the aim is to be true to what would be experienced in the real world with human clients and patients. If the AI personas are not acting or behaving as humans would, the therapists aren’t being trained in a proper way. Those therapists might misapply what they believe they learned during the use of the AI personas. Not good.

AI personas are not a silver bullet, nor are they a cure-all. Using them needs to be carefully calibrated, tested, and monitored. Don’t use an AI persona and walk away thinking that all is fine and dandy. Always keep your wits about you.

Vital Questions For Keen Pursuit

The talk brought up several vital questions that those studying and doing research on mental health are encouraged to consider pursuing, including (from the talk):

  • (1) What are the likely outcomes of a novel therapy?
  • (2) What is the causal mechanism of therapeutic change?
  • (3) What training sessions optimize therapy acquisition?
  • (4) What is the optimal therapist response in a given moment?
  • (5) How would altering a given technique, or its timing, change outcomes?

Each of those questions is well worth active research, and I will keep you informed as I continue my coverage on these evolving matters.

Those questions remind me of my favorite aspect about using AI personas for therapy training and research, namely, the capability of rewinding the clock.

Imagine this. A therapist is interacting with an AI persona. The therapist provides a therapeutic remark to the AI persona. Oops, the remark is off-base or inappropriate. If the client were a human being, the remark might stick with them forever. You can’t un-ring the bell, as the adage goes.

Aha, you can indeed un-ring the bell when using AI personas. You can tell the AI persona to forget that the remark was made (this partially works, but not fully). Or, you can enact a new path, known as non-linear branching in generative AI, which I’ve covered at the link here, and leave the old path as it is. It’s almost like you rewound the clock.

I bring up this facet since it dovetails nicely into the question about trying to ascertain the optimal therapist response in a given moment. You can readily rewind the clock and try something new. This can be used for training. I would also urge that this be used for conducting research. Intriguing research could simulate this clock rewinding at a large scale, doing so thousands or millions of times, and could reveal nuances about human therapy that could never readily be discovered at a small scale.
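Here is a tiny sketch of that rewinding in practice, with the conversation kept as a simple list of turns that gets forked just before the off-base remark. The ask_client_persona helper is a stand-in for a real call to the simulated-client persona.

```python
# Sketch of "rewinding the clock" via branching: keep the conversation history,
# fork it just before an off-base remark, and try an alternative response on the
# new branch while leaving the original branch intact.
import copy

def ask_client_persona(history: list[dict]) -> str:
    """Placeholder for sending the running history to the client AI persona."""
    return "[simulated client reply based on the conversation so far]"

# Original session: the third turn is an off-base therapist remark.
session = [
    {"role": "therapist", "text": "What brings you in today?"},
    {"role": "client", "text": "I can't stop worrying about losing my job."},
    {"role": "therapist", "text": "Well, maybe you should just stop worrying."},  # off-base
]

# Fork the history right before the problematic turn (the "rewind").
branch = copy.deepcopy(session[:2])
branch.append({"role": "therapist",
               "text": "That sounds exhausting. When does the worry feel strongest?"})
branch.append({"role": "client", "text": ask_client_persona(branch)})

# The original path is untouched, so the two branches can be compared side by side.
print(len(session), "turns on the original branch,", len(branch), "turns on the new branch")
```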

The World We Are In

Let’s end with a big picture viewpoint.

My view is that we are now in a new era of replacing the dyad of therapist-client with a triad consisting of therapist-AI-client (see my discussion at the link here). One way or another, AI enters the act of therapy. Savvy therapists are leveraging AI in sensible and vital ways. AI personas are handy for training and research. They can also be used to practice and hone the skills of even the most seasoned therapist. Of course, AI is also being used by and with clients, and therapists need to identify how they want to manage that sort of AI usage (see my suggestions at the link here).

A final thought for now.

The Greek philosopher Heraclitus famously said this remark about change: “There is nothing permanent except change.” The mental health field is undergoing enormous change. AI is a disruptor. Therapists and researchers in psychology are faced with AI and change. You will either bend with the wind or break. Deciding which is your choice.
