
Texas AI Law Gets Underway With Stern Provisions To Stop The Manipulation Of Human Behavior By AI


In today’s column, I examine a new AI law in Texas that was passed last year and is now getting underway as we enter 2026. The AI law, known as TRAIGA, the Texas Responsible Artificial Intelligence Governance Act, is rather comprehensive and covers a wide variety of potential AI issues. I aim to focus on the legal restrictions associated with the manipulation of human behavior by AI and AI makers.

Does the Texas AI law go far enough, or are there sneaky loopholes and discernible omissions?

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.

Background On AI For Mental Health

I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of whom dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for free or at a very low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith, on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August 2025 accompanied the lawsuit filed against OpenAI over its lack of AI safeguards when it came to providing cognitive advisement.

Despite claims by AI makers that they are gradually instituting AI safeguards, there are still plenty of downside risks of the AI doing untoward acts, such as insidiously helping users co-create delusions that can lead to self-harm. For my follow-on analysis of the OpenAI lawsuit and how AI can foster delusional thinking in humans, see the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

The Current Situation Legally

Some states have already opted to enact new laws governing AI that provides mental health guidance. For my analysis of the AI mental health law in Illinois, see the link here; for the law in Utah, see the link here; and for the law in Nevada, see the link here. There will be court cases that test those new laws. It is too early to know whether the laws will stand as is and survive legal battles waged by AI makers.

Congress has repeatedly waded into establishing an overarching federal law that would encompass AI that dispenses mental health advice. So far, no dice. The efforts have ultimately faded from view. Thus, at this time, there isn’t a federal law devoted to these controversial AI matters per se. I have laid out an outline of what a comprehensive law on AI and mental health ought to contain, or at least give due consideration to; see my analyses at the link here and the link here.

The situation currently is that only a handful of states have enacted new laws regarding AI and mental health; most states have not yet done so, though many are toying with the idea. Additionally, state laws are being enacted that deal with child safety when using AI, aspects of AI companionship, extreme sycophancy by AI, and so on, all of which, though not necessarily deemed mental health laws per se, certainly pertain to mental health. Meanwhile, Congress has also ventured into the sphere, with a much larger aim at AI for all kinds of uses, but nothing has gotten past a formative stage.

That’s the lay of the land right now.

Texas Enacts Big-Time AI Law

I’d like to focus on a recent AI law that was signed into law in Texas on June 22, 2025, and has just now gone into effect as of January 1, 2026. The law has a catchy moniker, TRAIGA, and is formally referred to as the Texas Responsible Artificial Intelligence Governance Act.

This is one of those rather comprehensive AI laws that cover a wide gamut of AI aspects. Some AI laws are very wide, while others are quite narrow. For example, the Illinois law that I mentioned above is narrow and aims solely at AI and mental health considerations. TRAIGA is quite broad by comparison.

First, it encompasses private actors such as AI makers and those fielding AI, plus it includes regulatory aspects concerning various governmental entities in Texas. The attorney general of Texas is given the authority to enforce the new AI law. Importantly, the AI law includes some safe harbors and affirmative defenses that essentially carve out exceptions. For example, someone who violates the AI law while testing an AI system might be able to claim an exception to the regulatory scope.

The mainstay of the AI law is twofold: it sets boundaries on AI and the use of biometric data such as fingerprints, retina scans, and voiceprints (this portion is aimed at government entities, not necessarily private sector organizations), and it prohibits using AI in untoward ways. If AI is used to “infringe, restrict, or otherwise impair an individual’s rights guaranteed under the U.S. Constitution,” the new AI law establishes potential penalties. Curable violations carry a civil penalty of $10,000 to $12,000 per violation, while uncurable violations carry a civil penalty in the range of $80,000 to $200,000 per violation.

Unpacking Portions Of The Texas AI Law

I will go ahead and unpack selected portions of the AI law. If you are interested in the full text, it is posted online as Texas House Bill 149 (HB 149), which was passed by the Texas 89th Legislature and signed into law on June 22, 2025.

Let’s begin by examining Subtitle D, Chapter 551, Section 551.001, which contains the definition of AI:

  • “(1) ‘Artificial intelligence system’ means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

One of the most vexing parts of any AI law is determining the scope of automated systems and technology construed as falling within its purview. This boils down to how the AI law opts to define AI.

I’ve pointed out repeatedly that trying to nail down what is meant by referring to AI is a much harder legal problem than it might seem at first glance (see the link here). If the definition of AI is broad, all kinds of potentially non-AI systems will fall into the scope, which is presumably unintended. When the definition of AI is too narrow, all sorts of AI systems that should be covered can attempt to slip out of the laws by claiming that they aren’t within the stipulated scope.

The AI definition used in this instance is one of the broader versions. We do not yet know on a legal basis how the courts will opt to interpret the definitional aspects of these broader definitions. In any case, makers of non-AI systems could potentially be squeezed into this definition, so all software and systems developers should be mindful of whether their automation could fall into this zone.

Jurisdictional Scope

Another very important component of any AI law is the jurisdictional scope. In the case of states enacting AI laws, they are conventionally limited to governing only AI activity that arises within their geographic boundaries. They cannot reach out to other states and place limits there, per se. That being said, if an AI system is housed in one state and is made available for use in a different state that has an AI law, the usage would normally fall within the purview of that second state’s AI law.

Here is what Section 551.002 on applicability has to say:

  • “This subtitle applies only to a person who: (1) promotes, advertises, or conducts business in this state; (2) produces a product or service used by residents of this state; or (3) develops or deploys an artificial intelligence system in this state.”

I earlier noted that the Illinois AI law jurisdictionally covers the use of AI while in Illinois, and likewise for the other respective states. The takeaway is that an AI maker with AI housed in, say, California, is not off the hook if that AI is available for use in Texas. The maker would come under this AI law.

Global AI makers will need to keep this crucial point in mind.

Stated Purpose Of The AI Law

It is helpful for AI laws to clarify what the intention of the law is. I mention this because some AI laws just leap into the details of whatever scope and violations they are interested in covering. There isn’t an explicit callout of why the law was devised and enacted. I contend that it is exceedingly useful for those writing these laws to take a moment and mindfully explain what the overarching goal or intention of their new AI law purports to be.

In Section 551.003, here is the stated purpose of the new AI law:

  • “This subtitle shall be broadly construed and applied to promote its underlying purposes, which are to: (1) facilitate and advance the responsible development and use of artificial intelligence systems; (2) protect individuals and groups of individuals from known and reasonably foreseeable risks associated with artificial intelligence systems; (3) provide transparency regarding risks in the development, deployment, and use of artificial intelligence systems; and (4) provide reasonable notice regarding the use or contemplated use of artificial intelligence systems by state agencies.”

That stated purpose is relatively straightforward and provides handy context for what the law is about.

Some legal beagles will argue that the downside of having a stated purpose is that an alleged violator of the law might try to use the stated purpose in their defensive posturing. They might cleverly argue that the provisions do not befit the stated purpose and thus that they should not be held to the letter of the law. Yes, that’s always a potential concern, but I vote that the utility of an explicitly articulated purpose accrues to society at large, and the risk of such a legal ploy is not large enough to warrant omitting a purpose altogether (well, that’s just a layman’s opinion).

The AI Mental Health Aspects

I pointed out earlier that the Texas AI law is relatively broad. It isn’t specifically about AI and mental health. Nonetheless, there is a particular provision that brings the AI law into the domain of AI and mental health.

Consider this provision in Section 552.052 restricting the manipulation of human behavior:

  • “A person may not develop or deploy an artificial intelligence system in a manner that intentionally aims to incite or encourage a person to: (1) commit physical self-harm, including suicide; (2) harm another person; or (3) engage in criminal activity.”

This stipulation is obviously quite short and not at all like the lengthier provisions in AI laws that focus on mental health. I’ve urged that policymakers and lawmakers who write AI laws should consider reusing or recasting the legal provisions of laws already passed, including those in other states, in an effort to make their law more exhaustive and complete. For example, this Texas AI law could have repurposed some of the lengthier and more comprehensive provisions in the AI laws of Illinois, Utah, Nevada, etc.

A counter position is that if an AI law is intended to be broad, it ought to avoid getting bogged down in narrow areas of AI. An extraordinarily lengthy AI law might be harder to understand and implement. Keep things simple.

A problem with being overly simple is that there are almost certainly ambiguities and omissions that leave gaps for potential violators to slip through. One belief is that laws should be long enough to carefully stipulate all sorts of twists and turns. Natural languages such as English are replete with semantic ambiguity, and the best way to deal with this is to be as detailed and lengthy as needed. The other side of that coin is that lengthy language might itself contain mistakes and open the door to interpretations that undercut the law, piling semantic ambiguity upon semantic ambiguity.

The World We Are In

Let’s end with a big picture viewpoint.

It is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI, which is either overtly or insidiously acting to provide mental health guidance of one kind or another, is being made available nationally and globally, doing so at no cost or a minimal cost, anywhere and at any time, 24/7. We are all guinea pigs in this wanton experiment.

We need to decide whether we need new laws, can employ existing laws, or both, to stem the potential tide of adverse impacts on society-wide mental health. The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed: prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.

A final thought for now.

The renowned American social reformer Henry Ward Beecher made this famous remark about laws: “A law is valuable not because it is law, but because there is right in it.” The question we currently face is whether we need these new AI laws or whether we should proceed ahead without them, lest such laws delay or curtail the rapid pace of AI innovation. It all depends on Beecher’s keystone: is there right in it or not?

You be the judge.
