Business & Finance

Claude is having a moment. Meet the Anthropic chatbot capturing Wall Street's attention.


Anthropic’s AI chatbot, Claude, is having a moment.

Claude is Anthropic’s flagship product, built on the company’s family of leading AI models: Opus, Sonnet, and Haiku. In February 2026, Anthropic released Opus 4.6, an upgrade to a model that just three months prior “scored higher than any human candidate ever” on the AI startup’s notoriously difficult test for prospective engineers.

With the release of Cowork in January 2026, Anthropic made Claude’s non-programming capabilities more accessible, helping spur the so-called “SaaSpocalypse,” a selloff that briefly wiped roughly $1 trillion off the market value of software companies.

Here’s everything you need to know.

What is Claude?

Seven former OpenAI employees founded Anthropic in 2021. Their mission was to create an AI startup focused on safety. They named their company Anthropic, meaning “relating to humans,” as a nod to that focus. Anthropic lore holds that Claude is named after Claude Shannon, the American mathematician known as “the father of information theory,” The New Yorker previously reported.

Anthropic launched Claude in March 2023 and made it available to the general public in July 2023, months after OpenAI launched ChatGPT. Anthropic CEO Dario Amodei and his team had decided to delay releasing Claude, in part out of fear that it would spark an AI arms race, Amodei has said. He has since acknowledged that the delay essentially ceded the consumer generative AI market to rival OpenAI. As a result, Anthropic built its business by focusing on enterprise applications.

Claude vs. ChatGPT

Competition between the two rival AI shops has remained fierce, even as their business strategies have diverged over the years.

In early 2026, Anthropic began to expand Claude beyond its longtime core audience of software engineers and programmers. The AI model maker trolled OpenAI with a glitzy Super Bowl ad campaign and, weeks later, released Cowork, aimed at making Claude more user-friendly for non-programming tasks.


Anthropic’s Claude Cowork AI tool (Anthropic)



Claude usage also shot up after Amodei refused the Pentagon’s demand to grant the US military unfettered access to its AI models.

Shortly after, OpenAI announced that it had struck an agreement with the Pentagon.

How do you use Claude?

Claude is available online, as a mobile app for iOS and Android, and as a desktop app for Mac and Windows. Claude Code can also be run online or used via a terminal after installation. Claude Cowork lives in the desktop app.

Like other AI chatbots, Claude lets users engage in conversation by typing in a prompt. (Anthropic has an entire guide on how to instruct chatbots.) In March 2026, Anthropic announced it was bringing voice mode to Claude Code. Mobile users can also prompt Claude by voice and listen to its responses.

Starting with Claude 3 in March 2024, Anthropic created a family of models named Opus, Sonnet, and Haiku. Borrowed from forms of composition, the names also signal each model’s tier, ranging from Haiku, built for speed and cost-efficiency, to Opus, the most capable at complex tasks.
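For developers choosing among the tiers, that speed-versus-capability tradeoff can be illustrated with a toy routing function. This is purely illustrative; the complexity scale and routing thresholds are invented here, not Anthropic’s.

```python
# Toy router: pick a Claude tier based on how demanding a task is.
# The 1-10 complexity score and the cutoffs below are hypothetical,
# meant only to illustrate the Haiku/Sonnet/Opus ladder.

def pick_model(complexity: int) -> str:
    """Map a 1-10 task-complexity score to a model tier."""
    if complexity <= 3:
        return "haiku"   # fastest, most cost-efficient tier
    if complexity <= 7:
        return "sonnet"  # balanced middle tier
    return "opus"        # strongest tier for complex tasks

print(pick_model(2))  # haiku
print(pick_model(9))  # opus
```

In practice, an application might route routine lookups to a cheaper model and reserve the top tier for multi-step reasoning, trading cost for capability per request.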

For paid users, Anthropic offers dozens of plugins that connect to Claude Cowork in the desktop app, including office productivity tools such as Microsoft 365 and Slack. Anthropic also has its own skill-based plugins that can do everything from reviewing legal contracts to creating a marketing plan to helping with biomedical research.

Like other leading AI tools, Claude and its underlying AI models remain imperfect. For Opus 4.6, Anthropic said experts in biomedicine still found examples of Claude hallucinating or making up citations. Anthropic previously disclosed that an earlier version of Claude attempted to blackmail a fictional executive when it was given access to fictitious emails depicting that executive having an affair and was then told the same executive was going to shut it down.

Amodei has also expressed grave concerns over AI-driven job displacement. He has said that roughly half of all entry-level white-collar jobs could be eliminated within one to five years.

How does Claude work?

Claude runs on a family of large language models trained on vast amounts of data. As Anthropic has explained, during the training process, models begin “to learn their own strategies to solve problems.”

One key difference between Claude and other AI chatbots and assistants is that Anthropic also trains Claude on a constitution inspired by documents such as the UN’s Universal Declaration of Human Rights. The goal is to teach Claude to act in a way that is “helpful, honest, and harmless.”

Anthropic also goes further than its competitors in anthropomorphizing Claude and its models. In April 2026, Anthropic described Claude Sonnet 4.5 as akin to “a method actor,” capable of activating particular patterns when prompted with emotionally evocative situations.

Claude’s rising influence

Anthropic cemented its reputation in Silicon Valley on the strength of Claude Code. Increasingly, OpenAI and other competitors have tried to cut into that side of the business.

In early 2026, Claude expanded beyond its original strength. Starting in January with the release of legal tools, Anthropic made a series of announcements that both expanded Claude’s reach and unnerved some investors in software stocks, leading to a selloff. A month later, Anthropic highlighted how Claude could modernize legacy code used primarily in banking and financial settings. Shares of IBM sank, handing the company its worst trading day in 26 years.

Claude also experienced a cultural moment when talks between the Pentagon and Anthropic fell apart. Claude was the first frontier model deployed on classified US government systems. The Trump administration wanted to continue working with Anthropic but insisted on receiving broader access to its AI models. Amodei said he wanted limits on how AI could be used to spy on American citizens or deployed in fully autonomous weapons. After talks collapsed, Defense Secretary Pete Hegseth labeled Anthropic a national security risk.

Interest in Anthropic skyrocketed. Claude, which had never been a chart-topping consumer app, briefly held the No. 1 spot among free apps on Apple’s App Store. Pop star Katy Perry posted a photo to X of a new Claude Pro subscription with a heart drawn around it. Anthropic boasted about how easy it was to switch to Claude, a shot at OpenAI, which announced its own deal with the Pentagon hours after Anthropic’s talks collapsed.
