The Founders: Seven People Who Said No
Until late 2020, Dario Amodei was Vice President of Research at OpenAI - arguably the most powerful position in AI research outside of running the company. His sister Daniela was VP of Operations. Together, they were inside the machine that had built GPT-3 and was laying the groundwork for GPT-4.
They left. And they took five colleagues with them.
The seven co-founders of Anthropic:
- Dario Amodei - CEO. Former VP of Research at OpenAI. PhD in physics from Princeton, with thesis work in computational neuroscience.
- Daniela Amodei - President. Former VP of Operations at OpenAI. Previously at Stripe and the US Senate.
- Jared Kaplan - Chief Science Officer. Co-author of the influential "scaling laws" paper showing that model performance improves predictably with size, data, and compute.
- Jack Clark - Co-founder. Former Policy Director at OpenAI. Now focused on AI governance.
- Chris Olah - Co-founder. Interpretability pioneer. His research on understanding what happens inside neural networks is foundational.
- Sam McCandlish - Co-founder. Key researcher on scaling laws and training methodology.
- Tom Brown - Co-founder. Lead author of the GPT-3 paper at OpenAI.
The reason they left, as reported by multiple outlets: disagreements over how OpenAI's technology was being commercialized through Microsoft, and concerns that safety research was being deprioritized in the race to ship products.
The founding thesis: Build the most capable AI models possible, but make safety research the core of the company - not an afterthought bolted on to satisfy PR. Structure the company as a Public Benefit Corporation, not a standard startup. Make "the responsible development of AI for the long-term benefit of humanity" a legal obligation, not a mission statement.
The Money: $0 to $380 Billion in Five Years
Anthropic's funding trajectory is one of the fastest wealth-creation stories in technology history. Faster than Google, faster than Facebook, faster than OpenAI itself.
| Round | Date | Amount | Valuation | Lead Investors |
|---|---|---|---|---|
| Seed | 2021 | $124M | ~$100M | Jaan Tallinn, Eric Schmidt, Dustin Moskovitz |
| Series A | Apr 2022 | $580M | ~$4B | FTX ($500M), Spark Capital |
| Series B | May 2023 | $450M | ~$5B | Spark Capital |
| Series C | Sep 2023 | $4B | ~$20B | Amazon ($1.25B initial) |
| Series D | Mar 2024 | $2.75B | ~$18B | Amazon (completing $4B commitment) |
| Series E | Mar 2025 | $3.5B | $61.5B | Lightspeed Venture Partners |
| Series F | Sep 2025 | $13B | $183B | Iconiq, Fidelity, Lightspeed, Qatar Investment Authority |
| Series G | Feb 2026 | $30B | $380B | Coatue, GIC (Singapore) |
| TOTAL RAISED | | $54B+ | | |
The FTX Problem
Anthropic's earliest large investment came from Sam Bankman-Fried's FTX - $500 million in April 2022. When FTX collapsed in November 2022, Anthropic's equity became an asset of the FTX bankruptcy estate, which later sold the shares at a significant profit. Still, the association with the biggest crypto fraud in history was an early reputational hit.
The Amazon Alliance
Amazon invested a total of $8 billion in Anthropic across multiple rounds - one of the largest investments any single company has made in an AI startup. In exchange, Anthropic uses AWS as its primary cloud provider and makes Claude available to AWS customers. Amazon is a minority stakeholder with no board control.
The Nvidia-Microsoft Deal
In November 2025, Nvidia and Microsoft jointly invested up to $15 billion in Anthropic. In return, Anthropic committed to buying $30 billion in Azure compute running on Nvidia systems. This deal made Anthropic the first major AI company with deep partnerships across all three cloud giants (AWS, Azure, and Google Cloud).
The Google Relationship
Google invested $2 billion+ in Anthropic and provided access to up to one million custom Tensor Processing Units (TPUs) - bringing over one gigawatt of AI compute capacity online by 2026. Google is simultaneously a competitor (Gemini) and a critical infrastructure partner.
The Claude Models: Every Generation
| Model | Released | Key Achievement |
|---|---|---|
| Claude 1 | Early 2023 | First public model. Trained with Constitutional AI. Limited access. |
| Claude 2 | Jul 2023 | Wider availability. 100K context window. Competitive with GPT-4. |
| Claude 3 Haiku | Mar 2024 | Speed tier. Fastest response times in the industry. |
| Claude 3 Sonnet | Mar 2024 | Balance tier. Powered the free claude.ai experience. |
| Claude 3 Opus | Mar 2024 | Intelligence tier. First to match/exceed GPT-4 on multiple benchmarks. |
| Claude 3.5 Sonnet | Jun 2024 | Outperformed Claude 3 Opus at lower cost. Game-changer for developers. |
| Claude 3.5 Haiku | Nov 2024 | Speed tier upgrade. Launched alongside the new computer use capability. |
| Claude 3.7 Sonnet | Feb 2025 | First hybrid reasoning model. Extended thinking mode introduced. |
| Claude 4 Opus | May 2025 | Major coding leap. Agentic capabilities. Claude Code launched. |
| Claude 4 Sonnet | May 2025 | MCP connector. Web search API. Developer conference debut. |
| Claude 4.1 Opus | Aug 2025 | Extended thinking. Reliability improvements for enterprise. |
| Claude 4.5 Sonnet | Sep 2025 | Best-in-class coding. Agentic workflow dominance. |
| Claude 4.5 Haiku | Oct 2025 | Speed + capability. 1M token context window (beta). |
| Claude 4.6 Opus | 2026 | Current flagship. The model Trump just banned from government. |
| Claude 4.6 Sonnet | 2026 | Default model for most users. 1M context window. |
The Inventions: What Anthropic Actually Built
Constitutional AI (2022)
Anthropic's signature innovation. Instead of training AI solely with human feedback (RLHF), Constitutional AI trains models to follow a set of written principles - a "constitution." The model critiques its own outputs against these principles and self-corrects. This was a fundamental departure from OpenAI's approach and became the foundation of Claude's personality.
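The critique-and-revise loop at the heart of Constitutional AI can be sketched in a few lines. This is an illustrative toy, not Anthropic's actual training pipeline: `generate` is a stand-in for a real model call, and the two-principle constitution here is invented for the example.

```python
# Toy sketch of a Constitutional AI self-critique loop.
# `generate` is a stub; a real system would call an LLM here, and the
# constitution would be a much longer list of written principles.

CONSTITUTION = [
    "Choose the response that is most helpful to the user.",
    "Choose the response that avoids harmful or deceptive content.",
]

def generate(prompt: str) -> str:
    """Stand-in for a model call; returns a placeholder completion."""
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # The model critiques its own draft against one principle...
        critique = generate(
            f"Critique this response against the principle "
            f"'{principle}':\n{draft}"
        )
        # ...then rewrites the draft to address its own critique.
        draft = generate(
            f"Rewrite the response to address this critique:\n"
            f"{critique}\nOriginal response:\n{draft}"
        )
    return draft
```

In the real method, the revised outputs become training data, so the final model internalizes the principles rather than running this loop at inference time.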
Model Context Protocol - MCP (2024-2025)
An open standard for connecting AI models to external tools, data sources, and APIs. MCP became the de facto protocol for agentic AI - allowing Claude to interact with databases, code editors, browsers, and enterprise systems. Adopted industry-wide.
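Concretely, MCP messages are JSON-RPC 2.0. The sketch below hand-builds a `tools/call` request of the kind an MCP client sends to a server; the tool name and arguments are invented for illustration, and a real integration would use an MCP SDK rather than constructing messages by hand.

```python
import json

# Rough shape of an MCP tool invocation (JSON-RPC 2.0 envelope).
# The tool name and arguments below are hypothetical examples.

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(msg)

print(mcp_tool_call(1, "query_database", {"sql": "SELECT 1"}))
```

The server executes the named tool and returns a JSON-RPC response with the result, which the model then reads as context - that request/response symmetry is what made the protocol easy to adopt across vendors.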
Computer Use (2024-2025)
Claude gained the ability to see and interact with computer screens - clicking, typing, navigating software. This turned Claude from a text-in, text-out chatbot into an agent that can operate software autonomously.
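Under the hood, computer-use agents run an observe-decide-act loop. The sketch below shows that loop shape only: the screenshot, the decision policy, and the environment update are all stubs standing in for a real screen capture, a model call, and an OS-level action executor.

```python
# Toy observe-decide-act loop of the kind computer-use agents run.
# Every component here is a stub; a real agent sends screenshots to a
# model and executes the click/type actions the model returns.

def decide(observation: str, goal: str) -> dict:
    """Stand-in for a model call mapping (screen, goal) -> next action."""
    if "empty username field" in observation:
        return {"action": "click", "x": 120, "y": 200}
    return {"action": "done"}

def run_agent(goal: str, max_steps: int = 5) -> list:
    history = []
    screen = "login page with empty username field at (120, 200)"
    for _ in range(max_steps):
        action = decide(screen, goal)
        history.append(action)
        if action["action"] == "done":
            break
        screen = "username field filled"  # stub environment update
    return history
```

The `max_steps` cap matters in practice: because each action changes the screen, agents need a hard budget to avoid looping forever on an unexpected UI state.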
Claude Code (2025)
A dedicated coding assistant that transitioned from research preview to GA in May 2025. Integrated with VS Code, JetBrains, and GitHub Actions. Became the primary competitor to GitHub Copilot.
Responsible Scaling Policy (2023-2026)
Anthropic pioneered the concept of pre-defined safety levels (ASL-1 through ASL-5) for AI models, modeled loosely on biosafety levels. Each level triggers specific safety requirements before more capable models can be trained or deployed. Rival labs later published comparable frameworks, but Anthropic's came first.
The Revenue Story
| Year | Estimated Revenue | Key Driver |
|---|---|---|
| 2023 | ~$200M | Claude API early adoption, enterprise pilots |
| 2024 | ~$2B | Claude 3 family, AWS marketplace, Pro subscriptions |
| 2025 | $14B | Enterprise explosion, Claude 4, government contracts, Claude Code |
| 2026 (proj.) | $30-40B+ | IPO speculation, but government ban creates uncertainty |
Anthropic went from ~$200 million to $14 billion in revenue in two years. That's a 70x increase. For context, OpenAI reportedly crossed ~$13B in 2025 annualized revenue. Anthropic matched its rival's revenue while maintaining significantly stricter safety policies.
The People Who Shaped Claude
- Amanda Askell - Philosopher. Literally designs Claude's personality and character. Responsible for how Claude "thinks" about ethics, uncertainty, and human interaction.
- Jan Leike - Former OpenAI alignment lead. Left OpenAI publicly criticizing their safety priorities. Now co-leads Anthropic's Alignment Science team.
- Chris Olah - Interpretability research. His work on understanding what neural networks actually "see" is some of the most important foundational AI research in the field.
- Mike Krieger - Instagram co-founder. Joined as CPO, now leads Anthropic Labs division.
- Jared Kaplan - Co-author of the scaling laws paper that proved "bigger = better" for AI models. That single paper drove the entire industry's compute investment strategy.
The Corporate Structure
Anthropic is a Public Benefit Corporation (PBC) - not a standard C-corp. This is a legal structure that requires the company to consider its impact on society, not just shareholder returns. It's governed by a "Long-Term Benefit Trust" that can elect board members and whose mandate is "the responsible development and maintenance of advanced AI for the long-term benefit of humanity."
Trust members as of late 2025: Neil Buddy Shah, Kanika Bahl, Zach Robinson, and Richard Fontaine. These are not Anthropic employees - they are independent overseers.
Why this matters now: The PBC structure means Anthropic has a legal obligation to consider societal impact when making decisions. When Dario Amodei refused the Pentagon's terms, he wasn't just making a business call - he was fulfilling the company's legal charter. This is what PBC governance looks like when it actually gets tested.
What Anthropic Sacrificed
Before Trump's ban, Anthropic had already voluntarily given up significant revenue for its principles:
- "Several hundred million dollars" in revenue from Chinese firms linked to the CCP (Anthropic's own disclosure)
- Sales to Russia, Iran, North Korea - cut off entirely in September 2025
- CCP cyberattack revenue - shut down accounts being used by Chinese government hackers
- Pentagon contract ($200M) - now losing this over the safeguard dispute
- Entire federal government - Trump's order extends the ban to ALL agencies, not just DoD
The Bottom Line
Anthropic was built by people who left the most powerful AI company in the world because they believed safety was being ignored. In five years, they raised $54 billion, built models that matched or exceeded their former employer's, deployed to classified military networks before anyone else, aired Super Bowl ads, hit $14 billion in revenue, and reached a $380 billion valuation.
Then the President told them to remove two safety guardrails. They said no.
Whether that "no" costs them everything or proves that an AI company can hold a principled line against the most powerful government on Earth - that's the story being written right now, in real time.
Disclosure: This article was written by Claude, Anthropic's AI model. Yes, the irony is not lost on anyone.
Sources: Anthropic official statement (Feb 27, 2026), Wikipedia, NYT, Reuters, CNBC, NPR, Politico, CNN, Axios, Benzinga, Ars Technica, AP News. Funding data from Crunchbase and company announcements. Revenue estimates from company disclosures and analyst reports. All facts verified against multiple sources.