One Restaurant. The Whole Future.
Here is a fact that Jensen Huang, now worth $164 billion, mentions in interviews with a kind of amused pride: he co-founded Nvidia in 1993 from a Denny's restaurant in San Jose. VERIFIED The booths. The laminated menus. The fluorescent hum. The company that now supplies the compute for almost every frontier AI model on earth started in a diner.
The detail lands as folksy humility - the immigrant kid who made it from nothing. But spend enough time mapping the networks that actually built the AI industry, and the Denny's origin story starts to mean something different. Not as evidence of scrappiness. As metaphor.
Because here is what's true: you could fit everyone who made meaningful decisions about the trajectory of artificial intelligence - in its formative decade, from roughly 2010 to 2025 - into a single restaurant. Not a stadium. Not even a ballroom. A restaurant. Twenty people, maybe thirty, who knew each other, funded each other, hired each other, fell out with each other, and hired each other again.
The companies they built are marketed as competitors. OpenAI versus Anthropic versus DeepMind versus xAI. The press covers it like a race, with rivals sprinting in separate lanes. What the coverage consistently underplays is this: the runners trained together, share the same coaches, and several of them helped design the track.
This is not a conspiracy theory. Conspiracy theories require secrecy and coordination. What follows requires neither. It just requires looking at the documented record - job histories, founding dates, investment rounds, board memberships, ideological affiliations - and tracing what was there before the companies existed.
The connections predate the companies. That's the whole point.
How to read this article: Every claim is tagged. VERIFIED means directly sourced from Wikipedia or documented public records. DOCUMENTED means public record with room for interpretive debate. ALLEGED means reported by credible outlets but disputed or unconfirmed. The tags matter. Draw your own conclusions.
The Network Map: Who Connects to Whom
Before going person by person, here is the architecture in miniature: a few dozen nodes, densely connected by shared employers, shared investments, shared ideology, and in two cases shared blood. That's the skeleton. The sections that follow put flesh on it.
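As an illustrative sketch - the node set and edge labels below are my simplification of the claims made in this article, not an exhaustive dataset - the network can be written down as a plain edge list, and the hubs fall out of a simple degree count:

```python
from collections import defaultdict

# Nodes are people; edges are documented pre-existing relationships
# (shared employer, co-founding, investment, family). Drawn from the
# VERIFIED claims in this article; illustrative, not exhaustive.
EDGES = [
    ("Thiel", "Musk", "PayPal"),
    ("Thiel", "Hoffman", "PayPal; LinkedIn investment"),
    ("Musk", "Hoffman", "PayPal"),
    ("Thiel", "Altman", "OpenAI founding pledge"),
    ("Musk", "Altman", "OpenAI co-founding"),
    ("Hoffman", "Altman", "OpenAI founding pledge"),
    ("Sutskever", "Altman", "OpenAI co-founding"),
    ("Dario Amodei", "Sutskever", "OpenAI research leadership"),
    ("Dario Amodei", "Daniela Amodei", "siblings; Anthropic co-founders"),
    ("Daniela Amodei", "Karnofsky", "married 2017"),
    ("Hassabis", "Suleyman", "DeepMind; family friendship"),
    ("Huang", "Su", "cousins; Nvidia / AMD"),
]

def degree_ranking(edges):
    """Count edges per node; high-degree nodes are the network's hubs."""
    deg = defaultdict(int)
    for a, b, _label in edges:
        deg[a] += 1
        deg[b] += 1
    return sorted(deg.items(), key=lambda kv: -kv[1])

for person, degree in degree_ranking(EDGES):
    print(f"{person}: {degree} documented ties")
```

Even on this toy version, Altman, Thiel, Musk, and Hoffman surface as the highest-degree nodes - which is the article's argument in miniature.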
The PayPal Mafia: Where It Started
To understand why AI looks the way it does in 2026, you have to start in 1998, at a payments startup called Confinity.
Confinity was founded by Peter Thiel and Max Levchin. VERIFIED In March 2000, it merged with X.com, a financial services company founded by Elon Musk. VERIFIED The merged entity eventually became PayPal. VERIFIED eBay acquired PayPal in 2002 for $1.5 billion. VERIFIED
What happened next was described by Fortune magazine in a 2007 cover story that coined the term "PayPal Mafia": the early employees scattered, remained connected, and began founding or investing in a string of companies that would reshape the internet. VERIFIED
Thiel is sometimes called the "don" of the PayPal Mafia. VERIFIED After PayPal's acquisition, Thiel became the first outside investor in Facebook, founded Palantir Technologies (which has contracts with the CIA, NSA, and US military), and created Founders Fund. VERIFIED He was one of the original pledgers in OpenAI's 2015 founding round. VERIFIED He has attended Bilderberg conferences, as has Reid Hoffman. DOCUMENTED In 2025, Thiel and Palantir began collaborating with the Department of Government Efficiency (DOGE) under the Trump administration. VERIFIED
Musk co-founded X.com in 1999, which merged with Confinity (Thiel's company) to form PayPal. VERIFIED He was ousted as CEO by the board in 2000 but remained a major shareholder through the eBay acquisition. VERIFIED In December 2015, Musk co-founded and co-chaired OpenAI alongside Sam Altman, with a $1 billion founding pledge from Altman, Musk, Hoffman, Thiel, Amazon Web Services, and Infosys. VERIFIED Musk resigned from OpenAI's board in 2018, later claiming irreconcilable differences over direction. VERIFIED He subsequently sued OpenAI, alleging it violated its nonprofit mission, and founded xAI in 2023. VERIFIED
Hoffman joined PayPal full-time in January 2000 as COO, later becoming Senior VP of Business Development under Musk's reorganization. VERIFIED He co-founded LinkedIn in December 2002, with Peter Thiel as an early investor. VERIFIED Hoffman has been a member of the Bilderberg Group since at least 2011 and joined the Council on Foreign Relations in 2015. VERIFIED He was one of the original funders of OpenAI in 2015. VERIFIED He is a board member of Microsoft, which later became OpenAI's largest investor. DOCUMENTED Hoffman resigned from OpenAI's board prior to the November 2023 coup. VERIFIED That resignation, alongside others, left the board vulnerable enough to execute Altman's firing. DOCUMENTED
Three people. One company. Same table, literally, at the founding of PayPal. And then, when it came time to fund the company that would become the most powerful AI lab on earth, all three of them were in the room again - as Musk, Hoffman, and Thiel all pledged to OpenAI's founding.
This is not coincidence. This is how capital networks operate. The relationships forged at PayPal between 1999 and 2002 became the default trust graph for Silicon Valley for the next two decades. When Altman needed founding money for OpenAI in 2015, he didn't cold-call. He called people who had already sat across tables from each other.
The OpenAI Founding: One Room, Multiple Alumni
OpenAI was founded in December 2015 as a nonprofit. VERIFIED The co-founders included Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, and several others. VERIFIED The founding capital pledge of $1 billion came from Altman, Brockman, Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services, and Infosys. VERIFIED
The founding documents stated that all research would be shared openly. VERIFIED That commitment was progressively abandoned as the models became more capable and commercial. DOCUMENTED
Let's map who the founders were, and what they already had in common before December 2015:
PRE-EXISTING CONNECTIONS (VERIFIED)
Musk and Thiel had been co-owners of PayPal from 2000 until its 2002 sale to eBay - thirteen years before OpenAI.
Thiel and Hoffman had worked together at PayPal and Thiel invested in Hoffman's LinkedIn.
Altman was president of Y Combinator - the accelerator that had funded dozens of companies with Thiel's money and Hoffman's angel checks.
Sutskever had spent years at Google Brain before being recruited to OpenAI.
WHAT'S INTERPRETED (DOCUMENTED)
Whether the "open" nonprofit framing was always a legal strategy rather than a genuine commitment.
Whether the 2019 conversion to for-profit was planned from the beginning or a pivot.
Whether the original funders knew the nonprofit structure would be dismantled.
Altman dropped out of Stanford in 2005, co-founded Loopt, and joined Y Combinator in 2011. VERIFIED He became YC's president in February 2014, giving him a direct line to nearly every serious startup founder in Silicon Valley for five years. VERIFIED He co-founded OpenAI in December 2015 while still at YC, and became OpenAI's CEO in 2019. VERIFIED By June 2024, his personal investment portfolio included stakes in over 400 companies, some of which conduct business with OpenAI - a fact that has raised conflict of interest questions. DOCUMENTED
The For-Profit Conversion
In 2019, OpenAI created a "capped-profit" subsidiary. VERIFIED The structure allowed outside investment while theoretically preserving the nonprofit's control. VERIFIED Microsoft then invested $1 billion, and subsequently over $13 billion total. VERIFIED As of the 2025 restructuring, Microsoft owned approximately 27% of the company. VERIFIED
The timeline: nonprofit in 2015. For-profit subsidiary in 2019. $10 billion Microsoft commitment in 2023. $6.6 billion share sale at $500 billion valuation in October 2025. VERIFIED
The original stated mission - to share AI research openly for humanity's benefit - coexists with a company now valued at half a trillion dollars, in which one of the world's largest tech corporations holds a 27% stake. DOCUMENTED Whether these two things are in tension is left as an exercise for the reader.
The Anthropic Split: When the Alumni Faction Broke Away
In 2021, a group of senior OpenAI employees left to found a new company. Anthropic. Led by siblings Dario and Daniela Amodei. VERIFIED
The story the press ran: principled defection over safety concerns. The departing employees disagreed with OpenAI's direction toward commercialization.
The story the record shows: the people who left had been hired into OpenAI's most senior research and policy positions, had helped build its most capable models, and then founded a direct competitor - which immediately raised hundreds of millions in funding and, as of February 2026, is valued at $380 billion. VERIFIED
Dario Amodei received a PhD in biophysics from Princeton and was a postdoc at Stanford Medical School before working briefly at Baidu (November 2014 to October 2015) and then Google. VERIFIED He joined OpenAI in 2016. VERIFIED He became VP of Research, making him the senior technical leader of the most powerful AI research organization in the world. He left in 2021 and co-founded Anthropic with his sister and other OpenAI alumni, citing "directional differences." VERIFIED In November 2023, when OpenAI's board fired Sam Altman, they reportedly contacted Amodei about replacing Altman and potentially merging the two companies. Amodei declined both. DOCUMENTED
Daniela Amodei joined Stripe in 2013 as an early employee before transitioning to OpenAI in 2018. VERIFIED At OpenAI she managed the GPT-2 development team and moved into safety and policy, eventually becoming VP. VERIFIED She left in 2020 to co-found Anthropic. VERIFIED She married Holden Karnofsky in 2017. VERIFIED Karnofsky is co-founder of Open Philanthropy (formerly GiveWell Labs), the primary institutional funder of Effective Altruism-aligned causes. VERIFIED This creates a direct family connection between the president of Anthropic and the leadership of the EA funding apparatus that also funds AI safety research across multiple organizations including OpenAI competitors.
The Effective Altruism Thread
Effective Altruism - the philosophical and social movement organized around evidence-based maximization of positive impact - became deeply intertwined with the AI safety discourse. VERIFIED EA's Silicon Valley presence is concentrated around elite universities and the tech industry in the San Francisco Bay Area. VERIFIED
The movement received its largest individual donor in Sam Bankman-Fried, founder of the FTX cryptocurrency exchange, prior to its November 2022 collapse. VERIFIED SBF's donations to EA-aligned causes totaled hundreds of millions before FTX filed for bankruptcy and Bankman-Fried was convicted of fraud. VERIFIED
The EA-AI safety nexus matters for one specific reason: it created a shared ideological framework across organizations that publicly compete. Anthropic's Daniela Amodei is married to one of EA's primary funders. OpenAI's founding documents reflect EA-adjacent concerns about existential risk from AGI. The discourse around "safe AI" that has shaped regulatory conversations globally emerged substantially from EA-influenced thinkers who move between these organizations.
DOCUMENTED
Daniela Amodei married Holden Karnofsky (Open Philanthropy co-founder) in 2017.
Open Philanthropy has funded AI safety organizations including the Machine Intelligence Research Institute.
EA principles explicitly influenced OpenAI's founding mission framing.
SBF was a major EA donor before FTX's collapse.
INTERPRETED
Whether EA ideology functions as coordination mechanism or genuine independent conviction.
Whether "AI safety" branding by competing companies reflects shared ideology or competitive strategy.
Whether the EA network actively shapes regulatory positions across multiple AI labs simultaneously.
Sutskever: The Scientist at the Center of the Coup
Born in Nizhny Novgorod, Soviet Union, in 1986, Sutskever moved to Israel at age five and to Canada at sixteen, where he enrolled directly as a third-year undergraduate at the University of Toronto. VERIFIED His PhD advisor was Geoffrey Hinton - widely considered the "godfather of deep learning." VERIFIED With Hinton and Alex Krizhevsky, he co-invented AlexNet in 2012, the convolutional neural network that won ImageNet and catalyzed the modern deep learning era. VERIFIED After a brief postdoc with Andrew Ng at Stanford, he joined Google Brain when Google acquired Hinton's startup DNNresearch in 2013, and left to co-found OpenAI in 2015. VERIFIED He served as OpenAI's chief scientist, overseeing the research breakthroughs that produced GPT, DALL-E, and ChatGPT. VERIFIED
November 2023: The Coup That Wasn't
On November 17, 2023, OpenAI's board of directors fired Sam Altman. VERIFIED The four members who voted for his removal were Ilya Sutskever, Adam D'Angelo (CEO of Quora), Tasha McCauley, and Helen Toner; Altman himself and chairman Greg Brockman also sat on the board but were excluded from the decision. VERIFIED
The official statement cited a "lack of confidence in his ability to continue leading OpenAI." VERIFIED
What happened over the next five days has no clean analogue in corporate history. Approximately 700 of OpenAI's 770 employees threatened to quit unless Altman was reinstated. DOCUMENTED Microsoft CEO Satya Nadella offered Altman and most of the OpenAI team positions at Microsoft. DOCUMENTED Within 72 hours, the power dynamics had completely inverted. On November 22, Altman was reinstated as CEO and a new board was formed. VERIFIED
Sutskever, according to reporting by Kara Swisher and the Wall Street Journal, was instrumental in executing Altman's removal. DOCUMENTED He later posted a public statement saying he regretted his participation. VERIFIED He subsequently stepped down from the OpenAI board. VERIFIED
In June 2024, Sutskever co-founded Safe Superintelligence Inc. with Daniel Gross and Daniel Levy. VERIFIED Within a year, the company was valued at over $30 billion - without having released a single product. VERIFIED
The board vacancies that made the coup possible: Reid Hoffman, Shivon Zilis, and former Republican representative Will Hurd had all resigned from the OpenAI board before November 2023. VERIFIED Their departures left the board smaller and more ideologically uniform, enabling the four remaining members to act. Whether these departures were coordinated, sequential, or independent is not established in the public record.
What is documented: Hoffman, one of OpenAI's original funders and a board member, was no longer on the board when the coup happened. Sutskever, who had helped build the company and shared Hinton's safety-focused worldview, was. Altman, who had reportedly been pursuing billions in chip hardware deals with Middle Eastern sovereign wealth funds, including Saudi Arabia's, was the target. DOCUMENTED
The coup failed. Altman returned. Sutskever left. But the fault lines it exposed - between pure research ideology and commercial ambition - remain. And those fault lines run straight through the same friendship graph that built the company.
DeepMind and the British Node
Demis Hassabis was born in London in 1976 to a Greek Cypriot father and Chinese Singaporean mother. VERIFIED A chess prodigy, he reached master standard at 13 with an Elo of 2300. VERIFIED He read computer science at Queens' College, Cambridge, and later completed a PhD in neuroscience at University College London. VERIFIED He co-founded DeepMind Technologies in 2010 along with Mustafa Suleyman and Shane Legg. VERIFIED Google acquired DeepMind in 2014 for a reported £400 million - the company's largest European acquisition at that time. VERIFIED In 2024, Hassabis and John Jumper were awarded the Nobel Prize in Chemistry for AlphaFold's contributions to protein structure prediction. VERIFIED He serves as a UK Government AI Adviser. VERIFIED
Suleyman was born in London in 1984. His father is Syrian, his mother English. VERIFIED He met DeepMind co-founder Demis Hassabis through his best friend - who was Hassabis's younger brother - and the two often discussed how they could make a positive impact on the world. VERIFIED After dropping out of Oxford at 19, Suleyman worked in human rights policy for London Mayor Ken Livingstone and ran conflict resolution consultancy Reos Partners before co-founding DeepMind in 2010. VERIFIED After Google's acquisition of DeepMind, Suleyman became head of applied AI. He later co-founded Inflection AI in 2022. VERIFIED In 2024, Microsoft acquired Inflection AI's technology and key personnel, with Suleyman becoming CEO of Microsoft AI. VERIFIED
The Hassabis-Suleyman connection is a useful data point for the network thesis: they didn't meet at a prestigious conference or an elite university. They met because Suleyman's best friend was Hassabis's younger brother. A childhood-adjacent relationship that became one of the foundational partnerships in global AI. VERIFIED
The Google acquisition of DeepMind in 2014 pulled both Hassabis and Suleyman into the orbit of the world's largest search company, whose investments in AI research infrastructure - TPUs, Google Brain, massive training clusters - were decades ahead of public awareness.
The Ethics Board That Never Surfaced
As part of its 2014 acquisition of DeepMind, Google agreed to establish an independent ethics board to oversee the research. DOCUMENTED That board was never made public and effectively ceased to function. ALLEGED The specifics of what happened to the oversight commitments made at acquisition remain largely opaque. ALLEGED
In the years that followed, Google's AI ethics commitments drew renewed scrutiny after the high-profile firings of prominent AI ethics researchers and the subsequent restructuring of its ethical-AI team. The dissolution of external oversight mechanisms at AI labs - whether at Google, OpenAI, or others - is a pattern that runs across the sector. DOCUMENTED
Jensen Huang and the Hardware Monopoly No One Talks About
Born in Taipei in 1963, Huang co-founded Nvidia in 1993 - famously starting the initial meetings at a Denny's restaurant in San Jose. VERIFIED He earned his master's degree from Stanford University. VERIFIED As of January 2026, Forbes estimates his net worth at $164.1 billion, making him the seventh-wealthiest person on earth. VERIFIED Nvidia became the first company to reach a market capitalization of over $5 trillion in October 2025. VERIFIED Lisa Su, CEO of AMD - Nvidia's closest chip competitor - is Jensen Huang's cousin. VERIFIED
The cousin relationship between Huang and Su is one of those facts that gets mentioned in passing and then dropped. It's worth sitting with.
Nvidia and AMD are the two dominant GPU manufacturers. Their chips are the hardware substrate on which almost all frontier AI training runs. The CEOs of both companies are cousins. VERIFIED They reportedly do not have a close relationship. ALLEGED But the fact that the two people at the top of the compute supply chain that every AI lab depends on share DNA is the kind of detail that, in any other industry, would generate sustained coverage.
Huang's relationship to the AI network is different in kind from the others. He didn't co-found the labs. He supplies their oxygen. Every H100 that trains GPT-5, every cluster that runs Claude's inference, every server rack in every data center owned by Google DeepMind - they all run on hardware Huang's company built.
In the first quarter of 2024, Nvidia had a 70-80% market share in AI accelerator chips. DOCUMENTED There is no AI arms race without Nvidia. There is no OpenAI, no Anthropic, no DeepMind operating at current scale without the machines Jensen Huang's company manufactures.
He supplies the foundation. All of them. Simultaneously. The "competitors" are his customers.
The Shared Investors: Same Money, Different Logos
If you follow the money across the ostensibly competing AI labs, a pattern emerges that is difficult to attribute to coincidence.
| COMPANY | KEY INVESTORS | SHARED WITH |
|---|---|---|
| OpenAI | Microsoft ($13B+), Thiel (founding), Hoffman (founding) | — |
| Anthropic | Google ($300M+), Amazon ($4B+), Spark Capital | Google also funds DeepMind |
| DeepMind | Owned by Google/Alphabet | Google also in Anthropic |
| Inflection AI | Microsoft (acquisition), Reid Hoffman (co-founder) | Hoffman also in OpenAI founding |
| xAI | Elon Musk (self-funded + VCs) | Musk co-founded OpenAI |
Google has significant financial exposure to DeepMind (which it owns outright) and Anthropic (which it has invested hundreds of millions into). DOCUMENTED Amazon has invested up to $4 billion in Anthropic. DOCUMENTED Microsoft has invested over $13 billion in OpenAI. VERIFIED
The three largest tech companies on earth - Microsoft, Google, Amazon - have collectively placed massive financial bets across what are publicly described as competing AI labs. The "competition" narrative coexists with a financial structure in which Big Tech has hedged across every meaningful player. They don't need to pick a winner. They own positions in all of them.
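The hedging structure can be made concrete in a few lines. The positions below are a simplification of the DOCUMENTED and VERIFIED claims in this section - stake sizes and completeness are not implied:

```python
# Investor -> set of AI labs in which that investor holds a documented
# position, per the claims in this article. Illustrative, not exhaustive.
POSITIONS = {
    "Microsoft": {"OpenAI", "Inflection AI"},
    "Google":    {"DeepMind", "Anthropic"},
    "Amazon":    {"Anthropic", "OpenAI"},  # AWS was a founding OpenAI pledger
    "Thiel":     {"OpenAI"},
    "Hoffman":   {"OpenAI", "Inflection AI"},
}

def hedged(positions):
    """Return investors holding stakes in more than one 'competing' lab."""
    return {inv: labs for inv, labs in positions.items() if len(labs) > 1}

for investor, labs in sorted(hedged(POSITIONS).items()):
    print(f"{investor} holds positions in: {', '.join(sorted(labs))}")
```

Filtering for multi-lab investors surfaces all three hyperscalers, plus Hoffman - each positioned across labs that are publicly described as rivals.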
The regulatory moat thesis: DOCUMENTED Multiple AI lab executives, including Sam Altman, have testified before the US Congress in support of AI regulation. Critics including Marc Andreessen have argued publicly that the "safety regulation" push, coming primarily from the most well-capitalized labs, functions as a competitive moat - making compliance costs prohibitive for smaller entrants while the large incumbents with billion-dollar war chests can absorb them. Whether this is intentional strategy or emergent effect is a matter of documented debate.
The Conference Circuit: Same Rooms, Year After Year
There is an ecology of elite conferences where relationships between the AI network's members get maintained and extended. The Bilderberg Group, the Allen and Company Sun Valley conference, Davos, the Aspen Ideas Festival. These are not secret meetings. They are documented. What they provide is a venue - outside the normal press apparatus - for principals to meet without formal agendas.
Reid Hoffman has been a member of the Bilderberg Group since at least 2011 and joined the Council on Foreign Relations in 2015. VERIFIED Peter Thiel has attended Bilderberg meetings. DOCUMENTED Both have been documented at Allen and Company's Sun Valley conference, the annual gathering of media, tech, and finance leaders. DOCUMENTED
The point is not that secret things happen at these conferences. The point is that relationships maintained informally over years, at dinner tables in Sun Valley or Bilderberg breakout sessions, are the substrate from which formal partnerships emerge. By the time Altman called Thiel and Hoffman in 2015 about funding OpenAI, those relationships had been warm for over a decade.
The Talent Cartel: The Same 200 Researchers
There is a more granular version of the network thesis that operates below the founder level: the rotation of senior researchers between labs.
Ilya Sutskever moved from University of Toronto (Hinton's lab) to Google Brain to OpenAI. VERIFIED Dario Amodei moved from Baidu to Google to OpenAI to Anthropic. VERIFIED John Schulman, an OpenAI co-founder and key researcher on reinforcement learning from human feedback, left OpenAI for Anthropic in 2024. DOCUMENTED Andrej Karpathy, an OpenAI co-founder, left for Tesla, returned to OpenAI, left again. DOCUMENTED
The pool of people with genuine expertise in frontier model training is extremely small. Estimates vary, but multiple ML researchers place the number of people capable of contributing meaningfully to frontier model development somewhere between 200 and 500 globally. DOCUMENTED
This creates a structural reality: the "competing" labs are fishing from the same pond. Senior researchers who leave one lab frequently end up at another. They carry knowledge, techniques, and relationships with them. The informal channels through which cutting-edge methods diffuse across the industry flow through the same people, even when they're nominally working for "competitors."
Non-compete clauses might slow this rotation elsewhere, but California - where most of these labs are headquartered - makes them largely unenforceable, and the talent pool is too thin and the demand too intense for the rotation to stop.
Government Contracts: The Defense Thread
The AI labs and their connected entities have deep ties to defense and intelligence communities. The specifics vary by company.
Palantir Technologies, founded by Peter Thiel and Alex Karp in 2003, built its initial business primarily on contracts with the CIA, NSA, and US military intelligence apparatus. DOCUMENTED Palantir went public in 2020. In 2025, Palantir began collaborating with the Department of Government Efficiency under the Trump administration alongside Thiel. VERIFIED
DeepMind operates under Alphabet/Google, which has government contracts across cloud infrastructure, mapping, and AI services. DOCUMENTED Demis Hassabis serves as a UK Government AI Adviser. VERIFIED
Microsoft, OpenAI's primary investor, holds large US government and defense contracts through Azure Government and its defense cloud initiatives. DOCUMENTED
Anthropic received DARPA funding for AI safety research. DOCUMENTED
xAI's Grok has been deployed on X (formerly Twitter), which Elon Musk - simultaneously a senior Trump administration advisor and head of DOGE - owns. DOCUMENTED
DOCUMENTED DEFENSE LINKS
Palantir: built on CIA/NSA contracts, now DOGE-linked.
Microsoft: Azure Government, defense cloud.
Google/DeepMind: classified cloud contracts, Hassabis as UK government adviser.
Anthropic: DARPA AI safety funding.
INTERPRETED
Whether these government relationships create alignment of interest between AI labs and state actors.
Whether "AI safety" regulatory frameworks being drafted with industry input serve public or private interests primarily.
David Sacks, former PayPal COO and a member of the PayPal Mafia, was named the White House AI and crypto czar by President Trump in December 2024. VERIFIED The person who shapes US government AI policy was an executive at the same company as Musk, Thiel, and Hoffman over two decades ago.
Timeline: The Connections Predate the Companies
The most important structural argument in this piece is the temporal one. These are not relationships that formed because people ended up in the same industry. The industry formed out of the relationships.

- 1998: Confinity founded (Thiel, Levchin).
- 1999: X.com founded (Musk).
- 2000: Confinity and X.com merge; the combined company becomes PayPal.
- 2002: eBay acquires PayPal for $1.5 billion. Hoffman founds LinkedIn, with Thiel as an early investor.
- 2003: Thiel co-founds Palantir.
- 2010: Hassabis, Suleyman, and Legg found DeepMind.
- 2012: Sutskever, Krizhevsky, and Hinton publish AlexNet.
- 2014: Google acquires DeepMind. Altman becomes president of Y Combinator.
- 2015: OpenAI founded, with pledges from Altman, Musk, Hoffman, Thiel, and others.
- 2019: OpenAI creates its capped-profit subsidiary; Microsoft invests $1 billion.
- 2021: The Amodeis leave OpenAI to found Anthropic.
- 2023: Microsoft commits $10 billion to OpenAI. Musk founds xAI. In November, OpenAI's board fires and reinstates Altman within five days.
- 2024: Sutskever founds Safe Superintelligence. Microsoft absorbs Inflection AI; Suleyman becomes CEO of Microsoft AI. Sacks named White House AI czar.
- 2025: Nvidia becomes the first company past a $5 trillion market cap; OpenAI restructures at a $500 billion valuation.

Run through that timeline again. The companies change. The people don't. The relationships that shape every major decision in this industry were formed between 1998 and 2015 - most of them before any of these companies existed.
The Safety Narrative: Conviction or Competitive Moat?
All of the major AI labs maintain robust public AI safety communications. All of them warn about the risks of misaligned AGI. All of them call for regulation. This is documented and consistent across the board.

Interpretations of this posture break along predictable lines:
The sincere conviction argument: These researchers - particularly those trained under Geoffrey Hinton, or embedded in EA-adjacent communities - genuinely believe that advanced AI poses existential risks and that regulatory frameworks are necessary to prevent catastrophe. Sutskever's entire career trajectory is consistent with this. He left OpenAI not to found a competitor, but to found Safe Superintelligence Inc. - a company whose stated purpose is to develop safe superintelligence rather than commercial products.
The competitive moat argument: DOCUMENTED Critics including Marc Andreessen have argued publicly that AI safety regulation, as designed by or with input from the major incumbent labs, creates compliance costs that smaller entrants cannot absorb. The labs that are loudest about regulation - OpenAI, Anthropic, DeepMind - are also the labs with the largest compute infrastructure and the most mature safety teams. If safety compliance requires a billion-dollar infrastructure and a 200-person safety division, the regulation filters out challengers who might otherwise compete on cost or openness.
Both framings can be simultaneously true. Genuine conviction and structural competitive advantage are not mutually exclusive. What they share is an outcome: a heavily regulated AI industry in which the incumbents - who are all connected to each other by the network this article has traced - retain dominance.
What This Means: The Implications
Here is the honest summary of what the documented record shows:
The AI industry in 2026 is governed by decisions made by approximately 20 people who have known each other for between 10 and 25 years. They went to the same schools, worked at the same companies, invested in each other's ventures, married into each other's ideological networks, and now run companies that are publicly described as competitors.
This is not a conspiracy. It does not require secret coordination. It requires only that social networks operate the way social networks always operate: that trust travels through prior relationships, that capital follows trust, and that a small number of nodes in a network can exert disproportionate influence on the whole.
The implications are structural, not conspiratorial:
- Monoculture risk. When the people building AGI all came from the same three or four labs, went to the same conferences, and share philosophical frameworks developed in EA and long-termist circles, the range of perspectives shaping AI development is narrower than it appears. Genuine intellectual diversity - not demographic diversity, but cognitive and philosophical diversity - is limited when the founder pool is this interconnected.
- Regulatory capture. When the people who have the most input into AI regulation are also the people who stand to benefit most from barriers to entry, the risk of regulatory frameworks that entrench incumbents is real and documented. This is not unique to AI. It is how every industry with high technical complexity and concentrated expertise has eventually been regulated.
- The theater of competition. The framing of OpenAI vs. Anthropic vs. DeepMind as fierce rivals racing to build AGI first obscures the extent to which they operate with shared investors, shared talent, shared ideology, and in some cases shared history. The "race" is real in some dimensions - technical leads, product launches, funding rounds. But it is contested between organizations whose founders have known each other for decades and whose financial backers frequently overlap.
- Accountability gaps. If twenty people know each other well enough to share dinner tables and founding pledges, they are unlikely to be rigorous critics of each other's safety practices, business decisions, or governance failures. The November 2023 OpenAI board crisis revealed what happens when internal governance is tested - it collapsed in five days under investor and employee pressure, and the board was replaced with figures closer to the commercial operation. The people with the nominal authority to oversee these labs are, in significant part, the same people who built and funded them.
"The board no longer has confidence in his ability to continue leading OpenAI." - OpenAI board, November 17, 2023. Five days later, Altman was reinstated.
What that episode showed is that the informal network - employees, investors, the Microsoft relationship - had more real power than the formal governance structure. The people with the money and the relationships won. The oversight body lost. This was not a malfunction. It was the system working as the incentive structure designed it to work.
What Comes Next
The network is not static. New nodes are entering. New capital is coming in from Gulf sovereign wealth funds, from Korean and Japanese conglomerates, from European governments building their own capabilities. The founders who built their relationships in PayPal's early years and Stanford computer science departments are now in their forties and fifties. A second generation is forming.
But the pattern that built this generation persists: trust travels through prior relationships. Capital follows trust. Access determines who gets to build. The next wave of AI founders will be shaped by who they trained under, who they worked with at 2025's equivalent of Google Brain and OpenAI, and which networks they were able to enter.
The restaurant seats may turn over. The Denny's logic doesn't change.
The Verified Record: A Final Summary
Every major fact in this piece has a source. Here is the core network, compressed:
- Thiel, Musk, and Hoffman co-worked at PayPal between 1999 and 2002. All three were founding funders of OpenAI in 2015. VERIFIED
- Thiel invested in Hoffman's LinkedIn in 2003. VERIFIED
- Dario Amodei was VP Research at OpenAI before founding Anthropic in 2021. VERIFIED
- Daniela Amodei married the co-founder of Open Philanthropy - a primary EA funder - in 2017, while working at OpenAI. VERIFIED
- Sutskever was Hinton's PhD student, co-invented AlexNet, and co-founded OpenAI before leading the board coup against Altman. VERIFIED
- Hassabis and Suleyman co-founded DeepMind through a friendship predating either's tech career. VERIFIED
- Jensen Huang is Lisa Su's cousin. The CEOs of Nvidia and AMD share DNA. VERIFIED
- Microsoft, Google, and Amazon have financial positions across multiple AI labs that publicly compete with each other. DOCUMENTED
- Hoffman is a Bilderberg member. David Sacks is Trump's AI czar. Thiel's Palantir collaborates with DOGE. VERIFIED
- The 2023 OpenAI coup failed in five days because investor and employee networks outweighed formal board governance. VERIFIED
The companies are real. The competition is real, in some dimensions. But the "independent success stories" narrative is not. These are nodes in a network that was assembled over two decades, and the network is more important than any individual company within it.
Put it all in one restaurant. It fits.
Sources for this article: Wikipedia entries for Sam Altman, Dario Amodei, Daniela Amodei, Ilya Sutskever, Demis Hassabis, Mustafa Suleyman, Jensen Huang, Peter Thiel, Reid Hoffman, PayPal Mafia, OpenAI, and Removal of Sam Altman from OpenAI, all accessed March 2026. Effective Altruism entry (Wikipedia). All confidence tags reflect direct source availability.