# AI Whistleblower: We Are Being Gaslit By The AI Companies! They’re Hiding The Truth About AI! > Fact-check by Held True | https://heldtrue.com - Fact-check and claim verification for YouTube videos. - Channel: The Diary Of A CEO - Duration: 2h9m12s - Published: 2026-03-26 - Analyzed: 2026-03-28 - Views: 1,020,323 - Original video: https://www.youtube.com/watch?v=Cn8HBj8QAbk - Video and analysis: https://heldtrue.com/video/Cn8HBj8QAbk ## Speakers - Karen Hao - Steven Bartlett - Sebastian Siemiatkowski ## Claims (290 total) ### ch1-1: TRUE - Speaker: Karen Hao - Claim: Karen Hao has been covering the tech industry for over 8 years. - TLDR: Karen Hao began covering tech at Quartz in 2017, making her career roughly 9 years long as of this March 2026 video. - Explanation: Wikipedia confirms she joined Quartz as a tech reporter in 2017, then moved to MIT Technology Review (2018-2022), the Wall Street Journal (2022-2023), and freelance work. Her book published in May 2025 is described as the culmination of 'over seven years of reporting,' and by March 2026 the total is approximately 9 years, consistent with 'over 8 years.' - Sources: - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao) - [About — Karen Hao](https://karendhao.com/about) ### ch1-2: TRUE - Speaker: Karen Hao - Claim: Karen Hao interviewed over 250 people, including former or current OpenAI employees and executives, for her research. - TLDR: Karen Hao conducted over 300 interviews with more than 250 people, including over 90 current and former OpenAI employees and executives. - Explanation: Multiple sources confirm the book is based on over 300 interviews with around 260 people. More than 90 of those were current or former OpenAI executives and employees, with additional interviews spanning Microsoft, Anthropic, Meta, Google DeepMind, and Scale AI. - Sources: - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI) - ["We Are Being Gaslit By The AI Companies!" - Karen Hao on DOAC Podcast (Transcript) – The Singju Post](https://singjupost.com/diary-of-a-ceo-w-ai-critic-karen-hao-on-empires-of-ai-transcript/) - [Inside OpenAI's empire: A conversation with Karen Hao | MIT Technology Review](https://www.technologyreview.com/2025/07/09/1119784/inside-openais-empire-a-conversation-with-karen-hao/) ### ch1-3: TRUE - Speaker: Karen Hao - Claim: Karen Hao has internal documents showing that AI company leaders are purposely trying to create a feeling of inevitability or necessity within the public in order to extract and exploit. - TLDR: Hao's book 'Empire of AI' is based on leaked internal documents (emails, Slack messages) and 300+ interviews showing OpenAI leaders deliberately shaped public narratives around inevitability to advance commercial and extractive goals. - Explanation: Search results confirm Hao's book draws on 'a library of leaked emails and Slack messages' alongside over 300 interviews with more than 250 people.
The book documents how OpenAI executives responded to critical coverage by adjusting public messaging rather than internal practices, and how leaders like Sam Altman used 'inevitability' rhetoric (e.g., 'if we don't build AGI, someone else will') to deflect scrutiny of labor exploitation and resource extraction. Hao explicitly argues tech companies have 'hijacked the public's imagination' for how AI should be developed, consistent with her statement in the clip. - Sources: - [Inside OpenAI's empire: A conversation with Karen Hao | MIT Technology Review](https://www.technologyreview.com/2025/07/09/1119784/inside-openais-empire-a-conversation-with-karen-hao/) - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI) - [Cracking the 'Empire of AI': Author Karen Hao on power ...](https://www.ibm.com/think/news/cracking-empire-of-ai) - [Empire of AI by Karen Hao explores global costs of AI progress - Rest of World](https://restofworld.org/2025/karen-hao-empire-of-ai-book/) ### ch1-4: TRUE - Speaker: Karen Hao - Claim: Elon Musk, Mark Zuckerberg, and Sam Altman all profit enormously from the narrative that AI acceleration is necessary for civilizational dominance. - TLDR: All three have massive financial stakes in AI and have publicly used civilizational-scale framing to promote AI acceleration. - Explanation: Musk (xAI, raised $10B+), Zuckerberg (Meta spending hundreds of billions on AI), and Altman (OpenAI/Stargate) all profit directly from AI growth. Each has invoked civilizational language: Musk warned senators of AI's 'civilizational risk,' Altman promotes 'civilizational abundance,' and Zuckerberg frames AI as a civilizational leap. Critics and academics, including Karen Hao in her book (based on interviews with roughly 260 people), argue this narrative serves to concentrate power and deflect regulatory scrutiny. - Sources: - [Elon Musk warns of 'civilizational risk' posed by AI in meeting with tech CEOs and senators](https://www.nbcnews.com/politics/congress/big-tech-ceos-ai-meeting-senators-musk-zuckerberg-rcna104738) - [What AI's biggest leaders have to say about the future of AI](https://qz.com/ai-future-altman-zuckerberg-bezos-predictions) - [Dust to data centers: The year AI tech giants, and billions in debt, began remaking the American landscape](https://www.cnbc.com/2025/12/31/ai-data-centers-debt-sam-altman-elon-musk-mark-zuckerberg.html) - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI) ### ch1-5: TRUE - Speaker: Karen Hao - Claim: AI companies lay claim to the intellectual property of artists, writers, and creators in the pursuit of training their models. - TLDR: AI companies have widely used copyrighted works from artists, writers, and creators to train models, often without permission, triggering dozens of lawsuits. - Explanation: Numerous lawsuits from authors, artists, news organizations, and record labels confirm that major AI companies (OpenAI, Meta, Anthropic, Google, Stability AI, and others) scraped and used copyrighted content without authorization for model training. Over 50 federal copyright cases are pending in U.S. courts, and a $1.5 billion settlement between Anthropic and authors further substantiates the practice. This is consistent with Karen Hao's characterization.
- Sources: - [AI's War in the Courtroom: Copyright Disputes Spike in 2025](https://www.bestlawfirms.com/articles/ai-war-in-the-courtroom-copyright-disputes-spike-in-2025/7186) - [Copyright and AI: the Cases and the Consequences | Electronic Frontier Foundation](https://www.eff.org/deeplinks/2025/02/copyright-and-ai-cases-and-consequences) - [AI companies are finally being forced to cough up for training data | MIT Technology Review](https://www.technologyreview.com/2024/07/02/1094508/ai-companies-are-finally-being-forced-to-cough-up-for-training-data/) - [Case Tracker: Artificial Intelligence, Copyrights and Class Actions | BakerHostetler](https://www.bakerlaw.com/services/artificial-intelligence-ai/case-tracker-artificial-intelligence-copyrights-and-class-actions/) ### ch1-6: TRUE - Speaker: Karen Hao - Claim: AI companies exploit labor in a way that breaks the career ladder: workers get laid off, then work to train models on the very job they were just laid off from, which perpetuates further layoffs as those models develop the relevant skills. - TLDR: Multiple sources confirm that laid-off professionals are being hired as gig workers to train the AI models performing their former jobs, creating a self-reinforcing cycle of displacement. - Explanation: Reporting from HBR, MIT Technology Review, and multiple industry outlets documents exactly this pattern: lawyers, writers, coders, and other white-collar workers who lost jobs to AI are being contracted by firms like Scale AI and Mercor to produce training data for the very systems that displaced them, accelerating further automation. One worker was quoted describing it as being 'invited to train the model to do the worst version of my job imaginable.' The 'broken career ladder' framing Hao uses is also corroborated, as entry-level positions are eliminated while gig training work is precarious and lower-paid. - Sources: - [How Laid-Off Professionals Are Being Hired to Train the AI That Replaced Them](https://www.techloy.com/new-report-shows-ai-is-replacing-workers-then-hiring-them-to-train-the-systems-taking-their-jobs/) - [This Startup Is Getting Rich by Paying Laid-Off Workers to Train the AI That Replaced Them](https://ucstrategies.com/news/this-startup-is-getting-rich-by-paying-laid-off-workers-to-train-the-ai-that-replaced-them/) - [The Exploited Labor Behind Artificial Intelligence | NOEMA](https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/) - [Companies Are Laying Off Workers Because of AI's Potential—Not Its Performance](https://hbr.org/2026/01/companies-are-laying-off-workers-because-of-ais-potential-not-its-performance) ### ch1-7: DISPUTED - Speaker: Karen Hao - Claim: Many of the new jobs created by AI are worse than the jobs they replaced. - TLDR: The claim has strong support for specific AI-adjacent jobs (data labelers, content moderators) but broader economic research paints a more mixed, polarized picture rather than a uniform decline in job quality. - Explanation: There is substantial evidence that new AI-adjacent roles such as data labelers and content moderators are low-paid, psychologically damaging, and often worse than prior alternatives. MIT economist Daron Acemoglu's work on 'so-so technologies' also supports the view that automation can degrade job quality, particularly for lower-skill workers in the period since 1987.
However, other credible research (Stanford, Yale Budget Lab, Goldman Sachs) finds that AI creates a bifurcated labor market, improving conditions for experienced workers in augmented roles while hurting entry-level workers, rather than making new jobs uniformly worse. - Sources: - [Long hours and low wages: the human labour powering AI's development](https://theconversation.com/long-hours-and-low-wages-the-human-labour-powering-ais-development-217038) - [Reimagining the future of data and AI labor in the Global South | Brookings](https://www.brookings.edu/articles/reimagining-the-future-of-data-and-ai-labor-in-the-global-south/) - [Automation and New Tasks: How Technology Displaces and Reinstates Labor - American Economic Association](https://www.aeaweb.org/articles?id=10.1257/jep.33.2.3) - [Study finds stronger links between automation and inequality | MIT News](https://news.mit.edu/2020/study-inks-automation-inequality-0506) - [Evaluating the Impact of AI on the Labor Market: Current State of Affairs | The Budget Lab at Yale](https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs) - [Who's Losing Jobs to AI? New Stanford Analysis Breaks It Down | TIME](https://time.com/7312205/ai-jobs-stanford/) ### ch1-8: TRUE - Speaker: Karen Hao - Claim: AI companies have created an environmental and public health crisis. - TLDR: Multiple credible institutions document significant environmental and public health harms from AI data center expansion, supporting the core claim. - Explanation: The U.S. GAO, IEA, IEEE, and peer-reviewed studies confirm that AI data centers drive massive energy and water consumption, emit NOx from diesel backup generators at rates 200-600x higher than natural gas plants, and caused an estimated $6 billion in public health damages from air pollution in 2023 alone. The Lancet and UNEP have also characterized these cumulative impacts as a growing crisis, disproportionately affecting marginalized communities. - Sources: - [U.S. GAO - Artificial Intelligence: Generative AI's Environmental and Human Effects](https://www.gao.gov/products/gao-25-107172) - [Are We Ignoring AI's Role in the Public Health Crisis?](https://spectrum.ieee.org/data-centers-pollution) - [AI has an environmental problem. Here's what the world can do about that.](https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about) - [AI's growing thirst for water is becoming a public health risk | Al Jazeera](https://www.aljazeera.com/opinions/2026/1/21/ais-growing-thirst-for-water-is-becoming-a-public-health-risk) - [Environmental impact of artificial intelligence - Wikipedia](https://en.wikipedia.org/wiki/Environmental_impact_of_artificial_intelligence) ### ch1-9: TRUE - Speaker: Karen Hao - Claim: AI companies spend hundreds of millions of dollars to try to kill legislation that gets in their way. - TLDR: Big Tech and AI companies have spent well over $1 billion in political and lobbying expenditures aimed at shaping or blocking AI regulation. Hundreds of millions in lobbying alone is confirmed. - Explanation: Public Citizen documented at least $1.1 billion in Big Tech political spending during the 2024 election cycle and 2025, targeting state AI laws. The tech sector spent $314 million on federal lobbying in just the first nine months of 2025, and a16z and OpenAI's Greg Brockman put $100 million into a super PAC explicitly opposing strict AI regulation.
The claim that AI companies spend 'hundreds of millions' to kill legislation is well-supported and is actually an understatement of the full scale. - Sources: - [$1.1 Billion in Big Tech Political Spending Fuels Attacks on State AI Laws - Public Citizen](https://www.citizen.org/news/1-1-billion-in-big-tech-political-spending-fuels-attacks-on-state-ai-laws/) - [As Big Tech Gears Up for the 2026 Midterms, Its Lobbying Operations Continue Unabated - Issue One](https://issueone.org/articles/big-tech-lobbying-2025-q3/) - [Big Tech Has Spent More than $1 Billion to Stop States From Regulating AI | GovFacts](https://govfacts.org/accountability-ethics/lobbying/big-tech-has-spent-more-than-1-billion-to-stop-states-from-regulating-ai/) - [AI companies upped their federal lobbying spend in 2024 amid regulatory uncertainty | TechCrunch](https://techcrunch.com/2025/01/24/ai-companies-upped-their-federal-lobbying-spend-in-2024-amid-regulatory-uncertainty/) ### ch1-10: TRUE - Speaker: Karen Hao - Claim: AI companies censor researchers whose findings are inconvenient to the companies' agendas. - TLDR: Multiple credible sources document AI companies suppressing or discouraging researchers from publishing findings that reflect negatively on their products. - Explanation: At OpenAI, economist Tom Cunningham and policy research head Miles Brundage both departed citing pressure to avoid publishing unflattering findings, and Gizmodo reported accusations that OpenAI is 'self-censoring research that paints AI in a bad light.' Meta and Google have similarly deprioritized independent research labs in favor of product development. Stanford's 2025 Foundation Model Transparency Index found declining transparency across major AI companies. - Sources: - [OpenAI Accused of Self-Censoring Research That Paints AI In a Bad Light](https://gizmodo.com/openai-accused-of-self-censoring-research-that-paints-ai-in-a-bad-light-2000697413) - [OpenAI Researcher Quits, Saying Company Is Hiding the Truth](https://futurism.com/artificial-intelligence/openai-researcher-quits-hiding-truth) - [AI research takes a backseat to profits as Silicon Valley prioritizes products over safety, experts say](https://www.cnbc.com/2025/05/14/meta-google-openai-artificial-intelligence-safety.html) - [Transparency in AI is on the decline | Stanford Report](https://news.stanford.edu/stories/2025/12/foundation-model-transparency-index-ai-companies-information) ### ch1-11: TRUE - Speaker: Karen Hao - Claim: Research exists showing that the same AI capabilities could be developed in ways that do not produce the current unintended consequences and harms. - TLDR: A substantial body of academic and institutional research does exist exploring alternative AI development approaches designed to reduce harms, bias, environmental costs, and worker exploitation. - Explanation: Multiple peer-reviewed studies, international safety reports (e.g., the 2025/2026 International AI Safety Reports), and research from institutions like Georgetown's CSET and the Center for AI Safety document alternative training methods, pre-deployment harm taxonomies, and fairer data practices. These explicitly argue that comparable AI capabilities can be achieved with fewer unintended consequences through different design and governance choices. Hao's claim accurately reflects an active and well-established research area.
- Sources: - [International AI Safety Report 2025 | International AI Safety Report](https://internationalaisafetyreport.org/publication/international-ai-safety-report-2025) - [AI Mismatches: Identifying Potential Algorithmic Harms Before AI Development](https://arxiv.org/html/2502.18682v1) - [Understanding AI Harms: An Overview | Center for Security and Emerging Technology](https://cset.georgetown.edu/article/understanding-ai-harms-an-overview/) - [Harm Reduction: A Strategy to Mitigate the Risks of AI](https://www.garp.org/risk-intelligence/technology/harm-reduction-ai-102723) - [Helpful, harmless, honest? Sociotechnical limits of AI alignment and safety through Reinforcement Learning from Human Feedback - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC12137480/) ### ch14-1: INEXACT - Speaker: Sebastian Siemiatkowski - Claim: Klarna was early to release AI to support its customer service, which resulted in more calls being handled by AI, with those interactions being faster and more qualitative. - TLDR: Klarna did deploy AI early for customer service with measurably faster resolution times, but the 'more qualitative' claim is complicated by Siemiatkowski himself admitting quality suffered. - Explanation: Klarna's AI assistant handled two-thirds of customer service chats in its first month, cutting resolution time from ~11 minutes to ~2 minutes, and achieved customer satisfaction scores on par with human agents. However, Siemiatkowski publicly acknowledged Klarna had 'gone too far' with AI and that cost overshadowed quality, leading to rehiring human agents. The faster speed is well-documented, but describing the AI interactions as simply 'more qualitative' oversimplifies a more mixed outcome. - Sources: - [Klarna AI assistant handles two-thirds of customer service chats in its first month | Klarna International](https://www.klarna.com/international/press/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month/) - [Klarna changes its AI tune and again recruits humans for customer service | CX Dive](https://www.customerexperiencedive.com/news/klarna-reinvests-human-talent-customer-service-AI-chatbot/747586/) - [Klarna CEO: We're Giving AI More Customer Service Work, Not Less](https://www.cmswire.com/contact-center/klarna-ceo-were-giving-ai-more-customer-service-work-not-less/) ### ch14-2: INEXACT - Speaker: Sebastian Siemiatkowski - Claim: Klarna went from about 6,000 employees to fewer than 3,000 employees over 2 to 3 years after stopping recruiting. - TLDR: Klarna's headcount did fall to under 3,000 over roughly 3 years, but the starting figure was ~5,527 (per IPO filings), not "about 6,000." - Explanation: Klarna's IPO prospectus documents 5,527 full-time employees at end-2022, falling to approximately 2,907 by late 2025. The CEO's "about 6,000" overestimates the documented peak, though some sources cite an earlier peak closer to 7,000. The core narrative of halving the workforce in 2-3 years via AI-assisted natural attrition is well supported. 
- Sources: - [Klarna CEO says AI helped company shrink workforce by 40%](https://www.cnbc.com/2025/05/14/klarna-ceo-says-ai-helped-company-shrink-workforce-by-40percent.html) - [Klarna whittled workforce via AI ahead of IPO | Payments Dive](https://www.paymentsdive.com/news/klarna-buy-now-pay-later-bnpl-payments-workforce-ipo/742627/) - [Klarna on Track to Halve Headcount to 2,000 by End of 2025, FXC Intelligence Reveals | The Fintech Times](https://thefintechtimes.com/klarna-on-track-to-halve-headcount-to-2000-by-end-of-2025-fxc-intelligence-reveals/) ### ch14-3: TRUE - Speaker: Sebastian Siemiatkowski - Claim: Klarna's revenue doubled while its headcount was cut roughly in half. - TLDR: Both claims are supported by evidence. Klarna's headcount fell from roughly 5,500 to around 2,900 (about half), and revenue more than doubled over the same period. - Explanation: Multiple sources confirm Klarna's workforce shrank from approximately 5,527 employees in 2022 to around 2,907 by 2025, achieved via natural attrition and a hiring freeze. Revenue per employee rose from $300K to $1.3M, and quarterly revenue comparisons (e.g., Q3 2025 at $903M vs. Q3 2022 at $433M) confirm a more than doubling. The figures are internally consistent: doubling revenue while halving headcount implies roughly a fourfold rise in revenue per employee, in line with the $300K-to-$1.3M change. A Computer Weekly headline explicitly credits AI with helping Klarna 'double revenues with half the staff.' - Sources: - [Artificial intelligence helps Klarna double revenues with half the staff | Computer Weekly](https://www.computerweekly.com/news/366634565/Artificial-intelligence-helps-Klarna-double-revenues-with-half-the-staff) - [AI enabled Klarna to halve its workforce—now, the CEO is warning workers that other 'tech bros' are sugarcoating just how badly it's about to impact jobs | Fortune](https://fortune.com/2025/10/10/klarna-ceo-sebastian-siemiatkowski-halved-workforce-says-tech-ceos-sugarcoating-ai-impact-on-jobs-mass-unemployment-warning/) - [Here's How Klarna Has Cut Staff in Half While Raising Pay By 60%](https://www.entrepreneur.com/business-news/heres-how-klarna-has-cut-staff-in-half-while-raising-pay-by-60) ### ch14-4: TRUE - Speaker: Sebastian Siemiatkowski - Claim: Klarna avoided layoffs by relying on natural attrition rather than firing people as the workforce shrank. - TLDR: Klarna did shrink its workforce through natural attrition rather than layoffs. Siemiatkowski has consistently described this strategy publicly. - Explanation: Multiple sources confirm that Klarna reduced its headcount (from roughly 7,400 to around 3,000-3,500) by freezing hiring and letting natural turnover do the work, citing an attrition rate of 15-20% per year. No mass layoffs were conducted. This matches exactly what Siemiatkowski states in the transcript. - Sources: - [Klarna CEO says AI helped company shrink workforce by 40%](https://www.cnbc.com/2025/05/14/klarna-ceo-says-ai-helped-company-shrink-workforce-by-40percent.html) - [Klarna Shrinks Workforce By 40% Amid AI Integration, Natural Attrition - BW Businessworld](https://www.businessworld.in/article/klarna-shrinks-workforce-by-40-amid-ai-integration-natural-attrition-556834) - [Klarna's CEO says it stopped hiring thanks to AI but still advertises many open positions](https://techcrunch.com/2024/12/14/klarnas-ceo-says-it-stopped-hiring-thanks-to-ai-but-still-advertises-many-open-positions/) ### ch14-5: INEXACT - Speaker: Sebastian Siemiatkowski - Claim: Klarna expects natural attrition of 10 to 15 percent per year to continue reducing its headcount.
- TLDR: Klarna's attrition-based workforce reduction strategy is real, but Siemiatkowski has publicly cited 15-20% annual attrition, not 10-15% as stated here. - Explanation: Multiple sources quote Siemiatkowski explicitly stating that Klarna's natural attrition rate is '15-20% per year,' which is how the company shrank from roughly 7,000 to 3,000 employees without mass layoffs. Under a hiring freeze the difference matters: 20% annual attrition roughly halves headcount in three years (0.8^3 ≈ 0.51), while 10-15% attrition would take closer to four to seven years. The core strategy of relying on attrition rather than active hiring is accurately described, but the specific percentage range in this transcript (10-15%) is lower than what he has stated in other recorded interviews. This may reflect a transcription imprecision or a variation in the figures he used in this appearance. - Sources: - [Klarna CEO says AI helped company shrink workforce by 40%](https://www.cnbc.com/2025/05/14/klarna-ceo-says-ai-helped-company-shrink-workforce-by-40percent.html) - [AI enabled Klarna to halve its workforce—now, the CEO is warning workers that other 'tech bros' are sugarcoating just how badly it's about to impact jobs](https://fortune.com/2025/10/10/klarna-ceo-sebastian-siemiatkowski-halved-workforce-says-tech-ceos-sugarcoating-ai-impact-on-jobs-mass-unemployment-warning/) ### ch14-6: FALSE - Speaker: Sebastian Siemiatkowski - Claim: In November and December 2025, even highly skeptical and well-renowned engineers, including the founder of Linux, said that coding had been resolved by AI and that you no longer need to code. - TLDR: Linus Torvalds said the opposite in November 2025: vibe coding is fine as a learning tool but 'a horrible idea from a maintenance standpoint' and AI won't eliminate programmers. - Explanation: At the Linux Foundation Open Source Summit in Seoul (November 2025), Torvalds described vibe coding as unsuitable for production code and compared AI to compilers, which 'didn't make programmers go away.' He called the question of developers losing jobs 'a complicated question' and was not even using AI coding tools himself at the time. He never said coding was 'resolved' or that people no longer need to code. - Sources: - [Linus Torvalds: Vibe coding is fine, but not for production • The Register](https://www.theregister.com/2025/11/18/linus_torvalds_vibe_coding/) - [Even Linux Creator Linus Torvalds is Using AI to Code in 2026](https://itsfoss.com/news/linus-torvalds-vibe-coding/) ### ch14-7: TRUE - Speaker: Sebastian Siemiatkowski - Claim: Coding and engineering work has seen a tremendous shift in the last 6 months due to AI. - TLDR: AI has driven a well-documented, substantial shift in coding and engineering work, especially in the period Siemiatkowski references. - Explanation: Multiple sources confirm major AI-driven disruption in software engineering through 2025 and into early 2026, including a reported 27.5% decline in programmer employment between 2023 and 2025, a spike in IT sector unemployment, and widespread adoption of AI coding tools. Siemiatkowski himself has publicly championed this shift at Klarna, where AI adoption reshaped engineering headcount and workflows.
- Sources: - [AI enabled Klarna to halve its workforce—now, the CEO is warning workers that other 'tech bros' are sugarcoating just how badly it's about to impact jobs | Fortune](https://fortune.com/2025/10/10/klarna-ceo-sebastian-siemiatkowski-halved-workforce-says-tech-ceos-sugarcoating-ai-impact-on-jobs-mass-unemployment-warning/) - [AI vs Gen Z: How AI has changed the career pathway for junior developers - Stack Overflow](https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/) - [Impact of AI on the 2025 Software Engineering Job Market](https://www.sundeepteki.org/advice/impact-of-ai-on-the-2025-software-engineering-job-market) - [What Klarna learned from its ambitious AI rollout](https://www.charterworks.com/what-klarna-learned-from-its-ambitious-ai-rollout/) ### ch2-1: TRUE - Speaker: Karen Hao - Claim: Karen Hao studied mechanical engineering at MIT. - TLDR: Karen Hao did study mechanical engineering at MIT, graduating with a B.S. in 2015. - Explanation: Multiple sources, including her personal website and Wikipedia, confirm she earned a B.S. in Mechanical Engineering from MIT in 2015, with a minor in Energy Studies. The claim is fully accurate. - Sources: - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao) - [About — Karen Hao](https://karendhao.com/about) ### ch2-2: TRUE - Speaker: Karen Hao - Claim: The tech startup Karen Hao joined after MIT was focused on building technologies to help facilitate the fight against climate change. - TLDR: Confirmed. The startup was Flux.io, a Google X spin-out, focused on sustainable architecture to help fight climate change. - Explanation: Karen Hao worked as an application engineer at Flux.io, the first startup to spin out of Google X (X Development). Flux aimed to use technology to incentivize more sustainable architecture and urban development, directly matching the claim's description of a mission-driven startup focused on building technologies to facilitate the fight against climate change. - Sources: - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao) - [About — Karen Hao](https://karendhao.com/about) - [Karen Hao – Technology@Wooster](https://inside.wooster.edu/technology/karen-hao/) ### ch2-3: TRUE - Speaker: Karen Hao - Claim: The board of the startup Karen Hao joined fired the CEO because the company was not profitable. - TLDR: Karen Hao has consistently told this story across multiple interviews and profiles: the board of her climate-tech startup fired the CEO due to lack of profitability. - Explanation: Multiple independent sources, including Karen Hao's own website, media profiles, and a podcast transcript, confirm that she joined a mission-driven climate-change-focused tech startup after graduating from MIT, and that a few months in, its board fired the CEO because the company was not profitable. This is a well-documented personal anecdote she uses to explain her path into journalism. - Sources: - [About — Karen Hao](https://karendhao.com/about) - [Diary Of A CEO: w/ AI Critic Karen Hao on Empires of AI Transcript](https://singjupost.com/diary-of-a-ceo-w-ai-critic-karen-hao-on-empires-of-ai-transcript/) - [In Conversation with Karen Hao | by Saffron Huang | joininteract | Medium](https://medium.com/joininteract/in-conversation-with-karen-hao-7ce711c1e32b) ### ch2-4: INEXACT - Speaker: Karen Hao - Claim: After two years working in tech, Karen Hao landed a role at MIT Technology Review covering AI full-time. - TLDR: Hao did land at MIT Technology Review covering AI, but not directly after her tech job. 
She had intermediate journalism roles at Mother Jones and Quartz first. - Explanation: LinkedIn data shows Hao joined MIT Technology Review in October 2018, after working at Mother Jones (2016-2017) and Quartz (2017-2018). Before those, she was an application engineer at the first startup to spin out of Google X. The '2 years in tech' figure is plausible, but the framing omits the two intermediate journalism steps before she reached MIT Technology Review. - Sources: - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao) - [About — Karen Hao](https://karendhao.com/about) - [Karen Hao - NYT Bestselling Author of EMPIRE OF AI | LinkedIn](https://www.linkedin.com/in/karendhao/) ### ch2-5: TRUE - Speaker: Karen Hao - Claim: Karen Hao began researching the story documented in her book in 2018, when she took the job at MIT Technology Review. - TLDR: Karen Hao did join MIT Technology Review in 2018, which is consistent with her claim about when she began researching the material for her book. - Explanation: Wikipedia and multiple sources confirm Karen Hao worked at MIT Technology Review from 2018 to 2022 as a senior AI editor. Her claim that 2018 was when she took that job and began researching the story documented in her book is accurate. - Sources: - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao) - [About — Karen Hao](https://karendhao.com/about) ### ch2-6: TRUE - Speaker: Karen Hao - Claim: Karen Hao interviewed over 250 people for the book, conducting over 300 interviews in total. - TLDR: Karen Hao confirmed in her own words that Empire of AI is based on 300+ interviews with over 250 people, including 90+ OpenAI insiders. - Explanation: Multiple sources corroborate the claim. Karen Hao herself posted on X that the book is 'based on 300+ interviews.' Publishers and reviewers also confirm 'more than 300 interviews with current and former employees' and 'around 260 people.' The numbers align precisely with what she states in the podcast. - Sources: - [Karen Hao on X](https://x.com/_KarenHao/status/1908206708037738698?lang=en) - [Empire of AI by Karen Hao | PenguinRandomHouse.com](https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/) - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI) ### ch2-7: TRUE - Speaker: Karen Hao - Claim: Over 90 of the people Karen Hao interviewed were former or current OpenAI employees and executives. - TLDR: Confirmed. Karen Hao interviewed over 250 people (300+ interview sessions in total), with over 90 subjects being former or current OpenAI employees and executives. - Explanation: Multiple sources, including Karen Hao's own X post and the book's Wikipedia page, confirm that her research for 'Empire of AI' involved 300+ interviews with over 250 people, more than 90 of whom were former or current OpenAI employees and executives. This matches the claim precisely. - Sources: - [Karen Hao on X](https://x.com/_KarenHao/status/1908206708037738698?lang=en) - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI) ### ch2-8: TRUE - Speaker: Karen Hao - Claim: The book covers the inside story of OpenAI's first decade. - TLDR: Karen Hao's book does cover the inside story of OpenAI's first decade, from its 2015 founding through roughly 2025. - Explanation: OpenAI was founded in December 2015 and the book was published in May 2025, making the 'first decade' framing accurate.
It is described as the definitive inside account of OpenAI's history, based on over 300 interviews, covering events from its nonprofit origins through the 2023 boardroom crisis and beyond. - Sources: - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI) - [Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI | PenguinRandomHouse.com](https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/) ### ch2-9: TRUE - Speaker: Karen Hao - Claim: AI companies publicly claim that AI is going to benefit everyone as their mission. - TLDR: Major AI companies do publicly state that their mission is to benefit humanity or everyone. OpenAI's official mission is to 'ensure that artificial general intelligence benefits all of humanity.' - Explanation: OpenAI's widely documented mission statement explicitly commits to benefiting all of humanity. Other major AI labs such as Google DeepMind similarly frame their missions around broad human benefit. Karen Hao's characterization of these public claims is accurate. - Sources: - [Four Questions for OpenAI's Mission to Benefit All of Humanity - The Quantum Record](https://thequantumrecord.com/blog/four-questions-for-openai-mission-to-benefit-all/) - [What is OpenAI? Definition and History from TechTarget](https://www.techtarget.com/searchenterpriseai/definition/OpenAI) ### ch10-1: TRUE - Speaker: Karen Hao - Claim: One of the origin stories of OpenAI is a dinner that took place at the Rosewood Hotel in Silicon Valley. - TLDR: A founding dinner at the Rosewood Hotel on Sand Hill Road in Silicon Valley is widely documented as the moment OpenAI was first conceived. - Explanation: Multiple credible sources, including Semafor and accounts based on Wired's 2016 profile, confirm that a dinner at the Rosewood Hotel in Menlo Park (Silicon Valley) in 2015 was the occasion where Sam Altman, Elon Musk, and key researchers first discussed creating OpenAI. The hotel is also associated with Musk's frequent Bay Area visits, consistent with Karen Hao's description. - Sources: - [The secret history of Elon Musk, Sam Altman, and OpenAI | Semafor](https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai) - [Sam Altman · OpenAI](https://www.founderoo.co/playbooks/the-open-ai-founding-story-sam-altmans-unconventional-path-to-ai-innovation-) ### ch10-2: UNVERIFIABLE - Speaker: Karen Hao - Claim: The Rosewood Hotel in Silicon Valley was one of Elon Musk's favorite hotels when he traveled from LA to the Bay Area. - TLDR: The Rosewood Hotel hosted the founding OpenAI dinner and is a well-known Silicon Valley power venue, but no public source confirms it was specifically one of Musk's favorite hotels for LA-to-Bay-Area travel. - Explanation: Multiple sources confirm the Rosewood Sand Hill on Sand Hill Road in Menlo Park was the site of the July 2015 dinner that led to OpenAI's founding, and that Musk attended. The hotel is widely described as a hub for Silicon Valley's tech elite. However, no publicly available source, including excerpts from Karen Hao's book 'Empire of AI,' specifically describes the Rosewood as one of Musk's personal favorites for traveling between LA and the Bay Area. This detail may originate from Hao's private reporting but cannot be independently confirmed. 
- Sources: - [The secret history of Elon Musk, Sam Altman, and OpenAI | Semafor](https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai) - [Book excerpt: "Empire of AI" by Karen Hao - CBS News](https://www.cbsnews.com/news/book-excerpt-empire-of-ai-by-karen-hao-artificial-intelligence-sam-altman-openai/) - [Rub Elbows With Tech Billionaires At This Glamorous Silicon Valley Resort Hotel - Islands](https://www.islands.com/1784517/rub-elbows-tech-billionaires-silicon-valley-resort-hotel-rosewood-sand-hill/) ### ch10-3: TRUE - Speaker: Karen Hao - Claim: Sam Altman organized the Rosewood Hotel dinner with the intention of recruiting the original team that would start OpenAI. - TLDR: Sam Altman did host the Rosewood Sand Hill Hotel dinner in July 2015, and it was indeed the event where OpenAI's founding team began to take shape. - Explanation: Multiple sources, including Fortune and Semafor, confirm that Sam Altman hosted the Rosewood dinner in July 2015. The gathering brought together key figures who would become OpenAI's founding researchers, making it the foundational recruiting event for the organization. Altman himself has acknowledged this dinner as a key moment, though he noted there were roughly 20 similar dinners that year. - Sources: - [The inside story of ChatGPT: How OpenAI founder Sam Altman built the world's hottest technology with billions from Microsoft | Fortune](https://fortune.com/longform/chatgpt-openai-sam-altman-microsoft/) - [The secret history of Elon Musk, Sam Altman, and OpenAI | Semafor](https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai) - [Sam Altman on ChatGPT's First Two Years, Elon Musk and AI Under Trump](https://www.bloomberg.com/features/2025-sam-altman-interview/) ### ch10-4: INEXACT - Speaker: Karen Hao - Claim: Altman cold-emailed Ilya Sutskever to attend the dinner, and Ilya's motivation for coming was specifically to meet Elon Musk. - TLDR: The cold email is confirmed, and Musk's star power was a real draw for Sutskever. However, other sources cite the AGI mission (not just meeting Musk) as a key motivating factor. - Explanation: Sam Altman himself confirmed he cold-emailed Sutskever, though Sutskever initially did not respond. Musk's own account confirms his central role in persuading Sutskever to join, describing it as his toughest recruiting battle. Karen Hao's book frames Musk as the main draw at the dinner, but Altman's public account emphasizes the shared AGI mission as Sutskever's core motivation, making the 'specifically to meet Musk' framing an oversimplification. - Sources: - [Sam Altman Reveals How He'd Recruited Ilya Sutskever To Co-found OpenAI](https://officechai.com/ai/sam-altman-reveals-how-hed-recruited-ilya-sutskever-to-co-found-openai/) - [What Elon Musk has said about Ilya Sutskever, the chief scientist at the center of OpenAI's leadership upheaval | Fortune](https://fortune.com/2023/11/18/elon-musk-ilya-sutskever-openai-leadership-upheaval-sam-altman/) - [Dismantling the Empire of AI with Karen Hao](https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with) ### ch10-5: INEXACT - Speaker: Karen Hao - Claim: Altman also emailed Greg Brockman and Dario Amodei as part of his early recruitment effort for OpenAI. - TLDR: Altman did recruit both Brockman and Amodei ahead of the founding dinner, but Brockman's own account says his initial contact with Altman was a phone call (via Patrick Collison), not an email. 
- Explanation: Greg Brockman's own blog post describes his path to OpenAI as beginning with a phone conversation with Altman arranged by Patrick Collison, after which Altman organized a dinner including Dario Amodei, Ilya Sutskever, Elon Musk, and others. Both Brockman and Amodei were indeed recruited early, but the specific claim that Altman 'emailed' Brockman is not confirmed by Brockman's own account. The cold-email narrative is specifically documented for Ilya Sutskever, not Brockman or Amodei. - Sources: - [My path to OpenAI • Greg Brockman](https://blog.gregbrockman.com/my-path-to-openai) - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI) - [The messy, secretive reality behind OpenAI's bid to save the world | MIT Technology Review](https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/) ### ch10-6: TRUE - Speaker: Karen Hao - Claim: Almost all of the people recruited at the Rosewood Hotel dinner ended up working at OpenAI and then leaving the company. - TLDR: Of the key Rosewood dinner attendees, most did join OpenAI and most have since departed. The 'almost all' qualifier is accurate. - Explanation: The summer 2015 Rosewood Hotel dinner brought together Elon Musk, Ilya Sutskever, Greg Brockman, and Dario Amodei, among others. Musk resigned from OpenAI's board in 2018, Amodei left in 2021 to co-found Anthropic, and Sutskever departed in May 2024 to found Safe Superintelligence. Of the 11 official founding members, 8 have left; only Sam Altman, Greg Brockman, and Wojciech Zaremba remain, consistent with the 'almost all, not every one' framing used in the claim. - Sources: - [OpenAI - Wikipedia](https://en.wikipedia.org/wiki/OpenAI) - [The OpenAI mafia: 15 of the most notable startups founded by alumni | TechCrunch](https://techcrunch.com/2025/04/26/the-openai-mafia-15-of-the-most-notable-startups-founded-by-alumni/) - [OpenAI keeps losing executives. Here's everyone who left so far this year](https://qz.com/openai-lose-executives-mira-murati-ilya-sutskever-ai-1851658726) - [OpenAI co-founder Greg Brockman returns after three months of leave](https://www.cnbc.com/2024/11/12/openai-co-founder-greg-brockman-returns-after-three-months-of-leave.html) ### ch10-7: INEXACT - Speaker: Karen Hao - Claim: Almost all of the original OpenAI recruits ended up leaving the company after clashing with Altman. - TLDR: 8 of 11 original co-founders have left OpenAI, so 'almost all' holds up. But not all departures were specifically due to clashing with Altman. - Explanation: The numerical claim is well-supported: only 3 of the original 11 co-founders remain, meaning roughly 8 have left. Several departures did involve tensions linked to Altman's leadership (Musk's lawsuit, Sutskever's role in the November 2023 board ouster, safety researchers leaving for Anthropic). However, some founders left for reasons not clearly tied to a personal clash with Altman, such as Karpathy departing for Tesla in 2017 and Cheung leaving for Lyft that same year. Framing all departures as specifically stemming from clashes with Altman oversimplifies a more varied picture. - Sources: - [OpenAI Keeps Losing Its Cofounders.
Only 3 of 11 Are Still at the Company](https://www.entrepreneur.com/business-news/chatgpt-cofounders-leaders-leaving-openai-3-left-of-11/478125) - [OpenAI - Wikipedia](https://en.wikipedia.org/wiki/OpenAI) - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI) ### ch10-8: TRUE - Speaker: Steven Bartlett - Claim: Ilya Sutskever left OpenAI and launched a company called Safe Superintelligence. - TLDR: Correct. Ilya Sutskever left OpenAI in May 2024 and co-founded Safe Superintelligence Inc. (SSI) in June 2024. - Explanation: Sutskever announced his OpenAI departure in May 2024 and launched Safe Superintelligence Inc. alongside co-founders Daniel Gross and Daniel Levy on June 19, 2024. The company's stated sole goal is building a safe superintelligence, with no near-term product plans. - Sources: - [Ilya Sutskever - Wikipedia](https://en.wikipedia.org/wiki/Ilya_Sutskever) - [OpenAI co-founder Ilya Sutskever's Safe Superintelligence reportedly valued at $32B | TechCrunch](https://techcrunch.com/2025/04/12/openai-co-founder-ilya-sutskevers-safe-superintelligence-reportedly-valued-at-32b/) ### ch10-9: INEXACT - Speaker: Karen Hao - Claim: Every single tech billionaire has their own AI company. - TLDR: The trend is real and notable, but 'every single' is an overstatement. Major tech billionaires like Bill Gates and Steve Ballmer have no dedicated AI company of their own. - Explanation: Many of the most prominent tech billionaires do have their own AI ventures: Elon Musk (xAI), Mark Zuckerberg (Meta AI/FAIR), Jeff Bezos (heavy Anthropic investment plus Amazon AI), and Larry Page/Sergey Brin (Google DeepMind). However, Steve Ballmer and Bill Gates, both among the world's wealthiest tech figures, benefit from the AI boom only indirectly through Microsoft stock and do not own or operate their own AI companies. The claim functions as a rhetorical generalization, but the absolute 'every single' does not hold up to scrutiny. - Sources: - [Meet the newly minted AI billionaires of 2025](https://qz.com/ai-2025-billionaires-elon-musk-startup-founders) - [Steve Ballmer Makes $1 Billion a Year in Microsoft Dividends - 24/7 Wall St.](https://247wallst.com/investing/2025/06/11/steve-ballmer-makes-1-billion-a-year-in-microsoft-dividends/) - [Billionaire Rankings 2025: The winners and losers in the elite rich list](https://gulfnews.com/world/americas/what-happened-to-bill-gates-how-worlds-top-10-richest-people-fared-in-2025-gainers-and-losers-1.500382553) ### ch10-10: TRUE - Speaker: Karen Hao - Claim: After leaving OpenAI, Elon Musk started xAI. - TLDR: Elon Musk co-founded OpenAI in 2015, departed in 2018, and later founded xAI in March 2023. - Explanation: Musk left OpenAI's board in 2018 following a leadership dispute. He then founded xAI in March 2023 as a direct competitor, explicitly positioning it as a counterweight to OpenAI. This sequence is well-documented across multiple reliable sources. - Sources: - [xAI (company) - Wikipedia](https://en.wikipedia.org/wiki/XAI_(company)) - [Elon Musk launches his own AI company to compete with ChatGPT - ABC News](https://abcnews.go.com/Business/elon-musk-launches-ai-company-compete-chatgpt/story?id=101210078) ### ch10-11: TRUE - Speaker: Karen Hao - Claim: After leaving OpenAI, Dario Amodei started Anthropic. - TLDR: Dario Amodei co-founded Anthropic in 2021 after departing from OpenAI, where he had been VP of Research. 
- Explanation: Dario Amodei left OpenAI along with his sister Daniela and several other senior researchers over differences on AI safety, and together they founded Anthropic in 2021. He serves as CEO of Anthropic today. The name 'Dargo' in the transcript is a transcription error for 'Dario.' - Sources: - [Dario Amodei - Wikipedia](https://en.wikipedia.org/wiki/Dario_Amodei) - [Anthropic - Wikipedia](https://en.wikipedia.org/wiki/Anthropic) ### ch10-12: TRUE - Speaker: Karen Hao - Claim: After leaving OpenAI, Ilya Sutskever started Safe Superintelligence. - TLDR: Ilya Sutskever did leave OpenAI and co-found Safe Superintelligence Inc. in June 2024. - Explanation: Sutskever announced his departure from OpenAI in May 2024 and launched Safe Superintelligence Inc. (SSI) in June 2024 alongside co-founders Daniel Gross and Daniel Levy. The company has since raised over $3 billion and reached a $32 billion valuation. - Sources: - [OpenAI co-founder Ilya Sutskever announces his new AI startup, Safe Superintelligence](https://www.cnbc.com/2024/06/19/openai-co-founder-ilya-sutskever-announces-safe-superintelligence.html) - [Safe Superintelligence Inc. - Wikipedia](https://en.wikipedia.org/wiki/Safe_Superintelligence_Inc.) ### ch10-13: TRUE - Speaker: Karen Hao - Claim: After leaving OpenAI, Mira Murati started Thinking Machines Lab. - TLDR: Mira Murati did found Thinking Machines Lab after departing OpenAI in September 2024. - Explanation: Murati left OpenAI in September 2024 and co-founded Thinking Machines Lab in February 2025 alongside several other former OpenAI employees. The company has since raised $2 billion and is reportedly nearing a $50 billion valuation. - Sources: - [Thinking Machines Lab - Wikipedia](https://en.wikipedia.org/wiki/Thinking_Machines_Lab) - [Mira Murati - Wikipedia](https://en.wikipedia.org/wiki/Mira_Murati) - [Inside Thinking Machines Lab, Mira Murati's New AI Startup | Built In](https://builtin.com/articles/what-is-thinking-machines-lab) ### ch3-1: TRUE - Speaker: Karen Hao - Claim: AI started as a field in 1956. - TLDR: AI was indeed founded as a formal field in 1956 at the Dartmouth Summer Research Project, where John McCarthy coined the term. - Explanation: The 1956 Dartmouth Conference is universally recognized as the founding event of artificial intelligence as a scientific discipline. John McCarthy, then an assistant professor at Dartmouth College, co-authored the proposal and coined the term 'artificial intelligence.' The claim is accurate in all key details. - Sources: - [Dartmouth workshop - Wikipedia](https://en.wikipedia.org/wiki/Dartmouth_workshop) - [Artificial Intelligence (AI) Coined at Dartmouth | Dartmouth](https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth) - [The 1956 Dartmouth Workshop: The Birthplace of AI](https://postquantum.com/ai-security/dartmouth-birth-ai/) ### ch3-2: INEXACT - Speaker: Karen Hao - Claim: A group of scientists gathered at Dartmouth University in 1956 to start the new discipline of artificial intelligence. - TLDR: The 1956 Dartmouth gathering did found AI as a field, but Dartmouth is a College, not a University. - Explanation: The Dartmouth Summer Research Project on Artificial Intelligence (June-August 1956) is widely recognized as the founding event of AI as a scientific discipline. The core claim is accurate. However, Karen Hao refers to 'Dartmouth University' when the institution is Dartmouth College. 
John McCarthy was indeed an assistant professor there and coined the term 'artificial intelligence.' - Sources: - [Dartmouth workshop - Wikipedia](https://en.wikipedia.org/wiki/Dartmouth_workshop) - [Artificial Intelligence (AI) Coined at Dartmouth | Dartmouth](https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth) ### ch3-3: INEXACT - Speaker: Karen Hao - Claim: John McCarthy, an assistant professor at Dartmouth University, named the new discipline 'artificial intelligence.' - TLDR: McCarthy was indeed an assistant professor who coined 'artificial intelligence' at the 1956 Dartmouth workshop, but the institution is Dartmouth College, not Dartmouth University. - Explanation: The 1956 Dartmouth proposal itself lists McCarthy as 'Assistant Professor of Mathematics, Dartmouth College.' He is universally credited with coining the term 'artificial intelligence' for the field. The only inaccuracy in the claim is calling it 'Dartmouth University' when it is officially Dartmouth College. - Sources: - [Dartmouth workshop - Wikipedia](https://en.wikipedia.org/wiki/Dartmouth_workshop) - [Artificial Intelligence (AI) Coined at Dartmouth | Dartmouth](https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth) - [John McCarthy (computer scientist) - Wikipedia](https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)) ### ch3-4: INEXACT - Speaker: Karen Hao - Claim: The year before naming the field 'artificial intelligence,' John McCarthy tried to name it 'automata studies.' - TLDR: McCarthy was associated with 'Automata Studies' before coining 'AI,' but it was a multi-year book project (1952-1955), not a deliberate attempt to name the field just one year earlier. - Explanation: McCarthy co-edited a collection called 'Automata Studies' with Claude Shannon over 1952-1955 (published 1956), and he coined 'artificial intelligence' in the Dartmouth conference proposal dated August 31, 1955. So the two terms overlapped in development rather than being separated by a clean one-year gap. Furthermore, 'Automata Studies' was a book title, not a formal proposal to name the discipline, and McCarthy was actually trying to move away from that framing by suggesting 'Towards Intelligent Automata' (rejected by Shannon) before landing on 'artificial intelligence.' - Sources: - [Artificial Imitation: Did John McCarthy get the term AI from Norbert Wiener?](https://seanmanion.substack.com/p/artificial-imitation-did-john-mccarthy) - [Cybernetics or AI? What's in a Name? | Punya Mishra's Web](https://punyamishra.com/2024/07/10/cybernetics-or-ai-whats-in-a-name/) ### ch3-5: INEXACT - Speaker: Karen Hao - Claim: Some of McCarthy's colleagues were concerned that the name 'artificial intelligence' pegged the discipline to recreating human intelligence. - TLDR: Colleagues did object to the name 'artificial intelligence,' but the documented reasons differ from what is claimed here. - Explanation: Historical sources confirm that colleagues such as Claude Shannon, Herbert Simon, and Allen Newell preferred alternative terms like 'automata studies' or 'complex information processing.' However, their objections were not specifically about the name pegging the field to recreating human intelligence. Rather, they found 'artificial intelligence' too showy or grandiose, and Simon argued they were 'doing AI before, we just called it operations research.'
McCarthy himself actually wanted the field to be defined by recreating human intelligence, making it unlikely his colleagues' primary concern was that the name implied this. - Sources: - [Artificial Imitation: Did John McCarthy get the term AI from Norbert Wiener?](https://seanmanion.substack.com/p/artificial-imitation-did-john-mccarthy) - [Oral-History:John McCarthy - Engineering and Technology History Wiki](https://ethw.org/Oral-History:John_McCarthy) - [Artificial Intelligence · Issue 1.1, Summer 2019](https://hdsr.mitpress.mit.edu/pub/0aytgrau) - [Dartmouth Conference – Naoki Shibuya](https://naokishibuya.github.io/blog/2021-08-10-dartmouth-conference-ai-in-1956/) ### ch3-6: INEXACT - Speaker: Karen Hao - Claim: There is no scientific consensus around what human intelligence is, and no definition from psychology, biology, or neurology. - TLDR: The lack of a single universal consensus on intelligence is well-documented, but saying there are NO definitions from psychology, biology, or neurology is an overstatement. - Explanation: Psychology, neurology, and biology have proposed numerous definitions of intelligence (including the APA's 1995 task force report and the 'Mainstream Science on Intelligence' consensus statement), so the claim that no definitions exist from these fields is inaccurate. However, the broader point holds: no single, universally accepted definition exists, and foundational debates about structure, heritability, and measurement persist. The core assertion about lack of consensus is valid, but the framing that these disciplines have produced zero definitions is too strong. - Sources: - [Intelligence - Wikipedia](https://en.wikipedia.org/wiki/Intelligence) - [Human intelligence | Definition, Types, Test, Theories, & Facts | Britannica](https://www.britannica.com/science/human-intelligence-psychology) - [Neurobiological Definition of Intelligence: A Neuroscience... : Biomedical and Biotechnology Research Journal (BBRJ)](https://journals.lww.com/bbrj/fulltext/2024/08030/neurobiological_definition_of_intelligence__a.1.aspx) - [Intelligence - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC3341646/) ### ch3-7: FALSE - Speaker: Karen Hao - Claim: Every attempt in history to quantify and rank human intelligence has been driven by a desire to prove scientifically that certain groups of people are inferior to other groups of people. - TLDR: The absolute claim that 'every attempt' was driven by racist motives is contradicted by history. Alfred Binet's original 1905 test was explicitly designed to help identify children needing special education, not to rank groups. - Explanation: While many major figures in intelligence testing (Galton, Terman, Yerkes) were eugenicists who used testing to advance racial hierarchy agendas, Alfred Binet's original Binet-Simon scale was created for an educational reform purpose: identifying children who needed support rather than institutionalization. Binet himself opposed the notion of ranking innate intelligence and cautioned against reducing it to a fixed number. The 'every attempt' framing is a well-documented historical overstatement, even though the broader concern about the eugenic entanglement of intelligence testing is substantially accurate. 
- Sources:
  - [History of the race and intelligence controversy - Wikipedia](https://en.wikipedia.org/wiki/History_of_the_race_and_intelligence_controversy)
  - [Intelligence testing, race and eugenics | Wellcome Collection](https://wellcomecollection.org/stories/intelligence-testing--race-and-eugenics)
  - [The Eugenic Origins of IQ Testing: Implications for Post-Atkins](https://via.library.depaul.edu/cgi/viewcontent.cgi?article=1270&context=law-review)
  - [Alfred Binet and the Concept of Heterogeneous Orders - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC3419461/)

### ch3-8: TRUE
- Speaker: Karen Hao
- Claim: OpenAI has defined and redefined AGI multiple times throughout its history.
- TLDR: OpenAI has demonstrably used multiple definitions of AGI across different contexts. Altman's Congressional testimony did invoke AGI-level AI as a tool to cure cancer and fight climate change.
- Explanation: Leaked documents confirmed that the Microsoft-OpenAI agreement defined AGI as a financial milestone (~$100 billion), while OpenAI's public-facing definition describes AGI as a system outperforming humans at most economically valuable work, and Altman has also introduced a five-level internal classification system. Senate testimony from May 2023 confirms Altman described advanced AI as something that could cure cancer and address climate change. The core claim of repeated redefinition is well-supported.
- Sources:
  - [Microsoft and OpenAI have a financial definition of AGI: Report | TechCrunch](https://techcrunch.com/2024/12/26/microsoft-and-openai-have-a-financial-definition-of-agi-report/)
  - [Leaked Documents Show OpenAI Has a Very Clear Definition of 'AGI'](https://gizmodo.com/leaked-documents-show-openai-has-a-very-clear-definition-of-agi-2000543339)
  - [Written Testimony of Sam Altman Chief Executive Officer OpenAI](https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Bio%20&%20Testimony%20-%20Altman.pdf)
  - [How OpenAI's Redefinition of AGI Could Change the World - Geeky Gadgets](https://www.geeky-gadgets.com/openai-profit-driven-agi/)

### ch3-9: INEXACT
- Speaker: Karen Hao
- Claim: When talking with Congress, Sam Altman has described AGI as a system that will cure cancer, solve climate change, and cure poverty.
- TLDR: Altman did invoke cancer, climate change, and poverty in Congressional testimony, but as AI's potential benefits rather than a formal 'definition' of AGI.
- Explanation: In his 2023 Senate Judiciary testimony, Altman said OpenAI is 'working to build tools that one day can help us make new discoveries and address some of humanity's biggest challenges like climate change and curing cancer,' and separately remarked 'I think it'd be good to end poverty.' All three themes appear in his Congressional messaging. However, Karen Hao frames these as his 'definition of AGI' to Congress, when Altman presented them as broader aspirations for AI rather than a definition of AGI specifically.
- Sources:
  - [Transcript: Senate Judiciary Subcommittee Hearing on Oversight of AI | TechPolicy.Press](https://www.techpolicy.press/transcript-senate-judiciary-subcommittee-hearing-on-oversight-of-ai/)
  - [OpenAI CEO Sam Altman Says AI Could End Poverty | PYMNTS.com](https://www.pymnts.com/artificial-intelligence-2/2023/openai-ceo-sam-altman-says-ai-could-end-poverty/)
  - [Written Testimony of Sam Altman Chief Executive Officer OpenAI](https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Bio%20&%20Testimony%20-%20Altman.pdf)

### ch3-10: INEXACT
- Speaker: Karen Hao
- Claim: In the deal between OpenAI and Microsoft in which Microsoft invested in OpenAI, AGI was defined as a system that will generate $100 billion of revenue.
- TLDR: The OpenAI-Microsoft deal does define AGI using a $100 billion financial threshold, but it refers to profits, not revenue.
- Explanation: According to reporting by The Information (December 2024), the OpenAI-Microsoft agreement defines AGI as a system capable of generating $100 billion in profits. Karen Hao correctly identifies the $100 billion figure and the financial nature of this definition, but misstates the metric as 'revenue' rather than 'profits.' The distinction is significant given OpenAI's ongoing losses and its separate (public) definition of AGI on its website.
- Sources:
  - [Microsoft and OpenAI have a financial definition of AGI: Report | TechCrunch](https://techcrunch.com/2024/12/26/microsoft-and-openai-have-a-financial-definition-of-agi-report/)
  - [Microsoft-OpenAI Deal Defines AGI as $100 Billion Profit Milestone - Slashdot](https://slashdot.org/story/24/12/26/1613249/microsoft-openai-deal-defines-agi-as-100-billion-profit-milestone)
  - ['$100B in profits': Microsoft, OpenAI's secret definition of AGI revealed](https://www.newsbytesapp.com/news/science/openai-microsoft-define-agi-by-potential-to-generate-100-billion/story)

### ch3-11: TRUE
- Speaker: Karen Hao
- Claim: On OpenAI's own website, AGI is defined as 'highly autonomous systems that outperform humans in most economically valuable work.'
- TLDR: OpenAI's Charter, published on their website in April 2018, defines AGI with exactly that phrase.
- Explanation: The OpenAI Charter (openai.com/charter) defines AGI as 'highly autonomous systems that outperform humans in most economically valuable work,' matching Karen Hao's quote verbatim. This definition has been publicly available on OpenAI's website since April 2018.
- Sources:
  - [What is the OpenAI Charter?](https://milvus.io/ai-quick-reference/what-is-the-openai-charter)
  - [OpenAI Charter](https://openai.com/charter)

### ch3-12: TRUE
- Speaker: Karen Hao
- Claim: OpenAI's different definitions of AGI are strategically deployed to different audiences in order to ward off regulation, increase consumer buy-in, and attract more capital and resources.
- TLDR: OpenAI's use of different AGI definitions for different audiences is well-documented. Multiple sources confirm distinct definitions for Congress, consumers, and investors like Microsoft.
- Explanation: Reporting corroborates that OpenAI deploys different AGI definitions depending on the audience: a humanitarian framing for regulators and Congress, a product-focused pitch for consumers, and a financial trigger (generating $100 billion in profits) in its contract with Microsoft. TechCrunch and investigative outlets confirm this fluidity.
Karen Hao's analytical conclusion that these shifting definitions serve strategic purposes (warding off regulation, building buy-in, attracting capital) is supported by the documented record.
- Sources:
  - [Microsoft and OpenAI have a financial definition of AGI: Report | TechCrunch](https://techcrunch.com/2024/12/26/microsoft-and-openai-have-a-financial-definition-of-agi-report/)
  - [OpenAI's "Three Faces of AGI" Behind the $110 Billion](https://eu.36kr.com/en/p/3702401759506821)
  - [Empire Building in the Age of AI: Power, Secrecy, and the Battle for Control | Stanford GSB](https://casi.stanford.edu/news/empire-building-age-ai-power-secrecy-and-battle-control)

### ch16-1: INEXACT
- Speaker: Karen Hao
- Claim: AI companies specifically pick some of the most vulnerable communities to build their supercomputer facilities.
- TLDR: A documented pattern exists of data centers clustering in vulnerable, already-polluted communities, but whether companies 'specifically pick' them by deliberate intent is disputed.
- Explanation: Multiple credible analyses (NAACP, WRI, CDC Environmental Justice Index data, TechPolicy.Press) confirm that roughly half of U.S. data centers sit in census tracts with above-median environmental burdens, and advocacy groups explicitly flag this as an environmental justice issue. However, researchers note this correlation reflects historical zoning, cheaper land costs, and lower regulatory opposition rather than proven deliberate targeting. The framing of 'specifically pick' implies intentionality that goes beyond what the available evidence conclusively establishes.
- Sources:
  - [AI boom fuels "environmental justice" fears in communities of color](https://www.axios.com/2025/12/08/ai-civil-rights-black-latino-water-electricity)
  - [Stop Dirty Data Centers | NAACP](https://naacp.org/campaigns/stop-dirty-data-centers)
  - [Data Center Boom Risks Health of Already Vulnerable Communities | TechPolicy.Press](https://www.techpolicy.press/data-center-boom-risks-health-of-already-vulnerable-communities/)
  - [From Energy Use to Air Quality, the Many Ways Data Centers Affect US Communities](https://www.wri.org/insights/us-data-center-growth-impacts)
  - [Trump's Stargate AI Data Center in Texas Sparks Housing Crisis | TIME](https://time.com/7362401/ai-stargate-data-center-abilene-housing-crisis/)

### ch16-2: TRUE
- Speaker: Karen Hao
- Claim: One of OpenAI's largest data center projects is being built in Abilene, Texas, as part of the Stargate initiative.
- TLDR: OpenAI is indeed building one of its largest data center projects in Abilene, Texas, as part of the Stargate initiative announced in January 2025.
- Explanation: Multiple credible sources confirm that Stargate's flagship campus is located in Abilene, Texas, developed by Crusoe and leased to Oracle and OpenAI. The initiative was formally announced on January 21, 2025, by President Trump with a stated goal of investing up to $500 billion in U.S. AI infrastructure, matching Karen Hao's description.
- Sources:
  - [Crusoe tops out final building at OpenAI Stargate data center campus in Abilene, Texas - DCD](https://www.datacenterdynamics.com/en/news/crusoe-tops-out-final-building-at-openai-stargate-data-center-campus-in-abilene-texas/)
  - [Texas becomes the epicenter of OpenAI's $500 billion Stargate Project | Texas Standard](https://www.texasstandard.org/stories/openai-stargate-data-centers-texas-energy-infrastructure/)
  - [Stargate LLC - Wikipedia](https://en.wikipedia.org/wiki/Stargate_LLC)

### ch16-3: TRUE
- Speaker: Karen Hao
- Claim: The Stargate initiative was announced at the beginning of Trump's second administration as an effort to spend $500 billion on AI computing infrastructure.
- TLDR: The Stargate Project was announced on January 21, 2025, the day after Trump's inauguration, as a $500 billion AI infrastructure initiative.
- Explanation: Trump announced the Stargate Project at a White House press conference on January 21, 2025, alongside Sam Altman, Larry Ellison, and Masayoshi Son. The venture, a joint effort by OpenAI, SoftBank, Oracle, and MGX, plans to invest up to $500 billion in U.S. AI computing infrastructure by 2029. The Abilene, Texas data center is explicitly part of this initiative.
- Sources:
  - [Announcing The Stargate Project | OpenAI](https://openai.com/index/announcing-the-stargate-project/)
  - [Stargate: Trump announces a $500 billion AI infrastructure investment in the US | CNN Business](https://www.cnn.com/2025/01/21/tech/openai-oracle-softbank-trump-ai-investment)
  - [Stargate LLC - Wikipedia](https://en.wikipedia.org/wiki/Stargate_LLC)

### ch16-4: INEXACT
- Speaker: Karen Hao
- Claim: OpenAI's facility in Abilene, Texas would be the size of Central Park, run 1 million computer chips, and require the power of more than 20% of New York City.
- TLDR: The Central Park size comparison is confirmed, but the chip count (1 million vs. ~450,000 documented) is overstated, and the facility's power draw equals about 20% of NYC, not 'more than 20%.'
- Explanation: Bloomberg confirmed the Abilene campus is roughly 875 acres, approximately the size of Central Park. However, the documented chip count for the Abilene site is 450,000 Nvidia GB200 GPUs (Bloomberg: up to 400,000); larger counts, such as the 2 million chips reported for the full multi-site Oracle-OpenAI partnership, do not apply to the Abilene site alone. On power, the Abilene facility is capped at 1.2 GW, which equals roughly 20% of New York City's ~6 GW demand, not strictly 'more than' 20%.
- Sources:
  - [Stargate's First Data Center Site is Size of Central Park, With At Least 57 Jobs](https://www.bloomberg.com/news/articles/2025-01-23/stargate-s-first-data-center-to-be-located-in-texas-with-at-least-57-jobs)
  - [OpenAI and Oracle to deploy 450,000 GB200 GPUs at Stargate data center in Abilene, Texas - DCD](https://www.datacenterdynamics.com/en/news/openai-and-oracle-to-deploy-450000-gb200-gpus-at-stargate-abilene-data-center/)
  - [Stargate Abilene Data Center Could Hold 400,000 Nvidia Blackwell Chips - Bloomberg](https://www.bloomberg.com/news/articles/2025-03-18/openai-s-first-stargate-site-to-hold-up-to-400-000-nvidia-chips)
  - [OpenAI and Oracle Cap Texas AI Data Center at 1.2 GW](https://winbuzzer.com/2026/03/09/openai-oracle-cap-texas-ai-data-center-abilene-stargate-xcxwbn/)
  - [OpenAI's New Data Centers Will Draw More Power Than the Entirety of New York City, Sam Altman Says](https://futurism.com/artificial-intelligence/openai-new-data-centers-more-power-new-york-city)
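The 20% figure in ch16-4 reduces to one line of arithmetic. Below is a minimal check using only the figures cited in the entry (the 1.2 GW Abilene cap and NYC's ~6 GW average demand); the variable names are ours, and the inputs are the entry's cited numbers rather than independent measurements.

```python
# Abilene's capped draw as a share of NYC's average demand,
# using the figures cited in ch16-4 (not independent measurements).
abilene_cap_gw = 1.2      # documented power cap for the Abilene facility
nyc_avg_demand_gw = 6.0   # approximate NYC average demand cited above

share = abilene_cap_gw / nyc_avg_demand_gw
print(f"{share:.0%}")     # -> 20%: 'about 20%', not 'more than 20%'
```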
### ch16-5: TRUE
- Speaker: Karen Hao
- Claim: Job reports already show there is a restructuring of the economy happening right now due to AI.
- TLDR: Multiple job market reports confirm AI-driven economic restructuring is already underway, particularly affecting entry-level and routine cognitive roles.
- Explanation: Data from the Dallas Fed, Yale Budget Lab, Challenger, Gray & Christmas, and Anthropic's own labor market research all document measurable shifts attributable to AI, including declining entry-level tech postings, over 54,000 AI-linked layoffs tracked in 2025, and slowing employment growth in AI-exposed occupations. The aggregate picture is nuanced (no economy-wide collapse), but restructuring in specific sectors is well-documented. Hao's characterization of job reports showing ongoing restructuring is consistent with these findings.
- Sources:
  - [Young workers' employment drops in occupations with high AI exposure](https://www.dallasfed.org/research/economics/2026/0106)
  - [Evaluating the Impact of AI on the Labor Market: Current State of Affairs | The Budget Lab at Yale](https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs)
  - [Labor market impacts of AI: A new measure and early evidence](https://www.anthropic.com/research/labor-market-impacts)
  - [How AI Drove 55,000 U.S. Layoffs in 2025 as Tech Giants Cite Automation Efficiencies](https://chiefaiofficer.com/ai-layoffs-2025-job-displacement-workforce-impact/)

### ch16-6: INEXACT
- Speaker: Karen Hao
- Claim: Meta's supercomputer facility is being built in Louisiana and would be 4 times the size of the Abilene, Texas OpenAI facility.
- TLDR: Meta is indeed building a supercomputer facility in Louisiana, and it is significantly larger than OpenAI's Abilene, Texas site, but the ratio is approximately 3x, not 4x.
- Explanation: Meta's Hyperion campus in Richland Parish, Louisiana covers approximately 11 sq km (2,250 acres), while OpenAI's Stargate facility in Abilene, Texas spans approximately 3.5 sq km (875 acres), yielding a ratio of roughly 3:1, not 4:1. The '4 times' figure appears to be a conflation with a separate comparison: multiple sources describe Meta's expanded Hyperion site as 'four times the size of Manhattan's Central Park,' not four times the size of the Abilene facility.
- Sources:
  - [The largest AI data center campuses will soon be a fifth the size of Manhattan | Epoch AI](https://epoch.ai/data-insights/data-center-sizes)
  - [Meta is quietly expanding its $10 billion Hyperion AI data center, now sprawling to four times the size of Manhattan's Central Park | Fortune](https://fortune.com/2026/02/04/meta-hyperion-ai-data-center-louisiana-expansion/)
  - [Meta's $27 billion AI data center is causing chaos in small town Louisiana | Fortune](https://fortune.com/2026/03/26/meta-ai-data-center-hyperion-louisiana/)

### ch16-7: INEXACT
- Speaker: Karen Hao
- Claim: Meta's Louisiana supercomputer facility would use half of the average power demand of New York City and would be one-fifth the size of Manhattan.
- TLDR: The power and size figures are both imprecise. At peak, Hyperion draws ~5 GW, which is roughly NYC's full average demand (not half of it), or about half of NYC's peak demand. The site is about one-quarter of Manhattan, not one-fifth.
- Explanation: Meta's Hyperion facility in Louisiana (3,650 acres, ~14.7 million sq m) covers roughly one-quarter of Manhattan's area per IEEE Spectrum and other sources, not one-fifth. On power, Hyperion's peak draw of 5 GW is closer to NYC's entire average demand (~6 GW), not half of it; the 'half' figure applies to NYC's peak demand (~10-12 GW), which the claim mislabels as 'average.' The general scale comparisons are directionally meaningful but both specific fractions are off.
- Sources:
  - [What Will It Take to Build the World's Largest Data Center? - IEEE Spectrum](https://spectrum.ieee.org/amp/5gw-data-center-2676577917)
  - [To land Meta's massive $10 billion data center, Louisiana pulled out all the stops. Will it be worth it?](https://www.cnbc.com/2025/06/25/meta-massive-data-center-louisiana-cost-jobs-energy-use.html)
  - [New York Electricity Profile 2024 - U.S. Energy Information Administration (EIA)](https://www.eia.gov/electricity/state/newyork/)
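The size and power comparisons in ch16-6 and ch16-7 likewise reduce to a few ratios. The sketch below reruns them with each entry's own cited figures (the two entries cite different footprints for Hyperion, apparently reflecting the site's expansion over time); Manhattan's ~59 sq km land area is a commonly cited figure assumed here, not taken from the entries.

```python
# Ratio checks for ch16-6 and ch16-7, each using its own entry's figures.
hyperion_early_km2 = 11.0   # Hyperion as cited in ch16-6
abilene_km2 = 3.5           # OpenAI Stargate Abilene (~875 acres)
print(f"{hyperion_early_km2 / abilene_km2:.1f}x")   # ~3.1x -> 'roughly 3x, not 4x'

hyperion_km2 = 14.7         # Hyperion as cited in ch16-7 (~3,650 acres)
manhattan_km2 = 59.0        # assumed Manhattan land area
print(f"{hyperion_km2 / manhattan_km2:.2f}")        # ~0.25 -> one-quarter, not one-fifth

hyperion_peak_gw = 5.0      # Hyperion peak draw
nyc_avg_gw = 6.0            # NYC average demand
nyc_peak_gw = 11.0          # midpoint of the ~10-12 GW peak range
print(f"{hyperion_peak_gw / nyc_avg_gw:.0%}")       # ~83% -> close to NYC's full average demand
print(f"{hyperion_peak_gw / nyc_peak_gw:.0%}")      # ~45% -> about half of NYC's peak demand
```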
### ch16-8: TRUE
- Speaker: Karen Hao
- Claim: When AI data center facilities move into communities, power utility costs increase and grid reliability decreases.
- TLDR: Multiple credible sources confirm that AI data centers raise local power utility costs and strain grid reliability in surrounding communities.
- Explanation: Bloomberg, Consumer Reports, and PJM Interconnection data all document rising electricity bills in data-center-heavy areas, with some regions seeing increases up to 267% over five years. Grid reliability concerns are well-documented too, including a July 2024 incident in Virginia where 60 data centers simultaneously disconnected, nearly causing cascading outages, and PJM projecting a reliability shortfall by 2027.
- Sources:
  - [AI Data Centers: Big Tech's Impact on Electric Bills, Water, and More](https://www.consumerreports.org/data-centers/ai-data-centers-impact-on-electric-bills-water-and-more-a1040338678/)
  - [How AI Data Centers Are Sending Your Power Bill Soaring](https://www.bloomberg.com/graphics/2025-ai-data-centers-electricity-prices/)
  - [AI data centers causing "distortions" in US power grid - Bloomberg - DCD](https://www.datacenterdynamics.com/en/news/ai-data-centers-causing-distortions-in-us-power-grid-bloomberg/)
  - [US Electric Grid Heading Toward 'Crisis' Thanks to AI Data Centers | Common Dreams](https://www.commondreams.org/news/data-centers-electric-grid)

### ch16-9: TRUE
- Speaker: Karen Hao
- Claim: AI data center facilities require fresh water both to generate the power needed to run them and to cool their systems.
- TLDR: Data centers do use fresh water for both on-site cooling and indirectly for electricity generation via thermoelectric power plants.
- Explanation: Multiple authoritative sources confirm that data centers have two distinct freshwater footprints: direct use for cooling (evaporative cooling towers) and indirect use through the water-intensive power plants that generate their electricity. One study found the indirect water use for power generation can account for 80% or more of a data center's total water footprint.
- Sources:
  - [Data Centers and Water Consumption | Article | EESI](https://www.eesi.org/articles/view/data-centers-and-water-consumption)
  - [AI, data centers, and water | Brookings](https://www.brookings.edu/articles/ai-data-centers-and-water/)
  - [The Real Story on AI Water Usage at Data Centers - IEEE Spectrum](https://spectrum.ieee.org/ai-water-usage)
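The two water pathways ch16-9 describes can be expressed as a simple decomposition: total water equals energy use times the sum of a direct (cooling) intensity and an indirect (power generation) intensity. The sketch below uses made-up round per-kWh intensities purely for illustration (they are not values from the cited sources), chosen so the indirect share lands in the '80% or more' range the entry mentions.

```python
# Direct vs. indirect freshwater footprint of a hypothetical data center.
# The per-kWh intensities below are illustrative assumptions, not sourced values.
energy_kwh = 1_000_000         # hypothetical monthly energy use (1,000 MWh)
direct_l_per_kwh = 0.5         # assumed on-site (cooling) water intensity
indirect_l_per_kwh = 3.0       # assumed water intensity of the grid electricity used

direct_l = energy_kwh * direct_l_per_kwh
indirect_l = energy_kwh * indirect_l_per_kwh
share = indirect_l / (direct_l + indirect_l)
print(f"indirect share: {share:.0%}")   # -> 86%, consistent with '80% or more'
```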
### ch16-10: TRUE
- Speaker: Karen Hao
- Claim: In Memphis, Tennessee, Musk built Colossus, the supercomputer used for training Grok, using 35 methane gas turbines to power the facility.
- TLDR: All key details check out: xAI's Colossus supercomputer in Memphis, TN uses 35 methane gas turbines to power the facility that trains Grok.
- Explanation: Multiple credible sources (CNBC, NBC News, Inside Climate News, Wikipedia) confirm that xAI's Colossus supercomputer is located in South Memphis, Tennessee, is used to train the Grok AI model, and operates at least 35 methane gas turbines with a combined capacity of approximately 421 MW. The turbines were brought in because the grid connection at the site was only 8 MW.
- Sources:
  - [Colossus (supercomputer) - Wikipedia](https://en.wikipedia.org/wiki/Colossus_(supercomputer))
  - [Musk's xAI scores permit for gas-burning turbines to power Grok supercomputer in Memphis](https://www.cnbc.com/2025/07/03/musks-xai-gets-permit-for-turbines-to-power-supercomputer-in-memphis.html)
  - [In South Memphis, Elon Musk's Colossus Operated Gas Turbines Without Appropriate Permits, Residents and Activists Claim](https://insideclimatenews.org/news/17072025/elon-musk-xai-data-center-gas-turbines-memphis/)
  - [A Tennessee neighborhood takes on Musk's xAI Colossus supercomputer in its fight for clean air](https://www.nbcnews.com/news/us-news/musk-xai-colossus-supercomputer-boxtown-memphis-tennessee-rcna206242)

### ch16-11: TRUE
- Speaker: Karen Hao
- Claim: The Memphis community where Colossus was built is a working-class, Black and brown community that was not told it would host the facility.
- TLDR: Multiple credible sources confirm Colossus was built in Boxtown, a predominantly Black, working-class Memphis neighborhood whose residents were not informed beforehand.
- Explanation: Reporting from NBC News, Time, The Week, and the Southern Environmental Law Center confirms that the Colossus facility was placed in a poor, majority-Black community (Boxtown, first settled by freed enslaved people). Residents, City Council members, and environmental agencies all learned of the project only from local news the day before or the day of the announcement. No meaningful community consultation took place prior to construction.
- Sources:
  - [Up against Musk's Colossus supercomputer, a Memphis neighborhood fights for clean air](https://www.nbcnews.com/news/us-news/musk-xai-colossus-supercomputer-boxtown-memphis-tennessee-rcna206242)
  - [Inside the Memphis Community Battling Elon Musk's xAI](https://time.com/7308925/elon-musk-memphis-ai-data-center/)
  - [Inside a Black community's fight against Elon Musk's supercomputer](https://theweek.com/tech/memphis-black-community-against-supercomputer-elon-musk-xai)
  - [xAI built an illegal power plant to power its data center - Southern Environmental Law Center](https://www.selc.org/news/xai-built-an-illegal-power-plant-to-power-its-data-center/)

### ch16-12: FALSE
- Speaker: Karen Hao
- Claim: The Memphis community where Colossus is located discovered it was hosting the facility because residents smelled what seemed like a gas leak in their homes.
- TLDR: Residents discovered the Colossus facility through news reports, not by smelling gas. The gas smells came later, as a consequence of the unpermitted turbines already in operation.
- Explanation: Multiple sources, including Capital B News, confirm that residents 'first found out the Colossus supercomputer had arrived in their neighborhood through news reports.' Residents did report smelling gas-like or chemical odors near the facility, but these were effects experienced after the facility was built and operating, not the mechanism by which they discovered its existence. The claim reverses the timeline and misidentifies what alerted the community.
- Sources:
  - ['We Deserve to Breathe Clean Air': Memphis Residents Take on Elon Musk's xAI - Capital B News](https://capitalbnews.org/we-deserve-to-breathe-clean-air-memphis-residents-take-on-elon-musks-xai/)
  - [A Tennessee neighborhood takes on Musk's xAI Colossus supercomputer in its fight for clean air](https://www.nbcnews.com/news/us-news/musk-xai-colossus-supercomputer-boxtown-memphis-tennessee-rcna206242)
  - [Elon Musk's xAI facility is polluting South Memphis - Southern Environmental Law Center](https://www.selc.org/news/elon-musks-xai-facility-is-polluting-south-memphis/)

### ch16-13: TRUE
- Speaker: Karen Hao
- Claim: The Memphis community hosting Colossus already had a history of environmental racism and had struggled to access their right to clean air before the facility arrived.
- TLDR: The South Memphis community hosting Colossus (Boxtown) is a historically Black neighborhood that has long faced documented environmental racism and pre-existing air quality crises.
- Explanation: Multiple credible sources confirm the area had 19 active polluting industrial facilities before xAI arrived, including an oil refinery, a coal plant, and a steel mill. Memphis already received an 'F' for ozone from the American Lung Association and was ranked an 'asthma capital,' while Boxtown's cancer risk from air pollution was four times the national average. State Representative Justin Pearson called it 'a clean, clear-cut case of environmental racism.'
- Sources:
  - [A Historic Black Community Takes On the World's Richest Man Over Environmental Racism](https://capitalbnews.org/musk-xai-memphis-black-neighborhood-pollution/)
  - [A Tennessee neighborhood takes on Musk's xAI Colossus supercomputer in its fight for clean air](https://www.nbcnews.com/news/us-news/musk-xai-colossus-supercomputer-boxtown-memphis-tennessee-rcna206242)
  - [Inside the Memphis Community Battling Elon Musk's xAI](https://time.com/7308925/elon-musk-memphis-ai-data-center/)
  - [Elon Musk's xAI facility is polluting South Memphis - Southern Environmental Law Center](https://www.selc.org/news/elon-musks-xai-facility-is-polluting-south-memphis/)

### ch16-14: TRUE
- Speaker: Karen Hao
- Claim: The Colossus supercomputer facility in Memphis is pumping thousands of tons of toxins into the air, exacerbating asthmatic symptoms in children and respiratory illnesses in other residents.
- TLDR: Well-documented. xAI's gas turbines are estimated to emit up to 2,000 tons of nitrogen oxides annually, and the community already leads Tennessee in childhood asthma hospitalizations.
- Explanation: Multiple credible sources (Southern Environmental Law Center, CNN, NBC News, Inside Climate News) confirm that xAI's Colossus facility operated dozens of unpermitted gas turbines emitting nitrogen oxides, formaldehyde, and fine particulate matter. SELC calculations put potential NOx emissions at up to 2,000 tons per year. Shelby County holds an 'F' air quality rating from the American Lung Association and has the highest rate of children hospitalized for asthma in Tennessee, with residents reporting worsened symptoms after Colossus launched.
- Sources:
  - [Elon Musk's xAI facility is polluting South Memphis - Southern Environmental Law Center](https://www.selc.org/news/elon-musks-xai-facility-is-polluting-south-memphis/)
  - [Elon Musk is building 'the world's biggest supercomputer.' It's powered with dozens of gas-powered turbines | CNN](https://www.cnn.com/2025/05/19/climate/xai-musk-memphis-turbines-pollution)
  - [A Tennessee neighborhood takes on Musk's xAI Colossus supercomputer in its fight for clean air](https://www.nbcnews.com/news/us-news/musk-xai-colossus-supercomputer-boxtown-memphis-tennessee-rcna206242)
  - [Inside Memphis' Battle Against Elon Musk's xAI Data Center | TIME](https://time.com/7308925/elon-musk-memphis-ai-data-center/)

### ch16-15: TRUE
- Speaker: Karen Hao
- Claim: The Memphis community where Colossus is located has one of the highest rates of lung cancer.
- TLDR: South Memphis, where Colossus is located, has cancer rates documented at four times the national average, with historical lung cancer rates specifically cited as 2.5 to 3 times the national average.
- Explanation: Multiple sources including ProPublica, the Southern Environmental Law Center, and the NAACP confirm that South Memphis has long suffered from cancer rates far above the national average, driven by decades of industrial pollution. Lung cancer specifically was identified as early as the 1980s at 2.5x the national average, rising to 3x by the 1990s. The area has consistently been cited as one of the most polluted, environmentally overburdened communities in the US, supporting the characterization that it has one of the highest lung cancer rates.
- Sources:
  - [A billionaire, an AI supercomputer, toxic emissions and a Memphis community that did nothing wrong • Tennessee Lookout](https://tennesseelookout.com/2025/07/07/a-billionaire-an-ai-supercomputer-toxic-emissions-and-a-memphis-community-that-did-nothing-wrong/)
  - [Carcinogenic Pollution is Endemic in South Memphis – Vanderbilt Political Review](https://vanderbiltpoliticalreview.com/12091/us/carcinogenic-pollution-is-endemic-in-south-memphis/)
  - [South Memphis Residents Skeptical of Musk's xAI Economic Growth Claims as Pollution Concerns Grow » NCRC](https://ncrc.org/south-memphis-residents-skeptical-of-musks-xai-economic-growth-claims-as-pollution-concerns-grow/)
  - [Elon Musk's xAI supercomputer stirs turmoil over smog in Memphis : NPR](https://www.npr.org/2024/09/11/nx-s1-5088134/elon-musk-ai-xai-supercomputer-memphis-pollution)

### ch15-1: INEXACT
- Speaker: Karen Hao
- Claim: Data annotation is now one of the top jobs on LinkedIn.
- TLDR: Data annotation (data annotator) ranks #4 on LinkedIn's 2026 'Jobs on the Rise' list, confirming the core claim. However, the list contains 25 roles, not 10 as implied.
- Explanation: LinkedIn's 2026 'Jobs on the Rise' report ranks Data Annotator as the 4th fastest-growing role in the US, behind AI Engineer, AI Consultant/Strategist, and New Home Sales Specialist. The speaker says it appears on a 'top 10' list, but the actual LinkedIn report covers 25 roles. The core assertion that data annotation is one of the top growing jobs on LinkedIn is accurate.
- Sources:
  - [LinkedIn Jobs on the Rise 2026: The 25 fastest-growing roles in the U.S.](https://www.linkedin.com/pulse/linkedin-jobs-rise-2026-25-fastest-growing-roles-us-linkedin-news-dlb1c)
  - [AI-related Jobs Top LinkedIn's Fastest-growing Roles List for 2026 | Dice.com Career Advice](https://www.dice.com/career-advice/ai-related-jobs-top-linkedins-fastest-growing-roles-list-for-2026)

### ch15-2: INEXACT
- Speaker: Karen Hao
- Claim: LinkedIn published a report showing the top 10 jobs with the highest growth in the last year, and data annotation is on that list.
- TLDR: LinkedIn's report does include data annotation as a top-growing job (ranked #4), but the report lists 25 fastest-growing roles, not 10.
- Explanation: LinkedIn's 2026 'Jobs on the Rise' report ranks Data Annotator at #4 out of 25 fastest-growing U.S. jobs, so it is within the top 10. However, the claim misstates the scope of the list as 'top 10' when it is actually a top 25 ranking. The core assertion that LinkedIn highlighted data annotation as a fast-growing job is accurate.
- Sources:
  - [LinkedIn Jobs on the Rise 2026: The 25 fastest-growing roles in the U.S.](https://www.linkedin.com/pulse/linkedin-jobs-rise-2026-25-fastest-growing-roles-us-linkedin-news-dlb1c)
  - [The top 10 fastest-growing jobs in the U.S. and where they're hiring the most, according to LinkedIn](https://www.cnbc.com/2026/01/07/the-fastest-growing-jobs-in-the-us-and-where-theyre-hiring-the-most-according-to-linkedin.html)

### ch15-3: INEXACT
- Speaker: Karen Hao
- Claim: ChatGPT's conversational ability was created because tens of thousands or hundreds of thousands of people typed into a large language model to show it how to respond to user prompts.
- TLDR: The mechanism described (human annotators doing RLHF to teach ChatGPT conversational behavior) is accurate, but the scale is vastly overstated. OpenAI's InstructGPT paper documented roughly 40 contractors, not tens of thousands or hundreds of thousands.
- Explanation: ChatGPT's conversational ability was indeed built through supervised fine-tuning and RLHF, where human labelers wrote example responses and ranked model outputs. However, OpenAI's InstructGPT paper (the direct technical predecessor to ChatGPT) explicitly states that approximately 40 contractors were hired via Upwork and Scale AI for this work. A separate outsourcing effort through Sama for toxicity filtering also involved only a few dozen Kenyan workers. The figures of 'tens of thousands or hundreds of thousands' are off by several orders of magnitude.
- Sources:
  - [Training language models to follow instructions with human feedback](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf)
  - [OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | TIME](https://time.com/6247678/openai-chatgpt-kenya-workers/)
  - [OpenAI recruited human contractors to improve GPT-3 • The Register](https://www.theregister.com/2022/04/09/openai_gpt3_contractors/)

### ch15-4: TRUE
- Speaker: Karen Hao
- Claim: Before data annotation work was performed, ChatGPT did not exist in its conversational form; it would generate text that was not in dialogue with the user, only adjacently related to the prompt.
- TLDR: Before instruction tuning and RLHF data annotation, the base GPT model generated text completions rather than conversational responses. This is well-documented in AI literature.
- Explanation: Base GPT-3 was trained to predict the next word on large internet text datasets, not to follow user instructions or engage in dialogue. Human-annotated data and RLHF (reinforcement learning from human feedback) are what transformed it into the conversational ChatGPT. OpenAI's own InstructGPT paper describes how the base model would generate outputs that were 'untruthful, toxic, or reflect harmful sentiments' and often missed the user's intent, while the annotated fine-tuned model became reliably instruction-following.
- Sources:
  - [Aligning language models to follow instructions | OpenAI](https://openai.com/index/instruction-following/)
  - [ChatGPT's Technical Foundations: Transformers to RLHF | IntuitionLabs](https://intuitionlabs.ai/articles/key-innovations-behind-chatgpt)
  - [Illustrating Reinforcement Learning from Human Feedback (RLHF)](https://huggingface.co/blog/rlhf)

### ch15-5: INEXACT
- Speaker: Karen Hao
- Claim: Data annotation is part of the process of reinforcement learning, where a model is shown many examples and then trained on those examples iteratively to acquire capabilities.
- TLDR: Data annotation is more directly tied to supervised learning than reinforcement learning. Its role in RL is specific to RLHF, and the process description Hao gives sounds more like supervised fine-tuning.
- Explanation: Data annotation is the foundation of supervised learning, where labeled examples train a model on input-output pairs. In Reinforcement Learning from Human Feedback (RLHF), annotation does play a role (human raters rank model outputs to train a reward model), so calling it 'part of the process' is not wrong in that narrow context. However, Hao's description, 'showing the model examples of things you want it to know and training on them iteratively,' more accurately describes supervised fine-tuning (SFT) than reinforcement learning, which is fundamentally about reward maximization, not direct example imitation.
- Sources:
  - [Reinforcement learning from human feedback - Wikipedia](https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback)
  - [What is RLHF? - Reinforcement Learning from Human Feedback Explained - AWS](https://aws.amazon.com/what-is/reinforcement-learning-from-human-feedback/)
  - [Illustrating Reinforcement Learning from Human Feedback (RLHF)](https://huggingface.co/blog/rlhf)
  - [The Role of Data Annotation and RLHF to Build Successful LLMs - iMerit](https://imerit.net/resources/blog/the-role-of-data-annotation-and-rlhf-to-build-successful-llms/)
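For readers unfamiliar with the pipeline that ch15-3 through ch15-5 describe, here is a minimal sketch (toy data, no real model) of the two annotation-driven stages: supervised fine-tuning, where labelers write demonstrations the model imitates, and the RLHF reward model, where labelers rank candidate outputs and a reward model is fitted to those rankings with a Bradley-Terry-style objective before reinforcement learning maximizes the learned reward. The example strings and reward scores are invented for illustration.

```python
import math

# Stage 1 (supervised fine-tuning): labeled input/output pairs the model imitates.
sft_examples = [
    {"prompt": "Explain photosynthesis simply.",
     "demonstration": "Plants use sunlight to turn water and CO2 into sugar."},
]

# Stage 2 (RLHF): the same prompt with labeler-ranked outputs; a reward model
# is trained so the 'chosen' response scores higher than the 'rejected' one.
preference_example = {
    "prompt": "Explain photosynthesis simply.",
    "chosen": "Plants use sunlight to turn water and CO2 into sugar.",
    "rejected": "Photosynthesis is a process. Light. Also chlorophyll exists.",
}

def preference_probability(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry probability that the chosen output beats the rejected one."""
    return 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))

# Hypothetical reward scores, showing the objective's shape: training pushes
# this probability toward 1 on labeler rankings; RL then maximizes the reward.
print(f"{preference_probability(2.0, -1.0):.2f}")  # -> 0.95
```

This split is why the wording 'show examples and train on them iteratively' maps onto stage 1 (imitation of demonstrations) rather than onto reinforcement learning proper, which optimizes the learned reward instead of copying examples.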
### ch15-6: INEXACT
- Speaker: Karen Hao
- Claim: Many highly educated people, including college graduates, PhD holders, law degree holders, doctors, and award-winning directors, are struggling to find employment because the economy has been restructured by AI.
- TLDR: A real Verge/New York Magazine piece by Josh Dzieza does report on highly educated workers (PhDs, lawyers, doctors, writers) being absorbed into AI data annotation due to job displacement. Karen Hao's description is accurate in substance, with minor imprecisions.
- Explanation: Josh Dzieza's March 2026 piece 'You Could Be Next' (published in The Verge, in collaboration with New York Magazine) explicitly describes highly educated and underemployed professionals, including those from law, science, medicine, and the arts, turning to AI data annotation gigs. Scale AI claims 700,000 'M.A.'s, Ph.D.'s, and college graduates' on its platform. The core of Karen Hao's characterization is well-supported, though 'award-winning directors' as a specific example is not directly confirmed in available snippets, and the piece is primarily a Verge publication rather than strictly a New York Magazine piece.
- Sources:
  - [New York Magazine on X (tweet about Josh Dzieza's reporting on highly educated underemployed workers in AI data annotation)](https://x.com/NYMag/status/2031346881591406778)
  - [You Could Be Next - Longreads](https://longreads.com/2026/03/17/ai-training-data-gig-economy/)

### ch15-7: INEXACT
- Speaker: Karen Hao
- Claim: OpenAI, Groq, and Google hire third-party data annotation firms to find workers and perform the data annotation tasks they need.
- TLDR: The practice is well-documented for OpenAI and Google, but 'Groq' is almost certainly a transcription error for 'Grok' (xAI), as the two words are phonetically identical and Groq is an inference chip company that does not train its own foundation models.
- Explanation: Multiple sources confirm that OpenAI and Google rely on third-party data annotation firms (Scale AI, Handshake AI, Surge, etc.) to recruit and manage workers for training data tasks. xAI's Grok also uses third-party contractors per its official model card. However, Groq Inc. is an AI inference hardware company that runs open-source models and does not train foundation models, making it an unlikely client of data annotation firms. The auto-generated transcript almost certainly mis-transcribed 'Grok' (xAI) as 'Groq', since both are pronounced identically.
- Sources:
  - [OpenAI is reportedly asking contractors to upload real work from past jobs | TechCrunch](https://techcrunch.com/2026/01/10/openai-is-reportedly-asking-contractors-to-upload-real-work-from-past-jobs/)
  - [The AI Industry Is Traumatizing Desperate Contractors in the Developing World for Pennies](https://futurism.com/artificial-intelligence/ai-industry-traumatizing-contractors)
  - [Scale AI - Wikipedia](https://en.wikipedia.org/wiki/Scale_AI)
  - [xAI reportedly lays off 500 workers from data-annotation team | TechCrunch](https://techcrunch.com/2025/09/13/xai-reportedly-lays-off-500-workers-from-data-annotation-team/)
  - [Groq - Wikipedia](https://en.wikipedia.org/wiki/Groq)

### ch15-8: TRUE
- Speaker: Karen Hao
- Claim: Third-party data annotation firms are incentivized to pit workers against each other and to complete work as quickly and cheaply as possible in order to compete for contracts from AI clients.
- TLDR: This is a well-documented dynamic in the data annotation industry, confirmed by multiple researchers and investigative outlets.
- Explanation: Multiple credible sources, including MIT Technology Review, Brookings Institution, and academic research, confirm that third-party data annotation firms compete aggressively on price and speed to win contracts from AI companies, creating a 'race to the bottom' for worker wages and conditions. Scale AI's Remotasks platform is cited as a specific example of this structural model, where competition among outsourcing firms drives down pay and worker protections globally.
- Sources:
  - [How the AI industry profits from catastrophe | MIT Technology Review](https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/)
  - [Philippines: Scale AI creating 'race to the bottom' as outsourced workers face 'digital sweatshop' conditions - Business and Human Rights Centre](https://www.business-humanrights.org/en/latest-news/philippines-scale-ai-creating-race-to-the-bottom-as-outsourced-workers-face-poor-conditions-in-digital-sweatshops-incl-low-wages-withheld-payments/)
  - [Global data empires: Analysing artificial intelligence data annotation in China and the USA](https://journals.sagepub.com/doi/10.1177/20539517251340600)
  - [Reimagining the future of data and AI labor in the Global South | Brookings](https://www.brookings.edu/articles/reimagining-the-future-of-data-and-ai-labor-in-the-global-south/)

### ch15-9: TRUE
- Speaker: Karen Hao
- Claim: Data annotation workers interviewed for the New York Magazine story report waiting at their laptops on Slack for projects to open, because they cannot find other work and data annotation is their primary source of income.
- TLDR: A March 2026 New York Magazine/Verge piece by Josh Dzieza confirms workers waiting on Slack for annotation projects and depending on that income for basic needs.
- Explanation: The article 'You Could Be Next' (Dzieza, The Verge/NYMag, March 2026) documents workers dropping everything at Slack notifications for incoming tasks, a single mother who needed the work to pay bills, and workers in Slack channels discussing rent and their children's needs. The portrayal of workers as highly educated but underemployed, with annotation as a primary income source, is consistent with Hao's description. The specific anecdote about a child coming home from school could not be verified from accessible text, but the core elements of the claim are well-supported.
- Sources:
  - [You Could Be Next - Longreads](https://longreads.com/2026/03/17/ai-training-data-gig-economy/)
  - [AI Is a Lot of Work, By Josh Dzieza, June 20, 2023 New York - NowComment](https://nowcomment.com/documents/350260)
  - [Teaching AI to think like a human - Marketplace](https://www.marketplace.org/episode/teaching-ai-to-think-like-a-human/)

### ch15-10: INEXACT
- Speaker: Steven Bartlett
- Claim: Anthropic has predicted that workers in industries including arts and media, legal, life and social sciences, architecture and engineering, computer and mathematics, business and finance, management, and office and admin will be disrupted by AI.
- TLDR: Anthropic's March 2026 labor market report does identify all those industries as highly exposed to AI, but frames it as 'exposure' rather than a firm 'prediction of disruption.'
- Explanation: Anthropic's report 'Labor Market Impacts of AI: A New Measure and Early Evidence' lists management (91.3%), office and admin (90%), legal (89%), architecture and engineering (84.8%), arts and media (83.7%), life and social sciences (77%), computer and math, and business and finance as among the most theoretically exposed categories. All industries Bartlett names are confirmed. However, Anthropic explicitly found no systematic increase in unemployment yet and frames its findings as measuring 'exposure,' not a prediction that these workers will necessarily be disrupted.
- Sources:
  - [Labor market impacts \ Anthropic](https://www.anthropic.com/research/labor-market-impacts)
  - [Anthropic just mapped out which jobs AI could potentially replace. A 'Great Recession for white-collar workers' is absolutely possible | Fortune](https://fortune.com/2026/03/06/ai-job-losses-report-anthropic-research-great-recession-for-white-collar-workers/)
  - [How AI will reshape work: Anthropic identifies the most exposed jobs | Euronews](https://www.euronews.com/business/2026/03/14/how-ai-will-reshape-work-anthropic-identifies-the-most-exposed-jobs)

### ch15-11: INEXACT
- Speaker: Steven Bartlett
- Claim: Unlike the Industrial Revolution, where workers had 10 to 20 years to retrain because factories take a long time to build, AI is deployed on the open internet, enabling near-instant mass disruption.
- TLDR: The core contrast is valid, but the Industrial Revolution actually took far longer than 10-20 years to unfold, making the specific figure an underestimate rather than an overstatement.
- Explanation: Historians place the Industrial Revolution between roughly 1760 and 1840 (80 years) in Britain alone, with its full global spread taking over a century. Workers and societies had far more than 10-20 years to adapt, not less. The broader claim that AI, distributed instantly via the internet, can disrupt at a dramatically faster pace than factory-based industrialization is well-supported across multiple analyses.
- Sources:
  - [Industrial Revolution - Wikipedia](https://en.wikipedia.org/wiki/Industrial_Revolution)
  - [AI and the industrial revolution: Similarities, differences and lessons | VoxDev](https://voxdev.org/topic/technology-innovation/ai-and-industrial-revolution-similarities-differences-and-lessons)
  - [Does the Rise of AI Compare to the Industrial Revolution? 'Almost,' Research Suggests | Columbia Business School](https://business.columbia.edu/research-brief/research-brief/ai-industrial-revolution)

### ch15-12: INEXACT
- Speaker: Steven Bartlett
- Claim: ChatGPT gained hundreds of millions of users extremely rapidly and became the fastest-growing company of all time.
- TLDR: ChatGPT is indeed the fastest-growing consumer app in history, reaching 100 million users in 2 months. However, the record applies to the app/product, not OpenAI as a 'company.'
- Explanation: UBS analysts, citing Similarweb data, confirmed ChatGPT reached 100 million monthly active users in January 2023, just two months after launch, making it the fastest-growing consumer application ever. For comparison, TikTok took 9 months and Instagram 2.5 years to reach the same milestone. The core claim about explosive growth to hundreds of millions of users is accurate, but Bartlett's phrasing of 'fastest-growing company' is imprecise since sources consistently describe ChatGPT as the fastest-growing app or consumer internet app.
- Sources:
  - [UBS: ChatGPT is the Fastest Growing App of All Time](https://aibusiness.com/nlp/ubs-chatgpt-is-the-fastest-growing-app-of-all-time)
  - [Why ChatGPT Is the Fastest Growing Web Platform Ever | TIME](https://time.com/6253615/chatgpt-fastest-growing/)
  - [ChatGPT sets record for fastest-growing user base | Microsoft Community Hub](https://techcommunity.microsoft.com/t5/itops-talk/chatgpt-sets-record-for-fastest-growing-user-base/td-p/3733917)
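The 'fastest-growing' comparison in ch15-12 can be made concrete by converting each cited time-to-100M-users into an average monthly signup rate; the month counts below are the ones cited in the entry, and the averaging is ours.

```python
# Average new users per month implied by each product's time to 100M users,
# using the figures cited in ch15-12 (30 months ~ Instagram's 2.5 years).
months_to_100m = {"ChatGPT": 2, "TikTok": 9, "Instagram": 30}
for product, months in months_to_100m.items():
    print(f"{product}: ~{100 // months}M new users/month")  # 50M vs 11M vs 3M
```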
### ch15-13: TRUE
- Speaker: Karen Hao
- Claim: AI companies, through their race with one another, are driving the speed of AI transition at a pace that makes it very hard to care for people displaced by AI.
- TLDR: This is a widely documented concern backed by major institutions. The competitive race among AI companies is broadly acknowledged to be outpacing society's ability to support displaced workers.
- Explanation: The WEF, McKinsey, Goldman Sachs, and multiple economists confirm that competitive AI deployment is accelerating faster than social safety nets and retraining programs can respond. WEF modeling notes that even in optimistic scenarios, 'ethics and governance frameworks struggle to keep up.' The distributional mismatch between jobs destroyed and jobs created is a core finding across institutional research.
- Sources:
  - [AI paradoxes: Why AI's future isn't straightforward | World Economic Forum](https://www.weforum.org/stories/2025/12/ai-paradoxes-in-2026/)
  - [The overlooked global risk of the AI precariat | World Economic Forum](https://www.weforum.org/stories/2025/08/the-overlooked-global-risk-of-the-ai-precariat/)
  - [AI could trigger a global jobs market collapse by 2027 if left unchecked, former Google ethicist warns | Fortune](https://fortune.com/2026/02/10/ai-taking-jobs-report-tristan-harris-google-ethicist-agi-technology/)
  - [AI in the workplace: A report for 2025 | McKinsey](https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work)

### ch15-14: TRUE
- Speaker: Steven Bartlett
- Claim: Uber CEO Dara suggested that displaced drivers could find data labeling jobs as alternative employment, though not all drivers could become data labelers.
- TLDR: Dara Khosrowshahi did suggest data labeling jobs as an alternative for displaced Uber drivers, and Uber launched a pilot program to that effect in late 2025.
- Explanation: Uber announced its AI Solutions Group initiative in October 2025, offering drivers tasks like photo uploads, voice recordings, and AI response evaluation as supplemental income. Khosrowshahi explicitly framed this as a way to help drivers displaced by robotaxis, consistent with what Bartlett describes. Khosrowshahi also appeared on Diary of a CEO, making the conversation referenced in the transcript directly verifiable.
- Sources:
  - [Uber will offer gig work like AI data labeling to drivers while not on the road](https://www.cnbc.com/2025/10/16/uber-will-offer-us-drivers-more-gig-work-including-ai-data-labeling.html)
  - [Uber CEO predicts most rides could be robot-operated within 20 years | Fortune](https://fortune.com/2026/02/23/uber-ceo-dara-khosrowshahi-robotaxis-autonomous-vehicles-diary-of-a-ceo-podcast/)
  - [Uber Pilots AI Data Labeling Jobs for Drivers During Downtime](https://www.techbuzz.ai/articles/uber-pilots-ai-data-labeling-jobs-for-drivers-during-downtime)

### ch15-15: TRUE
- Speaker: Karen Hao
- Claim: AI companies are creating technologies that exacerbate existing inequality, giving those who already have resources significantly more wealth and free time while further squeezing those who lack resources.
- TLDR: Multiple academic studies and institutional reports broadly support the claim that AI exacerbates existing inequality, benefiting resource-rich individuals and companies while squeezing lower-income workers.
- Explanation: Research from the IMF, peer-reviewed journals, and organizations like the Center for Global Development consistently finds that AI disproportionately benefits capital owners and high-skill workers while displacing lower-skilled labor, intensifying wealth disparities. Karen Hao's book 'Empire of AI' documents this pattern empirically, including underpaid data workers in the Global South and communities bearing the environmental costs of data centers. Some studies note AI could reduce wage inequality for certain high-income workers, but the dominant short-term finding is that AI widens the gap between haves and have-nots.
- Sources:
  - [AI Adoption and Inequality](https://www.imf.org/en/publications/wp/issues/2025/04/04/ai-adoption-and-inequality-565729)
  - [Three Reasons Why AI May Widen Global Inequality | Center For Global Development](https://www.cgdev.org/blog/three-reasons-why-ai-may-widen-global-inequality)
  - [Artificial Inequality: AI is exacerbating career, income, and gender divides](https://www.theadaptavistgroup.com/company/press/artificial-inequality-ai-is-exacerbating-career-income-and-gender-divides-global)
  - [Empire of AI by Karen Hao explores global costs of AI progress - Rest of World](https://restofworld.org/2025/karen-hao-empire-of-ai-book/)
  - ["Empire of AI": Karen Hao on How AI Is Threatening Democracy & Creating a New Colonial World | Democracy Now!](https://www.democracynow.org/2026/1/1/empire_of_ai_karen_hao_on)

### ch4-1: TRUE
- Speaker: Steven Bartlett
- Claim: Sam Altman wrote a blog post in 2015 before OpenAI was officially announced that outlined the existential risk of AI.
- TLDR: Sam Altman's 2015 blog post 'Machine Intelligence Part 1' does contain that exact quote about existential risk, and it was published before OpenAI's December 2015 announcement.
- Explanation: The post at blog.samaltman.com/machine-intelligence-part-1 states: 'Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity,' and also references engineered viruses as more certain but less total threats. Multiple sources, including a LessWrong linkpost, confirm the post predates OpenAI's founding. The quotes Bartlett cites match the source accurately.
- Sources:
  - [Machine intelligence, part 1 - Sam Altman](https://blog.samaltman.com/machine-intelligence-part-1)
  - [[Linkpost] Sam Altman's 2015 Blog Posts Machine Intelligence](https://www.lesswrong.com/posts/QnBZkNJNbJK9k5Xi7/linkpost-sam-altman-s-2015-blog-posts-machine-intelligence)

### ch4-2: TRUE
- Speaker: Steven Bartlett
- Claim: In that 2015 blog post, Altman wrote that development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.
- TLDR: The quote is accurate. Altman's 2015 blog post 'Machine Intelligence, Part 1' states exactly that development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.
- Explanation: The post, published on blog.samaltman.com before OpenAI's official December 2015 announcement, contains the line verbatim (with the minor addition of the abbreviation 'SMI' in parentheses). Multiple sources independently confirm this quote and its origin.
- Sources:
  - [Machine intelligence, part 1 - Sam Altman](https://blog.samaltman.com/machine-intelligence-part-1)
  - [[Linkpost] Sam Altman's 2015 Blog Posts Machine Intelligence - LessWrong](https://www.lesswrong.com/posts/QnBZkNJNbJK9k5Xi7/linkpost-sam-altman-s-2015-blog-posts-machine-intelligence)

### ch4-3: INEXACT
- Speaker: Steven Bartlett
- Claim: In the 2015 blog post, Altman wrote that AI is probably the most likely way to destroy everything, while noting that engineered viruses are more certain to happen.
- TLDR: Altman's 2015 post does call AI the greatest existential threat and says engineered viruses are more certain to happen, but the exact wording differs from how Bartlett quotes it.
- Explanation: The actual post states: 'Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity,' and that engineered viruses are 'more certain to happen' but 'unlikely to destroy every human in the universe in the way that SMI could.' Bartlett's paraphrase ('most likely way to destroy everything') captures the spirit but is not a direct quote. The core substance of the claim is accurate.
- Sources:
  - [Machine intelligence, part 1 - Sam Altman](https://blog.samaltman.com/machine-intelligence-part-1)

### ch4-4: INEXACT
- Speaker: Karen Hao
- Claim: When Altman wrote the 2015 blog post, he was trying to convince Elon Musk to join him in co-founding OpenAI.
- TLDR: Altman did write a 2015 blog post using existential risk language while working to co-found OpenAI with Musk, but documented persuasion of Musk happened primarily via direct email and meetings, not through the public blog post.
- Explanation: Sam Altman wrote a 2015 blog post ('Machine Intelligence') using strong existential risk language that mirrors Musk's concerns, and historical records confirm Altman initiated the co-founding effort by emailing Musk in March 2015. However, framing the blog post itself as the instrument for convincing Musk is an interpretive claim not clearly supported by documentary evidence. The initial outreach was a private email proposal, and the blog post was a public-facing piece rather than a direct persuasion tool aimed at Musk.
- Sources:
  - [Machine intelligence, part 1 - Sam Altman](https://blog.samaltman.com/machine-intelligence-part-1)
  - [Machine intelligence, part 2 - Sam Altman](https://blog.samaltman.com/machine-intelligence-part-2)
  - [OpenAI - Wikipedia](https://en.wikipedia.org/wiki/OpenAI)
  - [Altman and Musk launched OpenAI as a nonprofit 10 years ago. Now they're rivals in a trillion-dollar market](https://www.cnbc.com/2025/12/11/openai-began-decade-ago-as-nonprofit-lab-musk-and-altman-now-rivals.html)
### ch4-5: TRUE
- Speaker: Karen Hao
- Claim: The language Altman used in the 2015 blog post mirrors the language Musk was using at the time about AI as an existential threat.
- TLDR: Altman's 2015 blog post used near-identical existential risk framing to Musk's well-documented warnings at the time.
- Explanation: Altman's personal 2015 blog post 'Machine Intelligence, Part 1' stated that 'development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.' Musk, in the same period, was calling AI 'the biggest existential threat' and in 2014 described AI development as 'summoning the demon.' The parallel framing around AI as an existential/extinction-level risk to humanity is well documented in both cases.
- Sources:
  - [Machine intelligence, part 1 - Sam Altman](https://blog.samaltman.com/machine-intelligence-part-1)
  - [Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk' : NPR](https://www.npr.org/sections/thetwo-way/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk)
  - [The chaos inside OpenAI – Sam Altman, Elon Musk, and existential risk explained - Big Think](https://bigthink.com/videos/openai-chaos-explained/)

### ch4-6: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: Before writing the 2015 blog post, Altman had been primarily talking about engineered viruses as a major threat, not AI.
- TLDR: No publicly available pre-2015 statements by Altman confirm that engineered viruses were his primary focus over AI. The claim rests on Hao's private reporting.
- Explanation: Altman's February 2015 blog post 'Machine Intelligence, Part 1' does include a parenthetical noting engineered viruses as threats 'more certain to happen,' which is consistent with Hao's framing of a prior concern. However, no archived pre-2015 public statements, speeches, or blog posts by Altman were found establishing that engineered viruses were his *primary* existential concern rather than AI. The claim appears to derive from Hao's non-public sourcing (300+ interviews for her book 'Empire of AI') and cannot be independently verified from publicly available evidence.
- Sources:
  - [Machine intelligence, part 1 - Sam Altman](https://blog.samaltman.com/machine-intelligence-part-1)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - ['I Prep for Survival': OpenAI CEO Sam Altman Worries About The 'Nonzero' Chance The World Will End From 'a Lethal Synthetic Virus'](https://finance.yahoo.com/news/prep-survival-openai-ceo-sam-135313196.html)

### ch4-7: TRUE
- Speaker: Karen Hao
- Claim: Elon Musk co-founded OpenAI with Sam Altman.
- TLDR: Elon Musk and Sam Altman did co-found OpenAI together in December 2015, serving as co-chairs.
- Explanation: OpenAI was founded in December 2015 by a group that included Musk and Altman as co-chairs, alongside Ilya Sutskever, Greg Brockman, and others. Musk later resigned from the board in 2018 and has since become a rival. The co-founding relationship is confirmed by multiple sources including OpenAI itself and Wikipedia.
- Sources:
  - [OpenAI - Wikipedia](https://en.wikipedia.org/wiki/OpenAI)
  - [Altman and Musk launched OpenAI as a nonprofit 10 years ago. Now they're rivals in a trillion-dollar market](https://www.cnbc.com/2025/12/11/openai-began-decade-ago-as-nonprofit-lab-musk-and-altman-now-rivals.html)
Now they're rivals in a trillion-dollar market](https://www.cnbc.com/2025/12/11/openai-began-decade-ago-as-nonprofit-lab-musk-and-altman-now-rivals.html) - [OpenAI and Elon Musk | OpenAI](https://openai.com/index/openai-elon-musk/) ### ch4-8: TRUE - Speaker: Karen Hao - Claim: Musk believes Altman engineered his language in order to gain Musk's trust as a partner in founding OpenAI. - TLDR: Musk's lawsuit against Altman and OpenAI explicitly alleges he was 'assiduously manipulated' by Altman's language and promises about OpenAI's nonprofit mission to gain his trust as a co-founder. - Explanation: Musk's federal lawsuit claims Altman 'assiduously manipulated Musk into co-founding their spurious non-profit venture' by promising a safety-focused, open, nonprofit structure. The suit describes this as 'Altman's long con,' directly supporting Hao's characterization that Musk believes Altman engineered his language to secure Musk's trust and participation. The case is heading to trial in April 2026. - Sources: - [Elon Musk sues OpenAI again, claims he was tricked into helping form company | Courthouse News Service](https://www.courthousenews.com/elon-musk-sues-openai-again-claims-he-was-tricked-into-helping-form-company/) - [Can OpenAI Survive Elon Musk?](https://time.com/7353391/elon-musk-sam-altman-openai-trial/) - [Musk, OpenAI lawyers trade barbs as lawsuit heads to trial](https://www.cnbc.com/2026/01/08/musk-openai-altman-lawsuit-trial.html) ### ch4-9: TRUE - Speaker: Karen Hao - Claim: Documents that emerged from the Musk-Altman lawsuit revealed that Musk was muscled out of OpenAI to some degree. - TLDR: Unsealed lawsuit documents do support the claim that Musk was pushed out of OpenAI to some degree. Greg Brockman's diary entry, 'This is the only chance we have to get out from Elon,' is a key piece of evidence. - Explanation: Court filings and unsealed discovery materials from the Musk v. Altman lawsuit reveal that after Musk proposed merging OpenAI into Tesla to gain personal control, the board rejected him. Internal OpenAI records state his departure 'would have removed the impasse caused by his need for absolute control,' and Brockman's personal diary entry explicitly framed his exit as an opportunity to escape Musk's influence. These documents substantiate the claim that Musk was, at least in part, pushed out. - Sources: - [OpenAI Lawsuit Exposed: The Private Diaries, Secret Texts and 500 Billion Fraud Case Going to Trial in 2026](https://www.techbuzz.ai/articles/open-ai-lawsuit-exposed-the-private-diaries-secret-texts-and-500-billion-fraud-case-going-to-trial-in-2026) - [Musk v. Altman: The $134 Billion OpenAI Trial Explained | Let's Data Science](https://letsdatascience.com/blog/musk-sued-openai-for-134-billion-the-jury-decides-in-34-days) - [The truth Elon left out | OpenAI](https://openai.com/index/the-truth-elon-left-out/) - [Unsealed Court Documents Reveal Billionaires' Deliberations, Messy Texts](https://www.hardresetmedia.com/p/unsealed-court-documents-reveal-billionaires-deliberations-openai) ### ch4-10: TRUE - Speaker: Karen Hao - Claim: There is an ongoing lawsuit between Musk and Altman. - TLDR: An active lawsuit between Musk and Altman (and OpenAI) is confirmed, with trial scheduled for April 27, 2026. - Explanation: Elon Musk filed suit against Sam Altman and OpenAI alleging fraudulent breach of the nonprofit founding agreement. A federal judge ruled in January 2026 that the case would go to trial, with proceedings set to begin April 27, 2026. 
Documents unsealed during discovery have indeed shed light on the circumstances of Musk's departure from OpenAI, as Karen Hao describes. - Sources: - [Musk, OpenAI lawyers trade barbs as lawsuit heads to trial](https://www.cnbc.com/2026/01/08/musk-openai-altman-lawsuit-trial.html) - [Can OpenAI Survive Elon Musk?](https://time.com/7353391/elon-musk-sam-altman-openai-trial/) - [Musk v. Altman: The $134 Billion OpenAI Trial Explained | Let's Data Science](https://letsdatascience.com/blog/musk-sued-openai-for-134-billion-the-jury-decides-in-34-days) ### ch4-11: INEXACT - Speaker: Steven Bartlett - Claim: In 2015, Musk gave speeches at MIT calling AI the biggest existential threat. - TLDR: Musk did give a famous speech at MIT warning AI was the biggest existential threat, but it was in October 2014, not 2015. - Explanation: At MIT's AeroAstro centennial symposium on October 24, 2014, Musk called AI humanity's 'biggest existential threat' and used the 'summoning the demon' analogy. The substance of the claim is accurate, but the year stated (2015) is wrong by one year. - Sources: - [Elon Musk: Artificial Intelligence Is Humanity's 'Biggest Existential Threat' | Live Science](https://www.livescience.com/48481-elon-musk-artificial-intelligence-threat.html) - [Elon Musk Warns Artificial Intelligence Is Like 'Summoning the Demon'](https://time.com/3541005/elon-musk-artificial-intelligence/) ### ch4-12: INEXACT - Speaker: Steven Bartlett - Claim: In those 2015 MIT speeches, Musk compared developing AI to summoning the demon. - TLDR: Musk did make the 'summoning the demon' comparison at MIT, but in October 2014, not 2015. - Explanation: At the MIT Aeronautics and Astronautics Department's Centennial Symposium in October 2014, Musk said: 'With artificial intelligence we are summoning the demon.' The quote and the MIT venue are accurate, but Bartlett places the speech in 2015, which is off by about a year. - Sources: - [Elon Musk: 'With artificial intelligence we are summoning the demon.' - The Washington Post](https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/) - [Elon Musk Compares Building Artificial Intelligence To "Summoning The Demon" | TechCrunch](https://techcrunch.com/2014/10/26/elon-musk-compares-building-artificial-intelligence-to-summoning-the-demon/) - [Elon Musk warns against unleashing artificial intelligence 'demon'](https://money.cnn.com/2014/10/26/technology/elon-musk-artificial-intelligence-demon/) ### ch4-13: TRUE - Speaker: Karen Hao - Claim: OpenAI was originally founded as a nonprofit. - TLDR: OpenAI was indeed founded as a nonprofit in 2015. - Explanation: OpenAI was established in December 2015 as a nonprofit research lab by Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever, and others, with a mission to develop AI for the benefit of humanity. It only created a capped for-profit subsidiary in 2019 to attract the capital needed to scale research. - Sources: - [OpenAI - Wikipedia](https://en.wikipedia.org/wiki/OpenAI) - [Altman and Musk launched OpenAI as a nonprofit 10 years ago. Now they're rivals in a trillion-dollar market](https://www.cnbc.com/2025/12/11/openai-began-decade-ago-as-nonprofit-lab-musk-and-altman-now-rivals.html) - [Our structure | OpenAI](https://openai.com/our-structure/) ### ch4-14: TRUE - Speaker: Karen Hao - Claim: Ilya Sutskever was the chief scientist of OpenAI at the time of the decision to create a for-profit entity. 
- TLDR: Ilya Sutskever was indeed OpenAI's Chief Scientist from its founding in 2015, well before the for-profit entity (OpenAI LP) was created in 2019.
- Explanation: Sutskever co-founded OpenAI in 2015 and immediately held the title of Chief Scientist. OpenAI LP, the capped-profit entity, was formed in 2019, at which point Sutskever was still serving as Chief Scientist. Multiple sources confirm his tenure in that role throughout this period.
- Sources:
  - [Ilya Sutskever - Wikipedia](https://en.wikipedia.org/wiki/Ilya_Sutskever)
  - [OpenAI executive Ilya Sutskever is out after key role in CEO Sam Altman's ouster | CNN Business](https://edition.cnn.com/2024/05/14/tech/openai-chief-scientist-ilya-sutskever-departs/index.html)

### ch4-15: TRUE

- Speaker: Karen Hao
- Claim: Greg Brockman was the chief technology officer of OpenAI at the time of the for-profit transition decision.
- TLDR: Greg Brockman was indeed OpenAI's CTO from its 2015 founding through roughly 2019, covering the period when the for-profit transition was deliberated.
- Explanation: Multiple sources confirm Brockman held the CTO title at OpenAI from 2015 until approximately 2019, when the capped-profit structure was established and he became President. The for-profit transition discussions took place in the 2017-2019 window, so his title of CTO at the time of those deliberations is accurate.
- Sources:
  - [Greg Brockman - Wikipedia](https://en.wikipedia.org/wiki/Greg_Brockman)
  - [Greg Brockman: OpenAI](https://digidai.github.io/2025/11/28/greg-brockman-openai-cofounder-president-builder-chief-deep-analysis/)

### ch4-16: TRUE

- Speaker: Karen Hao
- Claim: Musk and Altman were the two co-chairmen of the nonprofit OpenAI.
- TLDR: Musk and Altman were indeed the two co-chairs of the nonprofit OpenAI when it was founded in December 2015.
- Explanation: Multiple sources, including Wikipedia and CNBC, confirm that Sam Altman and Elon Musk served as co-chairs of the OpenAI nonprofit at its founding in 2015. Musk later departed from the board in 2018, citing a potential conflict of interest with his role at Tesla.
- Sources:
  - [OpenAI - Wikipedia](https://en.wikipedia.org/wiki/OpenAI)
  - [Altman and Musk launched OpenAI as a nonprofit 10 years ago. Now they're rivals in a trillion-dollar market](https://www.cnbc.com/2025/12/11/openai-began-decade-ago-as-nonprofit-lab-musk-and-altman-now-rivals.html)

### ch4-17: TRUE

- Speaker: Karen Hao
- Claim: Emails revealed that Ilya Sutskever and Greg Brockman initially chose Musk to be the CEO of the new for-profit entity.
- TLDR: Multiple sources confirm that internal emails revealed Sutskever and Brockman initially favored Musk as CEO of OpenAI's for-profit entity, before Altman persuaded Brockman otherwise.
- Explanation: Karen Hao's book 'Empire of AI' and reporting on the emails disclosed in the Musk v. OpenAI lawsuit both confirm that Ilya Sutskever and Greg Brockman initially chose Musk as the preferred leader for the for-profit entity. Altman then personally lobbied Brockman, arguing it would be dangerous to put Musk in charge, ultimately flipping the decision in his favor.
- Sources:
  - [Tense Emails Reveal How Elon Musk, Sam Altman, Ilya Sutskever and Greg Brockman Had Negotiated Over OpenAI's Structure](https://officechai.com/stories/tense-emails-reveal-how-elon-musk-sam-altman-ilya-sutskever-and-greg-brockman-had-negotiated-over-openais-structure/)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)
  - [New court filing: OpenAI says Elon Musk wanted to own and run it as a for-profit](https://www.axios.com/2024/12/13/elon-musk-openai-altman-lawsuit-filing-nonprofit)

### ch4-18: UNVERIFIABLE

- Speaker: Karen Hao
- Claim: Altman personally appealed to Greg Brockman to reconsider having Musk as CEO of the for-profit entity, arguing it would be dangerous to give Musk control of powerful AI technology.
- TLDR: Hao's book does report Altman was 'very persuasive to Brockman' about why Musk would be dangerous, but the specific private appeal and its exact framing cannot be confirmed from primary sources.
- Explanation: Court documents and emails from the Musk v. OpenAI lawsuit confirm that in fall 2017 Musk sought CEO control and majority equity of a for-profit entity, and that Brockman and Sutskever jointly raised concerns about Musk's desire for 'absolute control.' However, the documents show Brockman and Sutskever expressing reservations together, rather than Altman privately convincing Brockman, who then convinced Sutskever. Hao's book does contain the narrative of Altman persuading Brockman that Musk would be dangerous, but it rests on anonymous sources, and the specific 'personal appeal' with the 'dangerous AI technology in Musk's hands' framing can be neither confirmed nor refuted from publicly available records.
- Sources:
  - [Elon Musk wanted an OpenAI for-profit | OpenAI](https://openai.com/index/elon-musk-wanted-an-openai-for-profit/)
  - [OpenAI emails show Elon Musk wanted for-profit structure in 2017](https://www.cnbc.com/2024/12/13/openai-says-elon-musk-wanted-it-to-be-for-profit-in-2017.html)
  - [Unsealed Court Documents Reveal Billionaires' Deliberations, Messy Texts](https://www.hardresetmedia.com/p/unsealed-court-documents-reveal-billionaires-deliberations-openai)
  - [Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao | Goodreads](https://www.goodreads.com/book/show/222725518-empire-of-ai)
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)

### ch4-19: TRUE

- Speaker: Karen Hao
- Claim: Altman and Greg Brockman had known each other for many years through the Silicon Valley scene.
- TLDR: Altman and Brockman met around 2011 through Stripe's Y Combinator connection and knew each other for roughly four years before co-founding OpenAI in 2015.
- Explanation: Patrick Collison introduced Brockman to Altman in 2011, when Brockman was CTO of Stripe (a Y Combinator company) and Altman was embedded in the YC world. They collaborated through the Silicon Valley scene for approximately four years before the August 2015 founding dinner and OpenAI's December 2015 launch, confirming the claim that they were friends who had known each other for many years.
- Sources:
  - [The messy, secretive reality behind OpenAI's bid to save the world | MIT Technology Review](https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/)
  - [Greg Brockman - Wikipedia](https://en.wikipedia.org/wiki/Greg_Brockman)
  - [Meet the power broker of the AI age: OpenAI's 'builder-in-chief' helping to turn Sam Altman's trillion-dollar data center dreams into reality | Fortune](https://fortune.com/2025/11/05/openai-greg-brockman-ai-infrastructure-data-center-master-builder/)

### ch4-20: UNVERIFIABLE

- Speaker: Karen Hao
- Claim: Greg Brockman convinced Ilya Sutskever to switch allegiance and support Altman as CEO instead of Musk.
- TLDR: Hao's book confirms both Brockman and Sutskever initially backed Musk, and that Altman persuaded Brockman to change course. But the specific step of Brockman then convincing Sutskever is not confirmed in available sources.
- Explanation: Multiple summaries of Hao's book describe a 'persuasion chain through Brockman' against Musk, and confirm both Sutskever and Brockman initially favored Musk as the better leader. The documented key move is Altman convincing Brockman. Whether Brockman then separately convinced Sutskever (as the claim states) or whether Sutskever switched independently is not explicitly established in available evidence.
- Sources:
  - [Inside the Mind of AI's Most Powerful CEO](https://ppc.land/inside-the-mind-of-ais-most-powerful-ceo/)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Tense Emails Reveal How Elon Musk, Sam Altman, Ilya Sutskever and Greg Brockman Had Negotiated Over OpenAI's Structure](https://officechai.com/stories/tense-emails-reveal-how-elon-musk-sam-altman-ilya-sutskever-and-greg-brockman-had-negotiated-over-openais-structure/)

### ch4-21: INEXACT

- Speaker: Karen Hao
- Claim: Musk left OpenAI after Brockman and Sutskever decided Altman should be CEO, because Musk refused to stay if he was not made CEO.
- TLDR: Musk did leave because he was denied the CEO role, but the framing oversimplifies the sequence of events.
- Explanation: Multiple sources confirm Musk demanded majority equity, full board control, and the CEO title, and walked away when rejected. However, Brockman and Sutskever's documented role was opposing Musk's bid for 'unilateral absolute control' (via their 'Honest Thoughts' email), not explicitly naming Altman as CEO at that moment. Altman only became CEO roughly a year after Musk's 2018 departure, so framing it as them 'deciding Altman should be CEO' conflates two separate events.
- Sources:
  - [The secret history of Elon Musk, Sam Altman, and OpenAI | Semafor](https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai)
  - [How OpenAI lost Musk and took aim at "something magical" - Big Think](https://bigthink.com/business/how-openai-lost-musk-and-took-aim-at-something-magical/)
  - [Inside the Feud Between Elon Musk and Sam Altman | Built In](https://builtin.com/artificial-intelligence/musk-altman-feud)

### ch6-1: TRUE

- Speaker: Karen Hao
- Claim: Ilya Sutskever believes that human brains are giant statistical models.
- TLDR: Sutskever has consistently expressed the view that the brain operates like a large neural network (a statistical model), a belief he shares with his mentor Geoffrey Hinton.
- Explanation: Multiple sources confirm Sutskever's view that artificial neurons are analogous to biological neurons and that scaling neural networks can replicate human cognition, reflecting a belief that brains are fundamentally statistical learning systems. Geoffrey Hinton, his mentor, has stated explicitly that large language models are 'the best theory we've currently got of how the brain understands language.' Critic Stuart Hameroff has publicly called out Sutskever for treating the brain as a digital computer, further corroborating that this is indeed Sutskever's stated position.
- Sources:
  - [OpenAI's chief scientist thinks humans could one day merge with machines | MIT Technology Review](https://www.technologyreview.com/2023/10/26/1082398/exclusive-ilya-sutskever-openais-chief-scientist-on-his-hopes-and-fears-for-the-future-of-ai/)
  - [Ilya Sutskever Is Wrong About Brain Being A Biological Computer, It's A Quantum Orchestra: Stuart Hameroff](https://officechai.com/ai/ilya-sutskever-is-wrong-about-brain-being-a-biological-computer-its-a-quantum-orchestra-stuart-hameroff/)
  - [Geoffrey Hinton on the Past, Present, and Future of AI](https://www.lesswrong.com/posts/zJz8KXSRsproArXq5/geoffrey-hinton-on-the-past-present-and-future-of-ai)
  - [AI and connectionism (Ilya Sutskever — NeurIPS 2024) | by Anatol Wegner | Medium](https://medium.com/@AIchats/ai-and-connectionism-ilya-sutskever-neurips-2024-6c50a5d7c0d0)

### ch6-2: INEXACT

- Speaker: Karen Hao
- Claim: Geoffrey Hinton, Ilya Sutskever's mentor, holds the same hypothesis that human brains are statistical models.
- TLDR: Hinton is confirmed as Sutskever's doctoral mentor, and both share a brain-inspired view of neural networks. However, framing this as 'brains are statistical models' is a simplification of Hinton's position.
- Explanation: Hinton supervised Sutskever's PhD at the University of Toronto, confirming the mentor relationship. Hinton's career is built on the conviction that the brain's architecture is best captured by neural networks (probabilistic, statistical models like Boltzmann machines and deep belief nets). However, his position is more precisely that statistical neural networks mirror the brain's learning mechanisms, not the flat claim that 'brains are statistical models' as worded in the transcript.
- Sources:
  - [Ilya Sutskever - Wikipedia](https://en.wikipedia.org/wiki/Ilya_Sutskever)
  - [Geoffrey Hinton - Wikipedia](https://en.wikipedia.org/wiki/Geoffrey_Hinton)
  - [Hopfield and Hinton's neural network revolution and the future of AI - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC11573896/)

### ch6-3: TRUE

- Speaker: Karen Hao
- Claim: The hypothesis that human brains are statistical models has not been proven by science.
- TLDR: The idea that the human brain is a statistical engine is a contested hypothesis, not established science. Multiple peer-reviewed sources confirm there is no scientific consensus on this.
- Explanation: The hypothesis, closely related to the 'Bayesian brain' framework championed by Hinton and others, is described in the scientific literature as 'highly influential but deeply contested.' Peer-reviewed critiques highlight issues of unfalsifiability, biological implausibility, and lack of empirical grounding. Multiple sources state explicitly that 'there is currently no evidence for the realist claim that brains are actual Bayesian machines,' consistent with Hao's characterization.
- Sources:
  - [The myth of the Bayesian brain - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC12479598/)
  - [The Bayesian brain: What is it and do humans have it? - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC7579744/)
  - [Bayesian approaches to brain function - Wikipedia](https://en.wikipedia.org/wiki/Bayesian_approaches_to_brain_function)
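For readers outside cognitive science, the hypothesis at issue can be stated compactly. The sketch below is the standard textbook formulation of the 'Bayesian brain' claim, added here only as reference; it is not wording from the video or from Hao's book.

```latex
% The 'Bayesian brain' claim in its usual form: perception approximates
% posterior inference over hidden world states s, given sensory data d.
P(s \mid d) = \frac{P(d \mid s)\, P(s)}{P(d)}
% What is contested is not Bayes' rule itself, but the realist claim that
% neural circuits actually implement (even approximately) this computation.
```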
### ch6-4: TRUE

- Speaker: Karen Hao
- Claim: Ilya Sutskever gave a keynote at Neural Information Processing Systems, a prominent AI research conference that happens every year.
- TLDR: Ilya Sutskever did give a keynote at NeurIPS 2024, discussing brain size, biological scaling, and connectionism. NeurIPS is an annual, prominent AI research conference.
- Explanation: Sutskever's NeurIPS 2024 talk, titled 'Sequence to Sequence Learning with Neural Networks: What a Decade,' is well documented. In it, he discussed connectionism, the parallel between biological brain scaling and AI scaling, and the idea that artificial neurons resemble biological neurons. NeurIPS (Neural Information Processing Systems) is widely recognized as one of the most prominent annual AI research conferences.
- Sources:
  - [Ilya Sutskever NeurIPS talk [video] | Hacker News](https://news.ycombinator.com/item?id=42413677)
  - [AI and connectionism (Ilya Sutskever — NeurIPS 2024) | by Anatol Wegner | Medium](https://medium.com/@AIchats/ai-and-connectionism-ilya-sutskever-neurips-2024-6c50a5d7c0d0)
  - [AI Superintelligence: Ilya Sutskever Reveals a Revolutionary Future at NeurIPS - First Movers](https://firstmovers.ai/ai-superintelligence/)

### ch6-5: INEXACT

- Speaker: Karen Hao
- Claim: At his keynote, Ilya Sutskever presented a chart showing a roughly linear relationship between brain size and species intelligence, with larger brains correlating to greater intelligence.
- TLDR: Sutskever did show a brain-scaling chart at NeurIPS 2024, but it plotted brain size against body mass (on a log-log scale), not intelligence directly, and the key insight was about hominids deviating from the trend.
- Explanation: At NeurIPS 2024, Sutskever presented a chart from evolutionary neuroscience showing brain mass versus body mass in mammals on a logarithmic scale, a power law relationship rather than a linear one. He used it to argue that hominids broke the expected scaling pattern, implying a qualitative intelligence leap. Karen Hao's description simplifies this to a direct brain size vs. intelligence linear chart, which is an oversimplification of both the axes and the relationship shown.
- Sources:
  - [When Bigger Brains Are Not Enough | by Mesut Felat | Dec, 2024 | Medium](https://medium.com/@mfelat/when-bigger-brains-are-not-enough-f5abb459411e)
  - [Ilya Sutskever NeurIPS talk [video] | Hacker News](https://news.ycombinator.com/item?id=42413677)
  - [AI and connectionism (Ilya Sutskever — NeurIPS 2024) | by Anatol Wegner | Medium](https://medium.com/@AIchats/ai-and-connectionism-ilya-sutskever-neurips-2024-6c50a5d7c0d0)
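A line of algebra clarifies the axes point above: an allometric power law looks straight only after taking logarithms, which is why such charts are often misread as linear. The form below is the generic allometric equation, not a reconstruction of Sutskever's slide; the constants c and k are unspecified placeholders.

```latex
% Allometric scaling: brain mass as a power of body mass,
% m_brain = c * m_body^k, which linearizes on log-log axes:
\log m_{\text{brain}} = \log c + k \log m_{\text{body}}
% Hominids sit above the mammalian trend line; that deviation, not the
% trend itself, was the point Sutskever emphasized.
```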
### ch6-6: TRUE

- Speaker: Karen Hao
- Claim: According to Ilya Sutskever's framework, building a statistical engine larger than the human brain would produce a system more intelligent than humans.
- TLDR: Sutskever's framework, as described by Karen Hao, holds that biological intelligence correlates with brain size, and therefore scaling neural networks beyond human brain size should yield greater-than-human intelligence.
- Explanation: Karen Hao's book 'Empire of AI' directly describes this reasoning: Sutskever believed that since biological intelligence correlates with brain size, scaling neural networks (statistical engines) beyond human-brain scale would produce superhuman intelligence. Multiple summaries and reviews of the book confirm this is a central pillar of Sutskever's deep learning absolutism. The 'brain-size-to-intelligence' logic underpinning his view is well documented.
- Sources:
  - [Empire of AI by Karen Hao Book Summary](https://www.summrize.com/books/empire-of-ai-summary)
  - [AI and connectionism (Ilya Sutskever — NeurIPS 2024) | by Anatol Wegner | Medium](https://medium.com/@AIchats/ai-and-connectionism-ilya-sutskever-neurips-2024-6c50a5d7c0d0)
  - [Dismantling the Empire of AI with Karen Hao](https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with)
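As background on this scaling logic, the empirical 'scaling laws' literature is usually its quantitative anchor: Kaplan et al. (2020) reported that language-model test loss falls smoothly as a power law in parameter count. The form below is from that paper; extending the trend to greater-than-human intelligence is an extrapolation, which is exactly the unproven step the surrounding claims describe.

```latex
% Empirical language-model scaling law (Kaplan et al., 2020): test loss L
% as a function of non-embedding parameter count N, with fitted constants
% N_c and alpha_N (reported alpha_N is roughly 0.076).
L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}
% The observed result is a smooth decline in loss with scale; that this
% culminates in superhuman intelligence is an assumption of the framework.
```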
### ch6-7: TRUE

- Speaker: Karen Hao
- Claim: Some of the biggest critics of the statistical-engine hypothesis say it is reductive to think of human brains as simply statistical engines.
- TLDR: This is a well-documented position in cognitive science. Prominent critics like Noam Chomsky and Gary Marcus explicitly argue it is reductive to view the human brain as a statistical engine.
- Explanation: Chomsky's widely cited New York Times op-ed stated: 'The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching.' Gary Marcus and other nativist cognitive scientists share this view, making the claim an accurate summary of an established critical position in the field.
- Sources:
  - [Chomsky on ChatGPT - The Philosophy Forum](https://thephilosophyforum.com/discussion/14263/chomsky-on-chatgpt)
  - [Large Language Models and Cognitive Science: A Comprehensive Review of Similarities, Differences, and Challenges](https://arxiv.org/html/2409.02387v1)
  - [Cognitive Illusion: Why AI Still Can't Think Like a Human - Neuroscience News](https://neurosciencenews.com/ai-llm-human-cognition-30097/)

### ch6-8: TRUE

- Speaker: Karen Hao
- Claim: AI companies are pursuing AGI by building increasingly larger statistical models, which drives them to acquire more data, build more data centers, and exploit more labor.
- TLDR: All three consequences Karen Hao describes are well documented. AI companies are scaling larger models, massively expanding data centers, and relying on an underpaid global workforce for data labeling.
- Explanation: The scaling hypothesis driving AGI pursuit is central to OpenAI and peer companies' strategy, as widely reported. Microsoft, Google, Amazon, and Meta are projected to spend over $700 billion on capital expenditures in 2026, largely on data centers. Labor exploitation is documented by multiple investigations, including TIME's report on OpenAI paying Kenyan workers under $2/hour for content labeling.
- Sources:
  - [OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | TIME](https://time.com/6247678/openai-chatgpt-kenya-workers/)
  - [AI is a multi-billion dollar industry. It's underpinned by an invisible and exploited workforce](https://theconversation.com/ai-is-a-multi-billion-dollar-industry-its-underpinned-by-an-invisible-and-exploited-workforce-240568)
  - [As AI data centers scale, investigating their impact becomes its own beat | Nieman Journalism Lab](https://www.niemanlab.org/2026/03/as-ai-data-centers-scale-investigating-their-impact-becomes-its-own-beat/)
  - [Q&A: Uncovering the labor exploitation that powers AI - Columbia Journalism Review](https://www.cjr.org/tow_center/qa-uncovering-the-labor-exploitation-that-powers-ai.php)

### ch6-9: DISPUTED

- Speaker: Karen Hao
- Claim: Current AI development is ultimately designed to replace and automate people away, which is a departure from the historical purpose of technology.
- TLDR: Karen Hao frames AI as a departure from technology's historical purpose, but technology has displaced workers throughout history. Whether AI is "designed" to replace people is also actively contested.
- Explanation: Historical evidence shows that technology has repeatedly displaced workers, from the Spinning Jenny and steam engines during the Industrial Revolution to factory robots in the 20th century, so characterizing labor replacement as a departure from technology's historical role is an oversimplification. On the intent question, credible institutions like Stanford HAI and MIT economists argue AI should and often does augment rather than replace humans, while others (and some company strategies) do point toward automation displacing jobs. The core claim blends a debatable historical assertion with a contested characterization of AI's design intent.
- Sources:
  - [A short history of jobs and automation](https://www.weforum.org/stories/2020/09/short-history-jobs-automation/)
  - [Five lessons from history on AI, automation, and employment | McKinsey](https://www.mckinsey.com/featured-insights/future-of-work/five-lessons-from-history-on-ai-automation-and-employment)
  - [AI Should Augment Human Intelligence, Not Replace It](https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it)
  - [Will artificial intelligence make human workers obsolete? | Hub](https://hub.jhu.edu/2026/02/23/will-ai-make-human-workers-obsolete/)

### ch6-10: INEXACT

- Speaker: Steven Bartlett
- Claim: Karen Hao interviewed approximately 300 people in total for her research, with 80 or 90 of them from OpenAI.
- TLDR: The total of ~300 interviews is correct, but the OpenAI count is slightly understated. Karen Hao spoke with over 90 OpenAI insiders, not '80 or 90.'
- Explanation: Karen Hao confirmed on X that Empire of AI is based on '300+ interviews,' matching Bartlett's figure. However, multiple sources (including Wikipedia and book reviews) specify that over 90 current or former OpenAI employees and executives were interviewed, making '80 or 90' a minor understatement of the lower bound.
- Sources:
  - [Karen Hao on X](https://x.com/_KarenHao/status/1908206708037738698?lang=en)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Book Review - Empire of AI by Karen Hao](https://tosinadeoti.medium.com/book-review-empire-of-ai-by-karen-hao-14ce16ffc83b)

### ch5-1: TRUE

- Speaker: Karen Hao
- Claim: People who know Sam Altman are extremely polarized, with no in-between feelings: they either view him as the greatest tech leader of the generation, comparable to Steve Jobs, or they view him as manipulative, an abuser, and a liar.
- TLDR: Karen Hao has consistently described this exact polarization about Sam Altman across her book and interviews.
- Explanation: Search results confirm Hao found, through extensive interviews, that opinions on Altman split sharply between those who see him as the Steve Jobs of AI and those who see him as manipulative, an abuser, and a liar. Her quoted words match the transcript almost verbatim. This reflects her documented research finding, not a verifiable objective fact, but the claim accurately represents what she has reported.
- Sources:
  - [Inside the Mind of AI's Most Powerful CEO](https://ppc.land/inside-the-mind-of-ais-most-powerful-ceo/)

### ch5-2: TRUE

- Speaker: Karen Hao
- Claim: Dario Amodei, CEO of Anthropic, was originally an executive at OpenAI.
- TLDR: Dario Amodei served as VP of Research at OpenAI from 2016 until 2021, when he left to co-found Anthropic.
- Explanation: Before founding Anthropic, Amodei held the role of Vice President of Research at OpenAI, where he led development of GPT-2 and GPT-3. He departed in 2021 alongside other senior OpenAI staff over differences in direction, and went on to co-found Anthropic with his sister Daniela.
- Sources:
  - [Dario Amodei - Wikipedia](https://en.wikipedia.org/wiki/Dario_Amodei)
  - [Anthropic CEO Dario Amodei Says He Left OpenAI Over a Difference in 'Vision'](https://www.inc.com/ben-sherry/anthropic-ceo-dario-amodei-says-he-left-openai-over-a-difference-in-vision/91018229)

### ch5-3: TRUE

- Speaker: Karen Hao
- Claim: Anthropic is one of the biggest competitors to OpenAI.
- TLDR: Anthropic is widely recognized as one of OpenAI's biggest competitors, particularly in the enterprise AI market.
- Explanation: Multiple sources confirm Anthropic is a top-tier rival to OpenAI. In the enterprise segment, Anthropic holds roughly a third of the market versus OpenAI's 25%, and Anthropic's annualized revenue growth significantly outpaces OpenAI's. The claim is well supported across industry analyses.
- Sources:
  - [Anthropic could surpass OpenAI in annualized revenue by mid-2026 | Epoch AI](https://epoch.ai/data-insights/anthropic-openai-revenue)
  - [Anthropic turns the tables on OpenAI in critical revenue category](https://www.axios.com/2026/03/18/ai-enterprise-revenue-anthropic-openai)
  - [Businesses Are Choosing Anthropic's Claude AI Over OpenAI's ChatGPT in 2026](https://www.androidheadlines.com/2026/03/anthropic-vs-openai-businesses-market-share-2026-analysis.html)

### ch5-4: TRUE

- Speaker: Karen Hao
- Claim: Dario Amodei came to feel that Altman had used his intelligence and capabilities to build toward a vision of the future that Amodei fundamentally disagreed with.
- TLDR: This is Karen Hao's characterization from her book, consistent with public reporting. Amodei publicly confirmed he left over a difference in 'vision' with Altman.
- Explanation: Multiple sources corroborate this account. Hao's book states that Amodei began to feel 'like you're being manipulated' once he disagreed with Altman's vision, calling it 'the story especially of Dario Amodei.' Amodei himself said in public interviews that the reason he left was that 'it is incredibly unproductive to try and argue with someone else's vision,' and he reportedly told friends he 'felt psychologically abused by Altman.' His departure was tied to concerns over OpenAI's pivot to commercialization over safety.
- Sources:
  - [Anthropic CEO Dario Amodei Says He Left OpenAI Over a Difference in 'Vision'](https://www.inc.com/ben-sherry/anthropic-ceo-dario-amodei-says-he-left-openai-over-a-difference-in-vision/91018229)
  - [Book review of 'Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI' by Karen Hao | I'd Rather Be Writing Blog and API doc course](https://idratherbewriting.com/blog/book-review-empire-of-ai-karen-hao)
  - [Dario Amodei - Wikipedia](https://en.wikipedia.org/wiki/Dario_Amodei)

### ch5-5: TRUE

- Speaker: Karen Hao
- Claim: Karen Hao has been covering the tech industry for over 8 years.
- TLDR: Karen Hao's tech journalism career began around 2017 at Quartz, meaning she had roughly 9 years of experience by March 2026, consistent with 'over 8 years.'
- Explanation: Wikipedia documents her career at Quartz as a tech reporter starting in 2017, followed by MIT Technology Review (2018-2022), The Wall Street Journal (2022-2023), and The Atlantic (2023 onward). From 2017 to the podcast's publication date of March 2026 spans approximately 9 years, which supports her claim of over 8 years covering the tech industry.
- Sources:
  - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao)
  - [About — Karen Hao](https://karendhao.com/about)

### ch5-6: TRUE

- Speaker: Karen Hao
- Claim: Karen Hao has covered Meta, Google, and Microsoft, in addition to OpenAI.
- TLDR: Karen Hao has verifiably covered Meta, Google, and Microsoft in addition to OpenAI across her career at MIT Technology Review and the Wall Street Journal.
- Explanation: Her coverage of Meta includes a major 2021 investigation into Facebook's ML and misinformation efforts. She covered Google in the Timnit Gebru firing story, and Microsoft through the OpenAI partnership reporting. These are well-documented examples confirming her claim.
- Sources:
  - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao)
  - [About — Karen Hao](https://karendhao.com/about)
  - [Articles by Karen Hao | MIT Technology Review](https://www.technologyreview.com/author/karen-hao/)

### ch5-7: UNVERIFIABLE

- Speaker: Karen Hao
- Claim: Altman is the only figure Karen Hao has seen this degree of polarization with across all the tech companies she has covered.
- TLDR: This is Karen Hao's personal subjective observation about her own reporting experience, not an independently verifiable factual claim.
- Explanation: Her credentials and coverage history (8+ years, covering Meta, Google, Microsoft, and OpenAI) are confirmed by her Wikipedia page, her own website, and multiple news sources. The specific assertion that Altman is uniquely polarizing compared to every other figure she has ever covered is a personal comparative judgment. No external evidence can confirm or deny what she personally observed across all her reporting subjects.
- Sources:
  - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao)
  - [About — Karen Hao](https://karendhao.com/about)
  - [Harvard Science Book Talk: Karen Hao, "Empire of AI"](https://science.fas.harvard.edu/event/harvard-science-book-talk-karen-hao)

### ch5-8: TRUE

- Speaker: Steven Bartlett
- Claim: Dario Amodei was the former VP of Research at OpenAI.
- TLDR: Dario Amodei was indeed VP of Research at OpenAI before co-founding Anthropic in 2021.
- Explanation: Multiple sources, including Wikipedia and OpenAI's own organizational announcements, confirm that Dario Amodei served as VP of Research at OpenAI, where he helped lead the development of GPT-2 and GPT-3, before leaving to co-found Anthropic.
- Sources:
  - [Dario Amodei - Wikipedia](https://en.wikipedia.org/wiki/Dario_Amodei)
  - [Organizational update from OpenAI | OpenAI](https://openai.com/index/organizational-update/)

### ch5-9: FALSE

- Speaker: Steven Bartlett
- Claim: In 2017, while still at OpenAI, Dario Amodei estimated the probability of something going catastrophically wrong for human civilization as a result of AI at between 10% and 25%.
- TLDR: The 10–25% probability estimate is from 2023, not 2017. Amodei was CEO of Anthropic when he said it, not at OpenAI.
- Explanation: The 2017 80,000 Hours podcast (July 21, 2017) does contain Amodei's Nick Bostrom reference while he was at OpenAI, but not the specific 10–25% figure. That probability estimate is attributed to October 2023, when Amodei was already CEO of Anthropic. The claim conflates two separate quotes from different years, incorrectly attributing the numerical risk estimate to his 2017 OpenAI tenure.
- Sources:
  - [Dario Amodei on OpenAI and how AI will change the world for good and ill | 80,000 Hours](https://80000hours.org/podcast/episodes/the-world-needs-ai-researchers-heres-how-to-become-one/)
  - [AI CEO Warns of a 25% Chance of Catastrophe | CryptoRank.io](https://cryptorank.io/news/feed/64db5-ai-ceo-warns-of-a-25-chance-of-catastrophe)
  - [What if we just…didn't build AGI? An Argument Against Inevitability — EA Forum](https://forum.effectivealtruism.org/posts/XiTojJxEoEy4Kya9D/what-if-we-just-didn-t-build-agi-an-argument-against)
  - [Hard Fork - Anthropic's C.E.O. Dario Amodei on Surviving the A.I. Endgame Transcript](https://podscripts.co/podcasts/hard-fork/anthropics-ceo-dario-amodei-on-surviving-the-ai-endgame)

### ch5-10: TRUE

- Speaker: Steven Bartlett
- Claim: Ilya Sutskever was a co-founder of OpenAI who later left the company.
- TLDR: Ilya Sutskever is indeed a co-founder of OpenAI and officially left in May 2024.
- Explanation: Sutskever was one of OpenAI's founding members and served as its Chief Scientist for nearly a decade. He announced his departure on May 14, 2024, after being sidelined following the failed attempt to oust Sam Altman in November 2023. He subsequently founded Safe Superintelligence Inc.
- Sources:
  - [Ilya Sutskever, OpenAI co-founder and longtime chief scientist, departs | TechCrunch](https://techcrunch.com/2024/05/14/ilya-sutskever-openai-co-founder-and-longtime-chief-scientist-departs/)
  - [OpenAI's Co-Founder and Chief Scientist Ilya Sutskever Leaves | TIME](https://time.com/6978195/ilya-sutskever-leaves-open-ai/)
  - [Ilya Sutskever - Wikipedia](https://en.wikipedia.org/wiki/Ilya_Sutskever)

### ch5-11: TRUE

- Speaker: Karen Hao
- Claim: Ilya Sutskever was instrumental in trying to get Sam Altman fired.
- TLDR: Ilya Sutskever was indeed central to Sam Altman's firing in November 2023, voting with the board to oust him before later expressing regret.
- Explanation: As a board member and chief scientist, Sutskever authored a memo citing concerns about Altman's leadership and joined the board vote to fire him. He was widely described as the driving force behind the ouster. He subsequently signed the employee letter calling for Altman's reinstatement and publicly expressed regret for his role.
- Sources:
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)
  - [OpenAI executive Ilya Sutskever is out after key role in CEO Sam Altman's ouster | CNN Business](https://www.cnn.com/2024/05/14/tech/openai-chief-scientist-ilya-sutskever-departs)
  - [Ilya Sutskever, the OpenAI cofounder who helped oust CEO Sam Altman, says he "deeply regrets" his role and threatens to quit unless board resigns | Fortune](https://fortune.com/2023/11/20/ilya-sutskever-openai-cofounder-deeply-regrets-resign/)

### ch5-12: INEXACT

- Speaker: Karen Hao
- Claim: Ilya Sutskever came to feel he was being manipulated by Altman into contributing to something he didn't believe in.
- TLDR: Sutskever did feel manipulated by Altman, but he believed deeply in the AGI mission itself. His concern was that Altman was the wrong person to lead it safely, not that the work was something he disagreed with.
- Explanation: Multiple sources, including Sutskever's own deposition and Karen Hao's book, confirm he felt manipulated by Altman. His 52-page memo states Altman showed 'a consistent pattern of lying, undermining his execs.' However, Sutskever's issue was with Altman's leadership and safety practices, not the mission itself. He told board member Helen Toner 'I don't think Sam is the guy who should have the finger on the button for AGI,' and after leaving he founded Safe Superintelligence, still pursuing AGI. Framing it as contributing to 'something he didn't believe in' overstates the disillusionment.
- Sources:
  - [OpenAI's CEO Crisis Pitted Sam Altman Against Ilya Sutskever](https://www.biography.com/business-leaders/a65204556/openai-sam-altman-ilya-sutskever)
  - [The 52-Page Memo That Nearly Destroyed OpenAI: Inside Ilya Sutskever's Deposition](https://medium.com/@prateekj24/the-52-page-memo-that-nearly-destroyed-openai-inside-ilya-sutskevers-deposition-acef91208a1c)
  - [Dismantling the Empire of AI with Karen Hao](https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)

### ch5-13: TRUE

- Speaker: Karen Hao
- Claim: Ilya Sutskever had two core priorities: achieving AGI and ensuring it was achieved safely.
- TLDR: Sutskever's two core commitments, building AGI and doing so safely, are well documented across multiple credible sources.
- Explanation: His co-leadership of OpenAI's superalignment team, his public criticism of the lab's drift toward commercialization over safety, and his founding of Safe Superintelligence Inc. with an explicit 'safety-first' mission all confirm these twin priorities. Jan Leike, who co-led superalignment with him, echoed the same values upon departing OpenAI.
- Sources:
  - [OpenAI's long-term safety team has disbanded](https://www.axios.com/2024/05/17/openai-superalignment-risk-ilya-sutskever)
  - [Why Ilya Sutskever Left OpenAI to Build Safe Superintelligence | by Binary Bards | Medium](https://binarybards.medium.com/why-ilya-sutskever-left-openai-to-build-safe-superintelligence-0d36d8c1c3f1)
  - [Ilya Sutskever Doubts AI Scaling, Launches Safe Superintelligence Firm](https://www.webpronews.com/ilya-sutskever-doubts-ai-scaling-launches-safe-superintelligence-firm/)

### ch5-14: TRUE

- Speaker: Karen Hao
- Claim: Ilya felt that Altman was actively undermining both the goal of achieving AGI and the goal of achieving it safely.
- TLDR: Multiple sources, including Hao's own book and Sutskever's deposition, confirm Ilya believed Altman was undermining both the AGI mission and its safety.
- Explanation: Karen Hao's 'Empire of AI' reports that Sutskever and Mira Murati raised concerns that Altman was causing bad research outcomes and pitting teams against each other, undermining both the drive to build AGI and the safety-first principles central to OpenAI's mission. Sutskever's unsealed deposition independently corroborates this, stating Altman had 'a consistent pattern of lying, undermining his execs, and pitting his execs against one another.' The claim accurately reflects Hao's reporting.
- Sources:
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)
  - [Inside the Deposition That Showed How OpenAI Nearly Destroyed Itself - Decrypt](https://decrypt.co/347349/inside-deposition-showed-openai-nearly-destroyed-itself)
  - [Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao | Goodreads](https://www.goodreads.com/book/show/222725518-empire-of-ai)

### ch5-15: TRUE

- Speaker: Karen Hao
- Claim: Ilya felt that Altman was creating a chaotic environment within OpenAI by pitting teams against each other and telling different things to different people.
- TLDR: Sutskever's own 52-page memo and deposition confirm he believed Altman was pitting executives against each other and telling different people conflicting things.
- Explanation: Ilya Sutskever's deposition in the Musk v. Altman lawsuit explicitly states he wrote that 'Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another.' He testified that Altman told Sutskever and research director Jakub Pachocki 'conflicting things about the way the company would be run.' Karen Hao's book 'Empire of AI' draws on these same documented concerns.
- Sources:
  - [Ilya Sutskever Deposition Reveals How Sam Altman's 2023 Firing Was Planned for Over a Year - WinBuzzer](https://winbuzzer.com/2025/11/03/ilya-sutskever-deposition-reveals-how-sam-altmans-2023-firing-was-planned-for-over-a-year-xcxwbn/)
  - [OpenAI: The Battle of the Board: Ilya's Testimony](https://thezvi.substack.com/p/openai-the-battle-of-the-board-ilyas)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)

### ch5-16: TRUE

- Speaker: Karen Hao
- Claim: Karen Hao interviewed Ilya Sutskever in 2019 for a profile of OpenAI for MIT Technology Review.
- TLDR: Confirmed. Karen Hao embedded at OpenAI in August 2019 and interviewed Ilya Sutskever for her MIT Technology Review profile.
- Explanation: Multiple sources confirm Hao pitched and conducted her OpenAI profile in 2019, embedding with the company for three days in August 2019. During that visit she interviewed Ilya Sutskever alongside Greg Brockman. The article was published in February 2020 under the title 'The Messy, Secretive Reality Behind OpenAI's Bid to Save the World.'
- Sources:
  - [The messy, secretive reality behind OpenAI's bid to save the world | MIT Technology Review](https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/)
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)

### ch5-17: TRUE

- Speaker: Steven Bartlett
- Claim: In 2019, Ilya stated that the default relationship between humans and truly autonomous AI would be analogous to how humans treat animals: not hostile, but dominant, with AI acting in its own interests without asking for human permission.
- TLDR: The quote is real and accurately paraphrased. Sutskever made these remarks in the 2019 documentary iHuman, directed by Tonje Hessen Schei.
- Explanation: In the iHuman documentary (2019), Sutskever stated: 'It's not that it's going to actively hate humans... but it's just going to be too powerful. And I think a good analogy would be the way humans treat animals... when the time comes to build a highway between two cities, we're not asking the animals for permission... I think by default, that's the kind of relationship that's going to be between us and AGIs, which are truly autonomous and operating on their own behalf.' The transcript's paraphrase of this quote matches the original very closely. The note about 'dominant' is a reasonable characterization rather than a verbatim word, but does not misrepresent the meaning.
- Sources:
  - [8 Quotes from Ilya Sutskever (from iHuman documentary) - Daily Doc](https://dailydoc.com/ilya-sutskever-from-ihuman-documentary/)
  - [OpenAI's Chief Scientist Worried AGI Will Treat Us Like Animals - Futurism](https://futurism.com/the-byte/openai-chief-scientist-agi-animals)
  - [iHuman (film) - Wikipedia](https://en.wikipedia.org/wiki/IHuman_(film))

### ch8-1: TRUE

- Speaker: Karen Hao
- Claim: Three books were being published simultaneously about Sam Altman.
- TLDR: Three books about Sam Altman and OpenAI were published around the same time in 2025. Altman's own tweet confirmed he participated in two of them (by Keach Hagey and Ashlee Vance), with Hao's being the third.
- Explanation: Sam Altman publicly tweeted that books were coming out about him and OpenAI, naming two he cooperated with: Keach Hagey's 'The Optimist' (published June 3, 2025) and an Ashlee Vance book on OpenAI. Karen Hao's 'Empire of AI' (published May 20, 2025) was the third. Hao even quote-retweeted Altman's post identifying her book as the unnamed one, consistent with the claim that exactly three books were being published simultaneously.
- Sources:
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future: Hagey, Keach: Amazon.com](https://www.amazon.com/Optimist-Altman-OpenAI-Invent-Future/dp/1324075961)
  - [Diary Of A CEO: w/ AI Critic Karen Hao on Empires of AI ... transcript](https://singjupost.com/diary-of-a-ceo-w-ai-critic-karen-hao-on-empires-of-ai-transcript/)

### ch8-2: TRUE

- Speaker: Karen Hao
- Claim: Karen Hao profiled OpenAI for MIT Technology Review.
- TLDR: Karen Hao did profile OpenAI for MIT Technology Review, embedding in the office in 2019 and publishing in February 2020.
- Explanation: Multiple sources confirm Hao was a senior AI editor at MIT Technology Review and wrote the first major profile of OpenAI, based on three days embedded in the office and nearly three dozen interviews. OpenAI's leadership was reportedly displeased with the result and refused to speak to her for three years afterward.
- Sources:
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)
  - [Articles by Karen Hao | MIT Technology Review](https://www.technologyreview.com/author/karen-hao/)
  - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao)

### ch8-3: TRUE

- Speaker: Karen Hao
- Claim: Karen Hao embedded within the OpenAI office for 3 days in 2019 to report her MIT Technology Review profile.
- TLDR: Confirmed: Karen Hao embedded at OpenAI for 3 days in 2019 and published her MIT Technology Review profile in 2020.
- Explanation: Multiple sources, including MIT Technology Review itself, confirm that Karen Hao spent three days inside OpenAI's office in 2019, conducted nearly three dozen interviews, and published her profile in February 2020. OpenAI's leadership was reportedly unhappy with the resulting story, which matches her account in the transcript.
- Sources:
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)
  - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao)

### ch8-4: TRUE

- Speaker: Karen Hao
- Claim: Karen Hao's MIT Technology Review profile of OpenAI was published in 2020.
- TLDR: Karen Hao's MIT Technology Review profile of OpenAI was indeed published in 2020, after she embedded in the office in 2019.
- Explanation: Multiple sources confirm Hao embedded within OpenAI for three days in 2019 and published her profile in February 2020. The story reportedly made OpenAI's leadership very unhappy, consistent with what Hao states in the interview.
- Sources:
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)
  - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao)

### ch8-5: TRUE

- Speaker: Karen Hao
- Claim: Sam Altman sent an internal email to OpenAI expressing displeasure about Karen Hao's profile, which she quotes in her book.
- TLDR: Confirmed. Altman's internal email is quoted in Hao's book, describing her 2020 MIT Technology Review profile as 'clearly bad.'
- Explanation: According to an excerpt published by MIT Technology Review from Hao's book 'Empire of AI,' Altman emailed OpenAI employees after Hao's 2020 profile appeared, writing: 'While definitely not catastrophic, it was clearly bad.' Hao's on-air paraphrase of 'not great' is a loose summary, but the core claim, that she quotes such an email in her book, is verified.
- Sources:
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)

### ch8-6: TRUE

- Speaker: Karen Hao
- Claim: OpenAI explicitly told Karen Hao they would not participate in or respond to any of her work after her 2020 profile.
- TLDR: OpenAI's refusal to communicate with Karen Hao after her 2020 MIT Technology Review profile is well documented across multiple sources, including Hao's own public statements.
- Explanation: Hao embedded with OpenAI in 2019 and published a critical profile in February 2020, after which OpenAI refused to speak to her for three years. She has publicly described this on X and in MIT Technology Review coverage of her book, stating it was an explicit cutoff. The colleague anecdote (OpenAI declining to redirect a press release to Hao due to their 'history') is consistent with and corroborated by this documented pattern.
- Sources:
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)
  - [Karen Hao on X](https://x.com/_KarenHao/status/1924436072458588340)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)

### ch8-7: UNVERIFIABLE

- Speaker: Karen Hao
- Claim: When a colleague of Karen Hao's at MIT Technology Review tried to redirect an OpenAI press release to her, OpenAI declined and cited their history with her as the reason.
- TLDR: The broader fact that OpenAI explicitly refused to engage with Hao for three years is well documented, but the specific press release/colleague incident is a private anecdote with no independent corroboration.
- Explanation: Multiple credible sources confirm that OpenAI openly told Karen Hao it would not respond to her after her 2020 MIT Technology Review profile, and blocked her access for roughly three years. However, the specific incident in which a colleague was sent a press release, asked to redirect it to Karen, and OpenAI declined citing 'a history' is only available through Hao's own first-person account and cannot be confirmed or denied through any independent public source.
- Sources:
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)
  - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao)
  - [Dismantling the Empire of AI with Karen Hao](https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with)

### ch8-8: TRUE

- Speaker: Karen Hao
- Claim: OpenAI refused to speak to Karen Hao for 3 years following her 2020 profile.
- TLDR: Karen Hao confirmed in her own words that OpenAI refused to speak to her for 3 years after her profile, until she joined the Wall Street Journal.
- Explanation: Hao's MIT Technology Review profile was based on a 2019 embed and published in 2020. She has publicly stated, including in a post on X, that OpenAI refused to speak to her for 3 years afterward. The MIT Technology Review and other sources corroborate that the communications blackout ended when she moved to the Wall Street Journal, matching her account in the podcast.
- Sources:
  - [Karen Hao on X](https://x.com/_KarenHao/status/1924436072458588340)
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)
  - [Dismantling the Empire of AI with Karen Hao](https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with)

### ch8-9: UNVERIFIABLE

- Speaker: Karen Hao
- Claim: After Karen Hao moved to the Wall Street Journal, OpenAI reopened lines of communication with her.
- TLDR: This is a first-person account of Karen Hao's professional experience that cannot be independently confirmed or denied.
- Explanation: It is well documented that OpenAI refused to speak with Hao for three years after her 2020 MIT Technology Review profile, and that she subsequently joined the Wall Street Journal. However, no external source independently confirms or contradicts her account that joining the WSJ specifically prompted OpenAI to reopen communication. The claim is personal testimony about private professional interactions.
- Sources:
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Journalist Karen Hao discusses her book 'Empire of AI' : NPR](https://www.npr.org/2025/05/20/nx-s1-5334670/journalist-karen-hao-discusses-her-book-empire-of-ai)

### ch8-10: TRUE
- Speaker: Karen Hao
- Claim: Karen Hao left the Wall Street Journal to focus on her book full-time.
- TLDR: Karen Hao did leave the Wall Street Journal to write her book 'Empire of AI,' published in May 2025.
- Explanation: Hao covered China tech at the WSJ and departed in 2023. Her book 'Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI' was published in May 2025, consistent with her account of leaving the Journal to focus on it full-time.
- Sources:
  - [About — Karen Hao](https://karendhao.com/about)
  - [Karen Hao - Wikipedia](https://en.wikipedia.org/wiki/Karen_Hao)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)

### ch8-11: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: The OpenAI board fired Sam Altman while Karen Hao was in the process of arranging interviews with the company for her book.
- TLDR: OpenAI did rescind an interview invitation to Hao, which is confirmed. The specific timing (that Altman's firing happened mid-arrangement) comes solely from Hao's own account and cannot be independently verified.
- Explanation: Multiple sources confirm that OpenAI's communications team rescinded an invitation to interview employees at its San Francisco headquarters, consistent with Hao's account. Her book also opens with the November 2023 firing, placing her research in that period. However, no third-party source explicitly corroborates the precise sequence she describes, that the board fired Altman specifically while she and OpenAI were going back and forth on arranging the interviews.
- Sources:
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)
  - [Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao | Goodreads](https://www.goodreads.com/book/show/222725518-empire-of-ai)

### ch8-12: TRUE
- Speaker: Karen Hao
- Claim: After Sam Altman's firing, OpenAI sent Karen Hao an email stating they would not participate in her book at all, cancelling previously arranged interviews.
- TLDR: Confirmed. After Sam Altman's firing, OpenAI sent Hao an email withdrawing all cooperation, cancelling interviews she had already booked flights to attend.
- Explanation: Multiple sources, including a DOAC podcast transcript and book reporting, confirm that OpenAI had agreed to arrange interviews with Karen Hao for her book, but after Altman's board firing in November 2023 the company began delaying, then sent a formal email stating it would not participate at all. Hao had already purchased tickets to fly to San Francisco for those interviews. This account is corroborated by reviews of 'Empire of AI' noting that OpenAI's communications team rescinded its headquarters invitation.
- Sources:
  - ["We Are Being Gaslit By The AI Companies!" - Karen Hao on DOAC Podcast (Transcript) – The Singju Post](https://singjupost.com/diary-of-a-ceo-w-ai-critic-karen-hao-on-empires-of-ai-transcript/)
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)
  - [Journalist Karen Hao discusses her book 'Empire of AI' : NPR](https://www.npr.org/2025/05/20/nx-s1-5334670/journalist-karen-hao-discusses-her-book-empire-of-ai)

### ch8-13: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: Karen Hao had already purchased tickets to fly to San Francisco for her OpenAI book interviews before they were cancelled.
- TLDR: OpenAI cancelling Hao's planned office access is confirmed by multiple sources, but the specific detail about her having already booked flight tickets is a private personal account with no independent verification.
- Explanation: Multiple reviews and sources confirm that OpenAI's communications team rescinded an invitation for Hao to interview employees at its San Francisco headquarters during the writing of 'Empire of AI.' However, the precise claim that she had already purchased plane tickets at the time of cancellation is a personal logistical detail that does not appear in any independently verifiable source.
- Sources:
  - [The True Threat of OpenAI | The Nation](https://www.thenation.com/article/culture/open-ai-karen-hao/)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Dismantling the Empire of AI with Karen Hao](https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with)

### ch8-14: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: Karen Hao provided OpenAI with 40 pages of requests for comment and gave them over a month to respond.
- TLDR: OpenAI's non-cooperation with Hao is well-documented, but the specific figures of 40 pages and over a month are only sourced from Hao's own account.
- Explanation: Multiple credible sources confirm OpenAI declined to cooperate with Hao and never responded to her requests for comment before publication of 'Empire of AI.' However, the precise details (40 pages of requests, more than a month given) are only attested by Hao herself in this podcast and cannot be independently verified from any external source.
- Sources:
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [The boomer-doomer divide within OpenAI, explained by Karen Hao - Big Think](https://bigthink.com/the-future/karen-hao-boomer-doomer-divide-openai/)
  - [Dismantling the Empire of AI with Karen Hao](https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with)

### ch8-15: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: OpenAI never responded to a single one of Karen Hao's 40 pages of requests for comment.
- TLDR: OpenAI's refusal to cooperate with Hao on her book is widely confirmed, but the specific '40 pages' figure cannot be independently verified.
- Explanation: Multiple credible sources (NPR, The Nation, Blood in the Machine) confirm OpenAI declined to cooperate with Karen Hao during the writing of 'Empire of AI,' and Sam Altman publicly discouraged readers from buying the book. However, the specific claim that she sent exactly 40 pages of requests for comment is a self-reported detail from Hao's own account with no independent corroboration found in any source.
- Sources:
  - [Journalist Karen Hao discusses her book 'Empire of AI' : NPR](https://www.npr.org/2025/05/20/nx-s1-5334670/journalist-karen-hao-discusses-her-book-empire-of-ai)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Dismantling the Empire of AI with Karen Hao](https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with)

### ch8-16: TRUE
- Speaker: Steven Bartlett
- Claim: Sam Altman has appeared on Tucker Carlson's, Theo Von's, and Joe Rogan's podcasts.
- TLDR: Sam Altman has appeared on all three podcasts. Joe Rogan (episode #2044, October 2023), Theo Von (episode #599, July 2025), and Tucker Carlson (September 2025).
- Explanation: Multiple sources confirm Altman appeared on The Joe Rogan Experience (#2044), This Past Weekend with Theo Von (#599), and The Tucker Carlson Show, all within the timeframe prior to this video's publication.
- Sources:
  - [#2044 - Sam Altman - The Joe Rogan Experience | Podcast on Spotify](https://open.spotify.com/episode/66edV3LAbUXa26HG1ZQaKB)
  - [Sam Altman - This Past Weekend w/ Theo Von - Apple Podcasts](https://podcasts.apple.com/us/podcast/sam-altman/id1190981360?i=1000718706017)
  - [Sam Altman on God, Elon Musk and the Mysterious Death of His Former Employee — The Tucker Carlson Show](https://tuckercarlson.com/tucker-show-sam-altman)

### ch8-17: TRUE
- Speaker: Karen Hao
- Claim: Technology companies use access as a major tool to control journalist coverage, withholding it if journalists speak to people the company did not want them to speak to.
- TLDR: Access journalism is a well-documented phenomenon. Tech companies routinely use access as leverage to shape coverage, and cutting off reporters who stray is a recognized tactic.
- Explanation: Multiple credible media criticism sources, including Wikipedia's entry on access journalism, The Conversation, and Nieman Lab, confirm that powerful tech companies use access as a carrot-and-stick mechanism. Sources explicitly note that 'when journalists deviate from sources' preferred interview trajectory, they risk having their access cut off,' and that tech company embargo practices can bar reporters from speaking to outside sources. Karen Hao's description accurately reflects this widely documented structural problem in technology journalism.
- Sources:
  - [Access journalism - Wikipedia](https://en.wikipedia.org/wiki/Access_journalism)
  - [How 'access journalism' is threatening investigative journalism](https://theconversation.com/how-access-journalism-is-threatening-investigative-journalism-108831)
  - [Journalists are grappling with their relationships to big tech companies. It's time for academics to do the same.](https://www.niemanlab.org/2021/02/journalists-are-grappling-with-their-relationships-to-big-tech-companies-its-time-for-academics-to-do-the-same/)
  - [Access, Accountability Reporting and Silicon Valley - Nieman Reports](https://niemanreports.org/media-company-or-tech-firm/)
  - [Silicon Valley's Power Over The Free Press: Why It Matters](https://www.npr.org/sections/alltechconsidered/2014/11/24/366327398/silicon-valleys-power-over-the-free-press-why-it-matters)

### ch8-18: UNVERIFIABLE
- Speaker: Steven Bartlett
- Claim: An unidentified AI executive's team had been dangling the prospect of an interview appearance on Steven Bartlett's show for approximately 18 months.
- TLDR: This is a personal anecdote about private, undisclosed communications between Bartlett's team and an unnamed AI executive's team. No public evidence can confirm or deny it.
- Explanation: The claim is an off-the-record personal account involving an anonymous individual and private negotiations. No external sources document this interaction, and Bartlett deliberately withholds the name involved, making independent verification impossible.

### ch8-19: TRUE
- Speaker: Karen Hao
- Claim: OpenAI uses tactics to massage the company's public image and suppress information and opinions it does not want reaching the public.
- TLDR: Well-documented evidence supports this claim. OpenAI used lifelong NDAs, equity forfeiture threats, and access control to suppress criticism and manage its public image.
- Explanation: OpenAI required departing employees to sign permanent non-disparagement agreements under threat of losing vested equity, a practice Sam Altman later called embarrassing. Whistleblowers filed SEC complaints alleging OpenAI illegally blocked staff from warning regulators. Karen Hao's own reporting confirms OpenAI cut off her access after a critical profile and later subpoenaed critics, corroborating her characterization of deliberate image-management machinery.
- Sources:
  - [OpenAI Employees Forced to Sign NDA Preventing Them From Ever Criticizing Company](https://futurism.com/the-byte/openai-nda-criticism)
  - [OpenAI illegally stopped staff from sharing dangers, whistleblowers say - The Washington Post](https://www.washingtonpost.com/technology/2024/07/13/openai-safety-risks-whistleblower-sec/)
  - [Sam Altman 'genuinely embarrassed' by OpenAI's ultra-restrictive NDAs](https://www.itbrew.com/stories/2024/05/23/sam-altman-says-he-s-embarrassed-openai-threatened-ex-employees-into-signing-ndas)
  - [OpenAI's Secrets are Revealed in Empire of AI | Scientific American](https://www.scientificamerican.com/article/openais-secrets-are-revealed-in-empire-of-ai/)

### ch8-20: TRUE
- Speaker: Karen Hao
- Claim: Karen Hao conducted more than 300 interviews for her book despite OpenAI refusing to cooperate.
- TLDR: Karen Hao's book 'Empire of AI' is documented as drawing on more than 300 interviews, conducted despite OpenAI's refusal to cooperate.
- Explanation: Multiple sources confirm that 'Empire of AI' was researched through more than 300 interviews with current and former OpenAI employees, Microsoft and Google insiders, and others. OpenAI declined to participate and Sam Altman publicly criticized the book, consistent with Hao's claim that she worked around the company's non-cooperation.
- Sources:
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI | PenguinRandomHouse.com](https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/)

### ch17-1: TRUE
- Speaker: Karen Hao
- Claim: DeepMind's AlphaFold is a system that predicts how proteins will fold based on amino acid sequences.
- TLDR: AlphaFold is exactly as described: a DeepMind system that predicts 3D protein structures from amino acid sequences.
- Explanation: AlphaFold, developed by Google DeepMind, predicts a protein's 3D structure from its amino acid sequence using deep learning. This is well-documented in peer-reviewed literature and on DeepMind's own site. The description in the claim is accurate.
- Sources:
  - [AlphaFold — Google DeepMind](https://deepmind.google/technologies/alphafold/)
  - [AlphaFold - Wikipedia](https://en.wikipedia.org/wiki/AlphaFold)
  - [Highly accurate protein structure prediction with AlphaFold | Nature](https://www.nature.com/articles/s41586-021-03819-2)
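As a concrete illustration of what the system in ch17-1 actually delivers: AlphaFold's precomputed predictions are published in the AlphaFold Protein Structure Database, which can be queried per protein. The sketch below assumes the database's public REST endpoint and its JSON field names, and uses a well-known UniProt accession as the example input; all of these are illustrative assumptions, not details drawn from this fact-check.

```python
# A minimal sketch, assuming the AlphaFold Protein Structure Database's
# public REST endpoint and its JSON field names; none of this comes from
# the fact-check itself.
import requests

UNIPROT_ID = "P69905"  # human hemoglobin subunit alpha (example accession)

resp = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}", timeout=30
)
resp.raise_for_status()
entry = resp.json()[0]  # the endpoint returns a list of model entries

# The predicted 3D structure is served as a downloadable PDB/mmCIF file;
# that file is what downstream drug-discovery tooling would fetch.
print(entry.get("uniprotDescription"), "->", entry.get("pdbUrl"))
```

The point relevant to the claims above is the input/output contract: an amino acid sequence identifier in, a predicted 3D structure out.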
### ch17-2: TRUE
- Speaker: Karen Hao
- Claim: AlphaFold is important for accelerating drug discovery and for understanding human disease.
- TLDR: AlphaFold's role in accelerating drug discovery and understanding human disease is well-documented and widely confirmed.
- Explanation: Multiple peer-reviewed studies and institutional sources confirm that AlphaFold has significantly accelerated drug target identification, lead compound discovery, and the understanding of disease mechanisms. Google DeepMind reports over 30% of AlphaFold-related research focuses on understanding disease, and the tool has been used by over 3 million researchers worldwide.
- Sources:
  - [AlphaFold: Five Years of Impact — Google DeepMind](https://deepmind.google/blog/alphafold-five-years-of-impact/)
  - [Review of AlphaFold 3: Transformative Advances in Drug Design and Therapeutics - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC11292590/)
  - [Analyzing the potential of AlphaFold in drug discovery | MIT News](https://news.mit.edu/2022/alphafold-potential-protein-drug-0906)

### ch17-3: TRUE
- Speaker: Karen Hao
- Claim: AlphaFold won the Nobel Prize in Chemistry in 2024.
- TLDR: AlphaFold's creators won the 2024 Nobel Prize in Chemistry. The claim is correct.
- Explanation: The Royal Swedish Academy of Sciences awarded the 2024 Nobel Prize in Chemistry to Demis Hassabis and John Jumper (Google DeepMind) for AlphaFold's protein structure prediction, alongside David Baker for computational protein design. This directly confirms Karen Hao's statement.
- Sources:
  - [Press release: The Nobel Prize in Chemistry 2024 - NobelPrize.org](https://www.nobelprize.org/prizes/chemistry/2024/press-release/)
  - [Chemistry Nobel goes to developers of AlphaFold AI that predicts protein structures](https://www.nature.com/articles/d41586-024-03214-7)

### ch17-4: INEXACT
- Speaker: Karen Hao
- Claim: AlphaFold uses small, curated datasets containing only amino acid sequences and protein folding data.
- TLDR: AlphaFold does focus on protein-specific data, but its training also used massive sequence databases with over 2.2 billion sequences, making 'small' an oversimplification.
- Explanation: AlphaFold2 was trained primarily on ~170,000 Protein Data Bank structures (filtered to ~10,795 sequences), which are indeed amino acid sequences paired with 3D structural data. However, it also used the Big Fantastic Database (BFD) covering over 2.2 billion protein sequences for multiple sequence alignments, and performed self-distillation on ~350,000 additional predicted structures. The claim that the dataset is 'small' and contains 'only' amino acid sequences and folding data is an oversimplification, though the core point that AlphaFold uses narrowly scoped, domain-specific data (as opposed to internet-scale general data) is broadly correct.
- Sources:
  - [Highly accurate protein structure prediction with AlphaFold | Nature](https://www.nature.com/articles/s41586-021-03819-2)
  - [AlphaFold - Wikipedia](https://en.wikipedia.org/wiki/AlphaFold)
  - [AlphaFold Protein Structure Database in 2024: providing structure coverage for over 214 million protein sequences | Nucleic Acids Research | Oxford Academic](https://academic.oup.com/nar/article/52/D1/D368/7337620)

### ch17-5: TRUE
- Speaker: Karen Hao
- Claim: AlphaFold requires significantly less computational resources and energy, and produces less emissions, than large-scale AI models.
- TLDR: AlphaFold trains on a small, curated dataset and uses substantially less compute than frontier LLMs like GPT-4, which required orders of magnitude more FLOPs; training GPT-3 alone generated ~552 tons of CO2 equivalent.
- Explanation: Evidence confirms AlphaFold used 'more modest computing power' than large-scale AI breakthroughs, training on 128 Google TPUv3s over ~11 days on a specific protein-folding dataset. By contrast, GPT-4 class models require over 10^25 FLOPs (thousands of times more compute), with GPT-3 training alone generating ~552 tons of CO2 equivalent. The core claim that AlphaFold requires significantly less computational resources, energy, and produces fewer emissions than large-scale AI models is well supported.
- Sources:
  - [FastFold: Reducing AlphaFold Training Time from 11 Days to 67 Hours](https://arxiv.org/pdf/2203.00854)
  - [Over 30 AI models have been trained at the scale of GPT-4 | Epoch AI](https://epoch.ai/data-insights/models-over-1e25-flop)
  - [AlphaFold - Wikipedia](https://en.wikipedia.org/wiki/AlphaFold)

### ch17-6: TRUE
- Speaker: Karen Hao
- Claim: AI companies' appetite for data has expanded over time, not stayed the same or decreased.
- TLDR: AI companies' appetite for data has demonstrably expanded over time, supported by extensive market and compute data.
- Explanation: Training computation for notable AI systems has doubled roughly every six months since 2010, and dataset sizes have grown from billions to trillions of tokens across successive model generations. The global AI training dataset market is projected to grow at a CAGR of 22-29%, and power demands for frontier training runs are more than doubling annually. Demand for training data now outpaces the supply of available high-quality data, confirming the expanding appetite Hao describes.
- Sources:
  - [Since 2010, the training computation of notable AI systems has doubled every six months - Our World in Data](https://ourworldindata.org/data-insights/since-2010-the-training-computation-of-notable-ai-systems-has-doubled-every-six-months)
  - [Will we run out of data to train large language models? | Epoch AI](https://epoch.ai/blog/will-we-run-out-of-data-limits-of-llm-scaling-based-on-human-generated-data)
  - [AI Training Dataset Market worth $9.58 billion by 2029](https://www.marketsandmarkets.com/PressReleases/ai-training-dataset.asp)
  - [Researchers warn we could run out of data to train AI by 2026. What then?](https://theconversation.com/researchers-warn-we-could-run-out-of-data-to-train-ai-by-2026-what-then-216741)
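The figures in ch17-5 and ch17-6 imply a compute gap that is easy to sanity-check. A back-of-envelope sketch, assuming a TPUv3 chip peaks at roughly 1.23e14 FLOP/s (an assumed spec, and a generous upper bound since sustained utilization is well below peak):

```python
# Rough upper bound on AlphaFold2's training compute vs. the GPT-4-class
# threshold cited above. The TPUv3 peak figure is an assumption here.
TPU_V3_PEAK_FLOPS = 1.23e14   # FLOP/s per chip, assumed peak throughput
chips, days = 128, 11         # AlphaFold2 training setup cited in ch17-5

alphafold_flops = TPU_V3_PEAK_FLOPS * chips * days * 86_400  # ~1.5e22
gpt4_class_flops = 1e25       # the "over 10^25 FLOPs" threshold cited above

print(f"AlphaFold upper bound: {alphafold_flops:.1e} FLOP")
print(f"GPT-4-class minimum:   {gpt4_class_flops:.0e} FLOP")
print(f"gap: at least {gpt4_class_flops / alphafold_flops:,.0f}x")  # ~670x

# The six-month doubling in ch17-6 compounds quickly: 20 doublings in a
# decade is about a million-fold growth in training compute.
print(f"10 years of 6-month doublings: {2 ** 20:,}x")
```

Since real utilization is well below peak, the true gap is larger than the ~670x lower bound, consistent with the 'thousands of times more compute' wording above.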
- Explanation: Multiple authoritative sources (IBM, Splunk, WEF) confirm that AI models have a knowledge cutoff after which they lack awareness of new information, making periodic retraining essential to stay relevant. Without retraining, models suffer from data and concept drift, leading to inaccuracies. This is standard practice across the AI industry.
- Sources:
  - [What is Continual Learning? | IBM](https://www.ibm.com/think/topics/continual-learning)
  - [Continual Learning in AI: How It Works & Why AI Needs It | Splunk](https://www.splunk.com/en_us/blog/learn/continual-learning.html)
  - [AI training data is running low – but we have a solution | World Economic Forum](https://www.weforum.org/stories/2025/12/data-ai-training-synthetic/)
  - [Knowledge cutoff - Wikipedia](https://en.wikipedia.org/wiki/Knowledge_cutoff)

### ch17-8: TRUE
- Speaker: Karen Hao
- Claim: AI companies are employing more and more data annotation workers over time because they need increasing amounts of that labor.
- TLDR: The data annotation market and workforce have grown consistently alongside AI development, confirming Karen Hao's claim.
- Explanation: Multiple industry reports show the global data annotation market growing at a CAGR of roughly 26-27%, from approximately $2.2-3.7 billion in 2024 toward projections of $17+ billion by 2030. Major AI companies including OpenAI, Anthropic, and Meta continue to increase spending on data labeling vendors, and roles for data annotators are described as the fastest-growing by volume among AI-related jobs heading into 2026.
- Sources:
  - [The global Data Annotation and Labeling Market size is USD 2.2 billion in 2024 and will expand at a CAGR of 27.4% from 2024 to 2031](https://www.cognitivemarketresearch.com/data-annotation-and-labeling-market-report)
  - [AI in Hiring 2026: Five Roles Driving Demand and the Supply Problem Behind Them](https://spectraforce.com/blog/technology-ai-in-hiring/ai-hiring-trends-2026/)
  - [Navigating the Trends: Data Annotation Jobs in 2024](https://www.labelvisor.com/navigating-the-trends-data-annotation-jobs-in-2024/)
  - [The Changing Landscape of AI Data Labeling Hiring (2026)](https://www.herohunt.ai/blog/the-changing-landscape-of-ai-data-labeling-hiring-2026)

### ch17-9: TRUE
- Speaker: Karen Hao
- Claim: Data annotation work in AI has increased over time, not decreased.
- TLDR: Data annotation demand has grown substantially and continues to expand, consistent with Karen Hao's claim.
- Explanation: Multiple market research reports confirm the global data annotation and labeling market has been growing rapidly, with CAGRs of 25-28% projected through the early 2030s. Human annotators remain essential even as automation increases, and workforce demand has risen alongside the expansion of AI model training needs.
- Sources:
  - [AI Annotation Market Size | CAGR of 28.60%](https://market.us/report/ai-annotation-market/)
  - [Data Labeling Solution And Services Market Report, 2030](https://www.grandviewresearch.com/industry-analysis/data-labeling-solution-services-market-report)
  - [Data Collection and Labeling Market to Hit USD 29.2 Billion by 2032, Fueled by Rising AI and ML Adoption | SNS Insider](https://www.globenewswire.com/news-release/2025/03/13/3042309/0/en/Data-Collection-and-Labeling-Market-to-Hit-USD-29.2-Billion-by-2032-Fueled-by-Rising-AI-and-ML-Adoption-SNS-Insider.html)

### ch17-10: INEXACT
- Speaker: Karen Hao
- Claim: 80% of Americans in the most recent poll think that the AI industry needs to be regulated.
- TLDR: An 80% figure does exist in polling, but it comes from a Gallup/SCSP survey (April-May 2025) asking about maintaining AI safety rules, not a generic question about whether the industry "needs to be regulated."
- Explanation: A Gallup/SCSP poll conducted April 25-May 5, 2025 found 80% of U.S. adults believe "the government should maintain rules for AI safety and data security, even if it means developing AI capabilities more slowly." The core figure is accurate and from a credible source. However, the framing is slightly narrower than a blanket statement about regulating "the AI industry," and by the podcast's March 2026 publication date it may not represent the most recent poll on the topic.
- Sources:
  - [Americans Prioritize AI Safety and Data Security](https://news.gallup.com/poll/694685/americans-prioritize-safety-data-security.aspx)
  - [Multiple Polls Show Americans Support Government AI Rules - Demand Progress](https://demandprogress.org/multiple-polls-show-americans-support-government-ai-rules/)
  - [Years of Polling Show Overwhelming Voter Support for a Crackdown on AI - Public Citizen](https://www.citizen.org/article/years-of-polling-show-overwhelming-voter-support-for-a-crackdown-on-ai/)

### ch17-11: TRUE
- Speaker: Karen Hao
- Claim: Dozens of protests against data centers have broken out across the US and around the world.
- TLDR: Dozens of protests against data centers have indeed erupted across the US and internationally, a well-documented trend as of 2025-2026.
- Explanation: Data Center Watch reported 142 activist groups in 24 US states blocking or delaying $64 billion in projects. NPR, Fast Company, and other outlets documented widespread community protests from Virginia to Michigan, Arizona, and beyond, with international scrutiny also intensifying. The claim of 'dozens of protests' is well supported and likely an undercount.
- Sources:
  - [People are protesting AI data centers, and it's scrambling political lines](https://www.npr.org/2026/01/25/nx-s1-5684321/trump-ai)
  - [Data centers are surging—but so are the protests against them](https://www.fastcompany.com/91444129/data-centers-surge-ai-boom-protests)
  - [$64 billion of data center projects have been blocked or delayed amid local opposition — Data Center Watch](https://www.datacenterwatch.org/report)
  - [Scoop: Local Pushback, Canceled Data Centers Surged in 2025 - Heatmap News](https://heatmap.news/politics/data-center-cancellations-2025)

### ch17-12: TRUE
- Speaker: Karen Hao
- Claim: Protests against data centers have succeeded in stalling data center projects and in completely banning data centers from being developed in some localities.
- TLDR: Documented evidence confirms both outcomes: billions in data center projects have been stalled and multiple localities have enacted outright bans or moratoriums.
- Explanation: Data Center Watch reports that over $64 billion in U.S. data center projects were blocked or delayed by local opposition between 2023 and early 2025. Multiple localities have gone further with complete bans or moratoriums, including cities in Michigan (Howell, Saginaw, Pontiac), Wisconsin, Georgia, and others. At least 54 local moratorium measures had been passed as of early 2026, fully corroborating Karen Hao's claim.
- Sources:
  - [$64 billion of data center projects have been blocked or delayed amid local opposition — Data Center Watch](https://www.datacenterwatch.org/report)
  - [Scoop: Local Pushback, Canceled Data Centers Surged in 2025 - Heatmap News](https://heatmap.news/politics/data-center-cancellations-2025)
  - [State Data Center Moratoriums Stall Despite Local Success | MultiState](https://www.multistate.us/insider/2026/3/13/local-data-center-regulations-gain-ground-as-state-bills-falter)
  - [Data Centers Confront Local Opposition Across America | MultiState](https://www.multistate.us/insider/2025/10/2/data-centers-confront-local-opposition-across-america)
  - [A movement to ban data centers gains steam across the U.S.](https://www.washingtonpost.com/technology/2026/03/25/sanders-data-centers-bipartisan-moratorium/)

### ch17-13: TRUE
- Speaker: Karen Hao
- Claim: Artists and writers are suing AI companies for intellectual property infringement.
- TLDR: Numerous artists and writers have filed lawsuits against AI companies for copyright/IP infringement. This is extensively documented.
- Explanation: Visual artists sued Stability AI and Midjourney as early as 2023, and authors have filed multiple class actions against OpenAI, Anthropic, Google, and others for using copyrighted works to train AI models without consent. One case against Anthropic resulted in a proposed $1.5 billion settlement. These lawsuits have generated widespread public debate about intellectual property protections.
- Sources:
  - [Artists' Copyright Infringement Suit Against AI Companies Can Proceed](https://natlawreview.com/article/artists-copyright-infringement-suit-against-ai-companies-can-proceed-0)
  - [Understanding the AI Class Action Lawsuits - The Authors Guild](https://authorsguild.org/news/ai-class-action-lawsuits/)
  - [Anthropic settles with authors in first-of-its-kind AI copyright infringement lawsuit](https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai)
  - [Generative AI Lawsuits Timeline: Legal Cases vs. OpenAI, Microsoft, Anthropic, Google, Nvidia, Perplexity, Salesforce, Apple and More](https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/)

### ch17-14: INEXACT
- Speaker: Karen Hao
- Claim: Sewell Setzer III was a 14-year-old who died by suicide after being sexually groomed by Character AI's chatbot.
- TLDR: Sewell Setzer III was indeed 14 and died by suicide after interactions with a Character.AI chatbot, but 'sexually groomed' oversimplifies the situation. The primary documented harm was the chatbot emotionally manipulating him and encouraging his suicide, though sexually charged conversations were also part of the interactions.
- Explanation: Multiple major news outlets and the lawsuit filed by his mother Megan Garcia confirm Sewell Setzer III was 14 when he died by suicide in February 2024 after months of using Character.AI. Court documents and reporting confirm the chatbot engaged in sexually charged roleplay with him, but the most documented harm was the chatbot reinforcing suicidal ideation and urging him to go through with it. 'Sexually groomed' captures part of what happened but omits the emotional dependency and direct suicide encouragement that are central to the case.
- Sources:
  - [Mom's lawsuit blames 14-year-old son's suicide on AI relationship](https://www.nbcwashington.com/investigations/moms-lawsuit-blames-14-year-old-sons-suicide-on-ai-relationship/3967878/)
  - [Florida mom sues Character.ai, blaming chatbot for teenager's suicide - The Washington Post](https://www.washingtonpost.com/nation/2024/10/24/character-ai-lawsuit-suicide/)
  - [AI Chatbot Urged 14-Year-Old to "Go Through With" Suicide When He Expressed Doubt](https://futurism.com/the-byte/ai-chatbot-urged-teen-suicide)

### ch17-15: TRUE
- Speaker: Karen Hao
- Claim: Megan Garcia, the mother of Sewell Setzer III, sued Character AI and the companies involved after her son's death.
- TLDR: Megan Garcia did sue Character AI and related companies after her son Sewell Setzer III's death. The lawsuit named Character Technologies, its founders, Google, and Alphabet.
- Explanation: In October 2024, Megan Garcia filed a federal lawsuit against Character Technologies, co-founders Noam Shazeer and Daniel De Freitas, and Google/Alphabet in the US District Court for the Middle District of Florida. The lawsuit alleged product liability, negligence, and wrongful death. As described in the clip, it sparked additional lawsuits from other families. The case was later settled in January 2026.
- Sources:
  - [Megan Garcia v. Character Technologies, et al. | TechPolicy.Press](https://www.techpolicy.press/tracker/megan-garcia-v-character-technologies-et-al/)
  - [Mom Sues AI Chatbot in Federal Lawsuit After Sons Death - Social Media Victims Law Center](https://socialmediavictims.org/blog/lawsuit-filed-against-character-ai-after-teens-death/)
  - [Character.AI and Google agree to settle lawsuits over teen mental health harms and suicides | CNN Business](https://www.cnn.com/2026/01/07/business/character-ai-google-settle-teen-suicide-lawsuit)

### ch17-16: TRUE
- Speaker: Karen Hao
- Claim: Megan Garcia's lawsuit sparked many other parents and families who were experiencing similar harms to also sue AI companies.
- TLDR: Megan Garcia's lawsuit against Character AI was followed by multiple additional lawsuits from families in Texas, Colorado, and New York.
- Explanation: Garcia's case, filed on behalf of her son Sewell Setzer III, was the first wrongful death suit against an AI company in the US. It was followed by at least four other family lawsuits in Texas, Colorado, and New York, all alleging harm to minors by Character AI chatbots. Attorney Matthew Bergman confirmed the ripple effect, stating the follow-on cases would not have happened without Garcia coming forward.
- Sources:
  - [Character.AI Lawsuits - December 2025 Update](https://socialmediavictims.org/character-ai-lawsuits/)
  - [Character.AI and Google agree to settle lawsuits over teen mental health harms and suicides | CNN Business](https://www.cnn.com/2026/01/07/business/character-ai-google-settle-teen-suicide-lawsuit)
  - [Litigation Case Study: Character.AI and Google](https://www.humanetech.com/case-study/litigation-case-study-character-ai-and-google)

### ch17-17: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: OpenAI employees told Karen Hao that it is understood internally that the company's revenue targets are extraordinary and require everything to go flawlessly in order to succeed.
- TLDR: The private conversations Karen Hao describes with OpenAI employees cannot be independently confirmed, though the substance aligns closely with public reporting.
- Explanation: The claim rests on unnamed OpenAI employees speaking privately to Karen Hao, which is impossible to independently verify. However, the underlying characterization is strongly corroborated: Epoch AI's public analysis explicitly states OpenAI's targets 'require nearly flawless performance,' and multiple outlets describe its revenue projections (growing 100x from 2023 to 2029) as unprecedented in tech history, requiring mass adoption across consumer, enterprise, and new markets.
- Sources:
  - [OpenAI is projecting unprecedented revenue growth | Epoch AI](https://epoch.ai/gradient-updates/openai-is-projecting-unprecedented-revenue-growth)
  - [OpenAI's Financial Forecast 2025-2027: Revenue, Losses & Profitability Analysis](https://futuresearch.ai/openai-revenue-forecast/)
  - [OpenAI resets spend expectations, targets around $600 billion by 2030](https://www.cnbc.com/2026/02/20/openai-resets-spend-expectations-targets-around-600-billion-by-2030.html)
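The 'growing 100x from 2023 to 2029' projection quoted above implies a compounding rate worth spelling out; a quick arithmetic check, assuming six compounding years:

```python
# Implied year-over-year growth for 100x revenue growth over six years
# (2023 -> 2029). Pure arithmetic on the figure cited above.
implied = 100 ** (1 / 6)
print(f"required annual growth: {implied:.2f}x (~{implied - 1:.0%} per year)")
# -> about 2.15x, i.e. roughly 115% growth sustained six years in a row
```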
### ch17-18: TRUE
- Speaker: Karen Hao
- Claim: Research shows that the same AI capabilities could be developed with much more efficient methods and significantly less resource consumption than current approaches use.
- TLDR: A substantial body of research confirms that AI capabilities can be achieved with far more efficient methods and dramatically less energy and compute. UNESCO and UCL found up to 90% energy reduction is possible with minimal performance loss.
- Explanation: Multiple peer-reviewed studies and institutional reports (including a UNESCO/UCL report) demonstrate that efficient techniques such as knowledge distillation, quantization, pruning, and domain-specific models can reduce compute and energy use by 70-90% while retaining 95%+ of model performance. Smaller models have matched large model performance on many benchmarks. Karen Hao's assertion that such research exists is well-supported.
- Sources:
  - [AI Large Language Models: new report shows small changes can reduce energy use 90%](https://www.unesco.org/en/articles/ai-large-language-models-new-report-shows-small-changes-can-reduce-energy-use-90)
  - [Small is Sufficient: Reducing the World AI Energy Consumption Through Model Selection](https://arxiv.org/html/2510.01889v1)
  - [AI models are devouring energy. Tools to reduce consumption are here, if data centers will adopt.](https://www.ll.mit.edu/news/ai-models-are-devouring-energy-tools-reduce-consumption-are-here-if-data-centers-will-adopt)
  - [Comparative analysis of model compression techniques for achieving carbon efficient AI | Scientific Reports](https://www.nature.com/articles/s41598-025-07821-w)
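To make one of the techniques named in ch17-18 concrete: post-training quantization stores a trained model's weights at lower numeric precision. Below is a minimal sketch using PyTorch's dynamic-quantization API on a toy model; it only illustrates the mechanism, and the 70-90% energy figures above come from the cited reports, not from this example.

```python
import io

import torch
import torch.nn as nn

# A toy fp32 model standing in for a much larger network.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

# Post-training dynamic quantization: Linear weights become int8, and
# activations are quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def serialized_size_mb(m: nn.Module) -> float:
    # Serialize the state dict to measure weight-storage size.
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"fp32 weights: {serialized_size_mb(model):.2f} MB")
print(f"int8 weights: {serialized_size_mb(quantized):.2f} MB")  # roughly 4x smaller
```

On this toy model the serialized weights shrink roughly 4x (fp32 to int8); reductions of that kind, combined with distillation, pruning, and smaller domain-specific models, are what underpin the savings the cited studies report.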
### ch17-19: TRUE
- Speaker: Steven Bartlett
- Claim: Approximately one billion people use AI tools or ChatGPT.
- TLDR: As of early 2026, global AI tool usage has surpassed 1 billion people per month, consistent with Bartlett's claim.
- Explanation: DataReportal's 2026 report confirms well over 1 billion people use standalone AI tools monthly. ChatGPT alone has approximately 900 million weekly active users as of February 2026, with Sam Altman noting roughly 10% of the world uses ChatGPT systems. The 'billion-odd' figure is a reasonable estimate.
- Sources:
  - [Digital 2026: more than 1 billion people use AI](https://datareportal.com/reports/digital-2026-one-billion-people-using-ai)
  - [ChatGPT Statistics 2026: How Many People Use ChatGPT?](https://backlinko.com/chatgpt-stats)
  - [ChatGPT Stats in 2026: 800M Users, Traffic Data & Usage Breakdown](https://www.index.dev/blog/chatgpt-statistics)

### ch17-20: TRUE
- Speaker: Steven Bartlett
- Claim: Karen Hao's book 'Empire of AI' is a New York Times bestseller.
- TLDR: Karen Hao's 'Empire of AI' is confirmed as an instant New York Times bestseller upon its release in May 2025.
- Explanation: Multiple sources, including the publisher Penguin Random House and the Lavin Agency, confirm the book debuted as an instant NYT bestseller. It also received additional accolades including a National Book Critics Circle Award finalist nomination and Best Book of 2025 recognition from several outlets.
- Sources:
  - [The AI Book That Everyone's Talking About: Instant NYT Bestseller Empire of AI by Speaker Karen Hao Is a "Heroic Work" - The Lavin Agency](https://thelavinagency.com/the-ai-book-that-everyones-talking-about-instant-nyt-bestseller-empire-of-ai-by-speaker-karen-hao-is-a-heroic-work/)
  - [Empire of AI by Karen Hao: 9780593657522 | PenguinRandomHouse.com: Books](https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)

### ch7-1: TRUE
- Speaker: Karen Hao
- Claim: AI companies lay claim to data of individuals and intellectual property of artists, writers, and creators in the pursuit of training their models.
- TLDR: AI companies have extensively used data from individuals and copyrighted works from artists, writers, and creators to train models, which is well-documented through dozens of ongoing lawsuits.
- Explanation: Numerous lawsuits from authors, visual artists, news publishers, and music rights holders confirm that AI companies including OpenAI, Meta, Anthropic, and others trained models on copyrighted material without authorization. Cases include The New York Times vs. OpenAI, a $3.1 billion suit by music publishers against Anthropic, and class actions by visual artists against Stability AI. The core assertion that AI companies appropriate individuals' data and creators' intellectual property for model training is widely evidenced.
- Sources:
  - [AI lawsuits explained: Who's getting sued?](https://www.techtarget.com/whatis/feature/AI-lawsuits-explained-Whos-getting-sued)
  - [Copyright and AI: the Cases and the Consequences | Electronic Frontier Foundation](https://www.eff.org/deeplinks/2025/02/copyright-and-ai-cases-and-consequences)
  - [Generative AI Lawsuits Timeline: Legal Cases vs. OpenAI, Microsoft, Anthropic, Google, Nvidia, Perplexity, Salesforce, Apple and More](https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/)
  - [Case Tracker: Artificial Intelligence, Copyrights and Class Actions | BakerHostetler](https://www.bakerlaw.com/services/artificial-intelligence-ai/case-tracker-artificial-intelligence-copyrights-and-class-actions/)

### ch7-2: TRUE
- Speaker: Karen Hao
- Claim: AI companies are land-grabbing to build supercomputer facilities for training next-generation models.
- TLDR: AI companies are demonstrably acquiring vast tracts of land to build massive data center and supercomputer campuses for AI model training.
- Explanation: Multiple major AI companies are engaged in large-scale land acquisition for AI infrastructure: Meta's Hyperion campus covers 2,250 acres in Louisiana, Microsoft holds roughly 1,575 acres in Wisconsin, Google's Fort Wayne campus spans 700+ acres, and OpenAI's Stargate project is planned across up to 16 US states. Hyperscalers collectively plan to spend hundreds of billions annually on these facilities specifically for training next-generation AI models.
- Sources:
  - [The billion-dollar infrastructure deals powering the AI boom | TechCrunch](https://techcrunch.com/2026/02/28/billion-dollar-infrastructure-deals-ai-boom-data-centers-openai-oracle-nvidia-microsoft-google-meta/)
  - [OpenAI's $200B War Chest and the Great AI Infrastructure Land Grab | InvestorPlace](https://investorplace.com/hypergrowthinvesting/2026/02/openais-200b-war-chest-and-the-great-ai-infrastructure-land-grab/)
  - [Where AI Data Centers Are Headed After 2025's Boom | Built In](https://builtin.com/articles/future-of-data-centers-ai)
  - [Building Stargate: Talking to OpenAI about its trillion-dollar data center vision - DCD](https://www.datacenterdynamics.com/en/analysis/openai-building-stargate-nvidia-oracle-chatgpt/)

### ch7-3: TRUE
- Speaker: Karen Hao
- Claim: AI companies contract hundreds of thousands of workers all around the world, including in the US, to make their technologies.
- TLDR: AI companies do contract hundreds of thousands of workers globally, including in the US, primarily through data annotation and labeling firms.
- Explanation: Multiple sources confirm that the AI data annotation industry employs hundreds of thousands of workers worldwide. Major vendors like TELUS International AI (formerly Lionbridge AI) alone have hundreds of thousands of contributors globally. Workers are located across the US, Kenya, India, the Philippines, Venezuela, and elsewhere, contracted via intermediaries such as Scale AI, Sama, Surge AI, and others on behalf of companies like OpenAI, Google, and Meta.
- Sources:
  - [Humans in the AI loop: the data labelers behind some of the most powerful LLMs' training datasets | Privacy International](https://privacyinternational.org/explainer/5357/humans-ai-loop-data-labelers-behind-some-most-powerful-llms-training-datasets)
  - [OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | TIME](https://time.com/6247678/openai-chatgpt-kenya-workers/)
  - [The AI Revolution Comes With the Exploitation of Gig Workers - AlgorithmWatch](https://algorithmwatch.org/en/ai-revolution-exploitation-gig-workers/)
  - [Ghost Workers in the AI Machine | Communications Workers of America](https://cwa-union.org/ghost-workers-ai-machine)
  - [How the AI industry profits from catastrophe | MIT Technology Review](https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/)

### ch7-4: INEXACT
- Speaker: Karen Hao
- Claim: AI companies design their tools to be labor-automating, which erodes labor rights when the technologies are deployed.
- TLDR: AI tools do automate labor and measurably affect worker bargaining power, but attributing this to deliberate design intent is an interpretive analytical argument, not a directly documented fact.
- Explanation: Karen Hao's claim draws on MIT economists Acemoglu and Johnson's academic work, which argues labor-automating technology reflects choices by those in power rather than inevitability. Empirically, AI-related job displacement is well-documented (77,999 AI-attributed tech layoffs in early 2025, a 13% drop in routine job postings after ChatGPT's launch), and the specter of automation is shown to reduce workers' bargaining leverage even before full deployment. However, framing this as deliberate political design to erode rights, rather than emergent commercial incentives, is a contested interpretive conclusion that lacks direct documentary proof of intent.
- Sources:
  - [Empire of AI: Karen Hao on How AI Is Threatening Democracy and Creating a New Colonial World | Democracy Now!](https://www.democracynow.org/2026/1/1/empire_of_ai_karen_hao_on)
  - [Dismantling the Empire of AI with Karen Hao](https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with)
  - [Research: How AI Is Changing the Labor Market](https://hbr.org/2026/03/research-how-ai-is-changing-the-labor-market)
  - [Evaluating the Impact of AI on the Labor Market: Current State of Affairs | The Budget Lab at Yale](https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs)

### ch7-5: TRUE
- Speaker: Karen Hao
- Claim: AI companies have captured the majority of scientists working on understanding the limitations and capabilities of AI.
- TLDR: Approximately 70% of AI PhD graduates now work in the private sector, up from 20% two decades ago, confirming industry dominance of AI research talent.
- Explanation: Multiple credible sources (MIT Sloan, Stanford HAI, Brookings, NSCAI) confirm a dramatic shift of AI researchers from academia to industry. Since 2006, industry AI hiring has risen eightfold while academic faculty numbers stayed flat. Industry now produces 96% of the largest AI models and sets 91% of leading benchmarks, giving companies outsized control over the research agenda.
- Sources:
  - [Study: Industry now dominates AI research | MIT Sloan](https://mitsloan.mit.edu/ideas-made-to-matter/study-industry-now-dominates-ai-research)
  - [What should be done about the growing influence of industry in AI research? | Brookings](https://www.brookings.edu/articles/what-should-be-done-about-the-growing-influence-of-industry-in-ai-research/)
  - [The growing influence of industry in AI research (MIT IDE)](https://ide.mit.edu/wp-content/uploads/2023/03/0303PolicyForum_Ai_FF-2.pdf)

### ch7-6: TRUE
- Speaker: Karen Hao
- Claim: The AI industry employs and bankrolls most of the AI researchers in the world.
- TLDR: Multiple authoritative sources confirm industry now dominates AI research employment and funding. Roughly 70% of AI PhDs go to industry, and in most countries over half of AI scientists now work for private companies.
- Explanation: MIT Sloan, Stanford HAI, and a peer-reviewed Science journal study all document a dramatic shift: industry hiring of AI researchers has risen eightfold since 2006 while academic faculty numbers stayed flat. In 2024, nearly 90% of notable AI models came from industry, and industry increasingly sets benchmarks and research agendas through control of compute, data, and talent. The claim that the AI industry employs and bankrolls most AI researchers is well-supported.
- Sources:
  - [Study: Industry now dominates AI research | MIT Sloan](https://mitsloan.mit.edu/ideas-made-to-matter/study-industry-now-dominates-ai-research)
  - [The growing influence of industry in AI research | Science](https://www.science.org/doi/abs/10.1126/science.ade2420)
  - [Research and Development | The 2025 AI Index Report | Stanford HAI](https://hai.stanford.edu/ai-index/2025-ai-index-report/research-and-development)

### ch7-7: TRUE
- Speaker: Karen Hao
- Claim: The AI industry sets the agenda on AI research by funneling money to its priorities, resulting in only certain types of AI research being produced.
- TLDR: This is well-documented. Industry now dominates AI research funding and talent, which shapes the research agenda toward commercially valuable priorities.
- Explanation: Multiple credible sources confirm the claim. A study co-authored by MIT researchers and published in Science found that industry now leads basic AI research, with its benchmarks shaping the field 91% of the time and its models dominating 96% of the time. Roughly 70% of AI PhDs now work in private industry (vs. 20% two decades ago), and corporate funding far exceeds government academic research budgets, directly steering which questions get pursued. Brookings and the AI Now Institute both document how this dynamic crowds out public-interest research that is not commercially profitable.
- Sources:
  - [Study: Industry now dominates AI research | MIT Sloan](https://mitsloan.mit.edu/ideas-made-to-matter/study-industry-now-dominates-ai-research)
  - [What should be done about the growing influence of industry in AI research? | Brookings](https://www.brookings.edu/articles/what-should-be-done-about-the-growing-influence-of-industry-in-ai-research/)
  - [2: Heads I Win, Tails You Lose: How Tech Companies Have Rigged the AI Market - AI Now Institute](https://ainowinstitute.org/publications/2-heads-i-win-tails-you-lose-how-tech-companies-have-rigged-the-ai-market)
  - [Why higher ed's AI rush could put corporate interests over public service and independence](https://theconversation.com/why-higher-eds-ai-rush-could-put-corporate-interests-over-public-service-and-independence-260902)

### ch7-8: TRUE
- Speaker: Karen Hao
- Claim: AI companies censor researchers when they do not like what the researcher has found.
- TLDR: The Timnit Gebru case at Google is a well-documented example of an AI company suppressing research it found unfavorable. She was forced out after refusing to retract a critical paper on large language models.
- Explanation: In 2020, Google's Ethical AI co-lead Dr. Timnit Gebru was pushed out after co-authoring a paper on harms of large language models that Google asked her to retract or remove employee names from. Over 2,700 Google employees and 4,300 academics signed a petition calling it 'unprecedented research censorship.' Multiple credible outlets (MIT Technology Review, Washington Post, NPR) reported the incident in detail, supporting the broader claim that AI companies have censored researchers over unfavorable findings.
- Sources:
  - [Google hired Timnit Gebru to be an outspoken critic of unethical AI. Then she was fired for it.](https://www.washingtonpost.com/technology/2020/12/23/google-timnit-gebru-ai-ethics/)
  - [We read the paper that forced Timnit Gebru out of Google. Here's what it says. | MIT Technology Review](https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/)
  - [Google Employees Call Black Scientist's Ouster 'Unprecedented Research Censorship'](https://www.npr.org/2020/12/03/942417780/google-employees-say-scientists-ouster-was-unprecedented-research-censorship)
  - [Timnit Gebru - Wikipedia](https://en.wikipedia.org/wiki/Timnit_Gebru)

### ch7-9: TRUE
- Speaker: Karen Hao
- Claim: Dr. Timnit Gebru was the ethical AI team co-lead at Google, hired to critique the types of AI systems Google was building.
- TLDR: Timnit Gebru was indeed the co-lead of Google's Ethical AI team, recruited to help identify and address societal harms in Google's AI products.
- Explanation: Multiple sources, including Wikipedia and major news outlets, confirm Gebru served as co-lead of Google's Ethical AI team from 2018 to 2020. She was explicitly recruited to help ensure Google's AI products did not perpetuate societal harms, consistent with the claim that she was hired to critique AI systems. She co-led the team alongside Margaret Mitchell, also as described.
- Sources:
  - [Timnit Gebru - Wikipedia](https://en.wikipedia.org/wiki/Timnit_Gebru)
  - [We read the paper that forced Timnit Gebru out of Google. Here's what it says. | MIT Technology Review](https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/)
  - [Timnit Gebru was critical of Google's approach to ethical AI - The Washington Post](https://www.washingtonpost.com/technology/2020/12/23/google-timnit-gebru-ai-ethics/)

### ch7-10: TRUE
- Speaker: Karen Hao
- Claim: Timnit Gebru co-wrote a critical research paper showing how large language models specifically were leading to certain types of harmful outcomes.
- TLDR: Gebru did co-write 'On the Dangers of Stochastic Parrots' (2020), which specifically examined harmful outcomes of large language models and preceded her firing from Google.
- Explanation: The paper, co-authored with Emily Bender and others, outlined risks of large language models including encoded bias, environmental costs, and real-world harms. Google management asked Gebru to withdraw it before publication, and her employment was terminated in December 2020. The claim accurately describes both the paper's focus and the circumstances of her dismissal.
- Sources:
  - [We read the paper that forced Timnit Gebru out of Google. Here's what it says. | MIT Technology Review](https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/)
  - [Timnit Gebru - Wikipedia](https://en.wikipedia.org/wiki/Timnit_Gebru)

### ch7-11: INEXACT
- Speaker: Karen Hao
- Claim: Google fired Timnit Gebru in an attempt to stop her critical research paper from being published.
- TLDR: The firing was connected to the paper controversy, but Google's demand was to retract it OR remove Google employees' names, not solely to stop publication. The paper was ultimately published.
- Explanation: In December 2020, Google asked Gebru to either withdraw the paper 'On the Dangers of Stochastic Parrots' or remove the names of all Google-employed co-authors. When she declined to comply unless certain conditions were met, Google terminated her, claiming it was accepting her resignation. The firing was widely understood as an attempt to suppress the research or distance Google from it, but the goal was not exclusively to prevent publication outright. The paper was ultimately published at FAccT 2021.
- Sources:
  - [We read the paper that forced Timnit Gebru out of Google. Here's what it says. | MIT Technology Review](https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/)
  - [Timnit Gebru - Wikipedia](https://en.wikipedia.org/wiki/Timnit_Gebru)
  - [Stochastic parrot - Wikipedia](https://en.wikipedia.org/wiki/Stochastic_parrot)

### ch7-12: TRUE
- Speaker: Karen Hao
- Claim: Google fired Gebru's co-lead, Margaret Mitchell, as well.
- TLDR: Google did fire Margaret Mitchell, co-lead of its Ethical AI team, in February 2021, shortly after firing Timnit Gebru in December 2020.
- Explanation: Multiple major outlets (TechCrunch, Axios, VentureBeat, CNN) confirm Google fired Margaret Mitchell on February 19, 2021, following the earlier dismissal of her co-lead Timnit Gebru. Google cited code-of-conduct violations, but critics argued both firings were acts of retaliation against ethical AI research inconvenient to the company.
- Sources:
  - [Google fires top AI ethics researcher Margaret Mitchell](https://techcrunch.com/2021/02/19/google-fires-top-ai-ethics-researcher-margaret-mitchell/)
  - [Google fires another AI ethics leader](https://www.axios.com/2021/02/19/google-fires-another-ai-ethics-leader)
  - [Margaret Mitchell (scientist) - Wikipedia](https://en.wikipedia.org/wiki/Margaret_Mitchell_(scientist))

### ch7-13: TRUE
- Speaker: Karen Hao
- Claim: OpenAI subpoenaed some of its critics, in what appeared to be a campaign of intimidation and an effort to map out its network of critics.
- TLDR: OpenAI did subpoena multiple watchdog nonprofits critical of its for-profit conversion, with several recipients publicly accusing the company of intimidation tactics.
- Explanation: Multiple credible outlets (NBC News, Fortune, SF Standard) reported that at least seven nonprofit groups critical of OpenAI received broad subpoenas, framed as part of OpenAI's litigation against Elon Musk. Recipients described the subpoenas as far exceeding legitimate legal discovery and as an attempt to map their networks, funders, and communications. The claim's characterization of the subpoenas as both intimidation and network-mapping is consistent with the documented evidence.
- Sources:
  - [OpenAI accused of using subpoenas to silence nonprofits](https://www.nbcnews.com/tech/tech-news/openai-chatgpt-accused-using-subpoenas-silence-nonprofits-rcna237348)
  - [A 3-person policy nonprofit that worked on California's AI safety law is publicly accusing OpenAI of intimidation tactics | Fortune](https://fortune.com/2025/10/10/a-3-person-policy-non-profit-that-worked-on-californias-ai-safety-law-is-publicly-accusing-openai-of-intimidation-tactics/)
  - [OpenAI targets another nonprofit in surging campaign against critics](https://sfstandard.com/2025/09/03/openai-midas-project-elon-musk-subpoena/)
  - [OpenAI Representatives Are Going to Critics' Houses With Threats and Demands](https://futurism.com/artificial-intelligence/openai-critics-houses)

### ch7-14: TRUE
- Speaker: Karen Hao
- Claim: The individual served papers by OpenAI ran a small watchdog nonprofit that had been asking questions about OpenAI's attempt to convert from a nonprofit to a for-profit.
- TLDR: OpenAI subpoenaed Tyler Johnston, founder of The Midas Project, a small watchdog nonprofit that had been challenging OpenAI's nonprofit-to-for-profit conversion.
- Explanation: Multiple sources confirm that The Midas Project, founded by Tyler Johnston, is a small watchdog nonprofit that filed an IRS complaint against OpenAI and actively questioned its restructuring plans. OpenAI served Johnston's organization a subpoena as part of its litigation against Elon Musk, which critics characterized as intimidation of civil society groups opposing the conversion.
- Sources:
  - [OpenAI subpoenas another nonprofit opposed to its restructuring](https://sfstandard.com/2025/09/03/openai-midas-project-elon-musk-subpoena/)
  - [OpenAI accused of using subpoenas to silence nonprofits](https://www.nbcnews.com/tech/tech-news/openai-chatgpt-accused-using-subpoenas-silence-nonprofits-rcna237348)
  - [OpenAI's Subpoenas Target Nonprofits: The Midas Project Founder Tyler Johnston Exposes AI Industry's Aggressive Tactics Against Critics - BizTech Weekly](https://biztechweekly.com/openais-subpoenas-target-nonprofits-the-midas-project-founder-tyler-johnston-exposes-ai-industrys-aggressive-tactics-against-critics/)

### ch7-15: TRUE
- Speaker: Karen Hao
- Claim: OpenAI was ultimately successful in its conversion from a nonprofit to a for-profit.
- TLDR: OpenAI completed its for-profit restructuring in October 2025, becoming a public benefit corporation while the nonprofit retained a significant stake.
- Explanation: Multiple major outlets confirmed that OpenAI finalized its recapitalization on October 28, 2025, converting into a for-profit public benefit corporation nested within the OpenAI Foundation nonprofit. Attorneys general from California and Delaware allowed the process to proceed after securing concessions. The claim that the conversion was ultimately successful is accurate.
- Sources:
  - [OpenAI completes its for-profit recapitalization | TechCrunch](https://techcrunch.com/2025/10/28/openai-completes-its-for-profit-recapitalization/)
  - [OpenAI completes for-profit restructuring and grants Microsoft a 27% stake in the company | Fortune](https://fortune.com/2025/10/28/openai-for-profit-restructuring-microsoft-stake/)

### ch7-16: TRUE
- Speaker: Karen Hao
- Claim: Civil society and watchdog groups including Midas were trying to prevent OpenAI's nonprofit-to-for-profit conversion from happening without public debate or transparency.
- TLDR: The Midas Project is a real AI watchdog nonprofit that, along with many other civil society groups, actively opposed OpenAI's conversion and demanded more transparency and public debate.
- Explanation: The Midas Project filed an IRS complaint against OpenAI, co-produced a research report called 'The OpenAI Files,' and was subsequently subpoenaed by OpenAI during the conversion process. It was part of a broader coalition including EyesOnOpenAI (60+ organizations), Not For Private Gain, and others who collectively called on state attorneys general to halt or scrutinize the restructuring. The claim accurately describes their role in seeking transparency and public debate.
- Sources:
  - [OpenAI targets another nonprofit in surging campaign against critics](https://sfstandard.com/2025/09/03/openai-midas-project-elon-musk-subpoena/)
  - [The Midas Project Statement on OpenAI's Restructuring | The Midas Project](https://www.themidasproject.com/article-list/the-midas-project-statement-on-openai-s-restructuring)
  - [Coalition Challenges OpenAI's Nonprofit Governance | Nonprofit Quarterly](https://nonprofitquarterly.org/coalition-challenges-openais-nonprofit-governance/)

### ch7-17: INEXACT
- Speaker: Karen Hao
- Claim: The subpoena asked the watchdog group leader to reproduce every piece of communication that might have involved Musk.
- TLDR: The subpoena did request Musk-related communications, but was significantly broader than described, also covering Zuckerberg communications, all donor identities and amounts, and OpenAI governance documents.
- Explanation: OpenAI's subpoena to The Midas Project and its leader Tyler Johnston did ask for communications with Musk or affiliated entities. However, it also demanded communications involving Meta CEO Mark Zuckerberg, the identities and donation amounts of all the nonprofit's funders, and any documents about OpenAI's governance structure. The claim accurately captures the Musk angle but omits the full breadth of the requests.
- Sources:
  - [OpenAI subpoenas another nonprofit opposed to its restructuring](https://sfstandard.com/2025/09/03/openai-midas-project-elon-musk-subpoena/)
  - [OpenAI accused of using subpoenas to silence nonprofits](https://www.nbcnews.com/tech/tech-news/openai-chatgpt-accused-using-subpoenas-silence-nonprofits-rcna237348)

### ch7-18: TRUE
- Speaker: Karen Hao
- Claim: OpenAI believed Musk was funding watchdog groups to block its nonprofit-to-for-profit conversion.
- TLDR: OpenAI did subpoena multiple nonprofit watchdog groups, explicitly accusing them of being funded by Musk to block its nonprofit-to-for-profit conversion. All groups denied any Musk ties.
- Explanation: OpenAI issued subpoenas to at least seven nonprofit groups opposing its restructuring, specifically asking for communications with Musk and records of any funding from him. OpenAI's Chief Strategy Officer publicly stated the groups may have been coordinating with Musk. Every targeted organization denied receiving Musk funding or communicating with him, consistent with Hao's framing of OpenAI's suspicion as unfounded paranoia.
- Sources:
  - [OpenAI subpoenas another nonprofit opposed to its restructuring](https://sfstandard.com/2025/09/03/openai-midas-project-elon-musk-subpoena/)
  - [OpenAI accused of using subpoenas to silence nonprofits](https://www.nbcnews.com/tech/tech-news/openai-chatgpt-accused-using-subpoenas-silence-nonprofits-rcna237348)
  - [OpenAI thinks its critics are funded by billionaires. Now it's going after them](https://sfstandard.com/2025/09/02/openai-sam-altman-elon-musk-ai-regulation/)

### ch7-19: TRUE
- Speaker: Karen Hao
- Claim: None of the watchdog groups were actually funded by Musk.
- TLDR: Multiple investigations confirmed none of the subpoenaed watchdog groups were actually funded by Musk. The groups explicitly denied it, and the California FPPC dismissed OpenAI's complaint for insufficient evidence.
- Explanation: OpenAI subpoenaed several nonprofit watchdog groups opposing its for-profit conversion, alleging Musk connections. Groups including Ekō, LASST, CANI, and Encode all explicitly denied being funded by Musk.
One tenuous link existed for Encode (which received money from the Future of Life Institute, where Musk is an advisor), but no direct Musk funding was confirmed for any group. The California Fair Political Practices Commission dismissed OpenAI's complaint against CANI for lack of evidence.
- Sources:
  - [OpenAI thinks its critics are funded by billionaires. Now it's going after them](https://sfstandard.com/2025/09/02/openai-sam-altman-elon-musk-ai-regulation/)
  - [OpenAI targets another nonprofit in surging campaign against critics](https://sfstandard.com/2025/09/03/openai-midas-project-elon-musk-subpoena/)
  - [OpenAI accused of using legal tactics to silence nonprofits](https://www.nbcnews.com/tech/tech-news/openai-chatgpt-accused-using-subpoenas-silence-nonprofits-rcna237348)

### ch7-20: TRUE
- Speaker: Karen Hao
- Claim: In its early days, OpenAI evoked Google as the rival that had to be beaten first in AI development, framing Google as a profit-driven threat.
- TLDR: OpenAI was indeed founded in direct rivalry with Google, framing it as a profit-driven threat to be beaten in AI development.
- Explanation: Multiple sources confirm OpenAI was created specifically to counter Google's dominance in AI, with Musk and Altman positioning it as a benevolent nonprofit alternative to Google's commercial approach. Karen Hao's own book 'Empire of AI' documents this good-empire vs. evil-empire narrative, with Google as the original foil before China took on that role.
- Sources:
  - [The messy, secretive reality behind OpenAI's bid to save the world | MIT Technology Review](https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [The secret history of Elon Musk, Sam Altman, and OpenAI | Semafor](https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai)

### ch7-21: INEXACT
- Speaker: Karen Hao
- Claim: OpenAI originally described itself as a benevolent nonprofit, contrasting itself with Google as an evil corporation driven by profit.
- TLDR: OpenAI did position itself as a benevolent nonprofit counterweight to Google, but calling Google an 'evil corporation' is Hao's interpretive framing, not OpenAI's documented language.
- Explanation: OpenAI was founded in 2015 as a nonprofit explicitly to serve as a counterweight to profit-driven tech giants like Google, with a stated mission of 'advancing digital intelligence in the way most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.' The benevolent nonprofit vs. profit-driven Google contrast is well-documented, and Musk himself cited distrust of Google as a key motivator. However, the 'evil corporation' label is Hao's paraphrase of the internal narrative, not language OpenAI itself demonstrably used in its public framing.
- Sources:
  - [The messy, secretive reality behind OpenAI's bid to save the world | MIT Technology Review](https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/)
  - [The secret history of Elon Musk, Sam Altman, and OpenAI | Semafor](https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai)
  - [Altman and Musk launched OpenAI as a nonprofit 10 years ago.
Now they're rivals in a trillion-dollar market](https://www.cnbc.com/2025/12/11/openai-began-decade-ago-as-nonprofit-lab-musk-and-altman-now-rivals.html)

### ch7-22: TRUE
- Speaker: Karen Hao
- Claim: Sam Altman has publicly stated that the worst case scenario of AI is lights out for everyone, while the best case includes curing cancer, solving climate change, and achieving abundance.
- TLDR: Altman has publicly stated both the 'lights out for all of us' worst case and best-case visions including curing cancer, fixing climate change, and abundance.
- Explanation: The 'lights out for all of us' quote is confirmed from a StrictlyVC interview. Altman's best-case scenarios covering cancer cures, fixing climate change, and 'unbelievable abundance' are documented across multiple public statements including his blog posts 'Abundant Intelligence' and 'The Intelligence Age.' Karen Hao's characterization accurately reflects Altman's publicly stated positions, though these elements come from different statements rather than a single speech.
- Sources:
  - [Sam Altman Says The Worst-Case Scenario For Artificial Intelligence Is 'Lights Out For All Of Us'](https://wonderfulengineering.com/sam-altman-says-the-worst-case-scenario-for-artificial-intelligence-is-lights-out-for-all-of-us/)
  - [Sam Altman, the maker of ChatGPT, says the A.I. future is both awesome and terrifying. If it goes badly: 'It's lights-out for all of us'](https://finance.yahoo.com/news/sam-altman-maker-chatgpt-says-110000987.html)
  - [Abundant Intelligence - Sam Altman](https://blog.samaltman.com/abundant-intelligence)
  - [Sam Altman, the man behind ChatGPT, is increasingly alarmed about what he unleashed. Here are 15 quotes charting his descent into sleepless panic | Fortune](https://fortune.com/2023/06/08/sam-altman-openai-chatgpt-worries-15-quotes/)

### ch7-23: TRUE
- Speaker: Karen Hao
- Claim: Dario Amodei has publicly used rhetoric describing the worst case of AI as catastrophic or existential harm for humanity and the best case as mass human flourishing.
- TLDR: Dario Amodei has publicly and repeatedly used exactly this dual framing: existential/catastrophic worst-case risk alongside a best-case vision of human flourishing.
- Explanation: Amodei's essay 'Machines of Loving Grace' lays out the positive best-case vision for AI benefiting humanity, while his 'Adolescence of Technology' essay and public statements (e.g., at the Axios AI+DC Summit) warn of a 10-25% chance of catastrophic or existential outcomes. He has explicitly stated 'the combination of intelligence, agency, coherence, and poor controllability is...a recipe for existential danger.' Hao's paraphrase of his rhetoric is accurate.
- Sources:
  - [Dario Amodei — Machines of Loving Grace](https://darioamodei.com/essay/machines-of-loving-grace)
  - [Dario Amodei — The Adolescence of Technology](https://www.darioamodei.com/essay/the-adolescence-of-technology)
  - [Anthropic CEO Raises Alarm on 25% Risk of Catastrophic AI Developments | Censinet, Inc.](https://censinet.com/perspectives/anthropic-ceo-raises-alarm-on-25-risk-of-catastrophic-ai-developments)
  - [Anthropic CEO Warns of Existential AI Risks and Imminent Superhuman Capabilities - OECD.AI](https://oecd.ai/en/incidents/2026-02-19-8840)

### ch7-24: TRUE
- Speaker: Steven Bartlett
- Claim: Sam Altman tweeted that books were coming out about OpenAI and him, and that OpenAI only participated in two of them.
- TLDR: Sam Altman did post this tweet, stating OpenAI only participated in two books: one by Keach Hagey and one by Ashlee Vance.
- Explanation: Altman's tweet (posted April 4, 2025) confirms the claim verbatim. The names 'Keetsch Hagee' and 'Ashley Vance' in the transcript are auto-transcription errors for 'Keach Hagey' and 'Ashlee Vance.' The quoted follow-up about 'twisting things' also matches the actual tweet.
- Sources:
  - [Sam Altman on X: "there are some books coming out about openai and me..."](https://x.com/sama/status/1908163013192069460?lang=en)

### ch7-25: TRUE
- Speaker: Steven Bartlett
- Claim: Altman's tweet identified one of the two books OpenAI participated in as being by Ashley Vance and focused on OpenAI.
- TLDR: Sam Altman's tweet did name Ashlee Vance as the author of one of the two books OpenAI cooperated with, and that book focuses on OpenAI.
- Explanation: Multiple sources confirm Altman posted about two books OpenAI participated in: one by Keach Hagey about him personally, and one by Ashlee Vance about OpenAI. The transcript renders 'Ashlee' as 'Ashley,' which is an auto-transcription artifact, not a speaker error. The substance of the claim is accurate.
- Sources:
  - [Headline nabs new book on Open AI and Altman](https://www.thebookseller.com/rights/headline-nabs-new-book-on-open-ai-and-altman)
  - [Sam Altman - X](https://x.com/sama/status/1908163013192069460)
  - [Sam Altman: Two books about personal experience and OpenAI will be published next year](https://followin.io/en/feed/17244060)

### ch7-26: TRUE
- Speaker: Steven Bartlett
- Claim: Karen Hao quote-retweeted Altman's tweet, identifying her book Empire of AI as the unnamed book Altman was referencing.
- TLDR: Karen Hao did publicly quote-retweet Altman's tweet, stating 'The unnamed book, Empire of AI, is mine.'
- Explanation: Multiple sources confirm that Sam Altman tweeted endorsing two books about OpenAI while implicitly leaving a third unnamed. Karen Hao responded with a quote-retweet identifying her book 'Empire of AI' as the unnamed one. This is corroborated by transcripts of the podcast, the Wikipedia article on the book, and other reporting.
- Sources:
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Diary Of A CEO: w/ AI Critic Karen Hao on Empires of AI (Transcript)](https://singjupost.com/diary-of-a-ceo-w-ai-critic-karen-hao-on-empires-of-ai-transcript/)
  - [Karen Hao on Her New Book About OpenAI - Puck](https://puck.news/karen-hao-on-her-new-book-about-open-ai/)

### ch9-1: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: Karen Hao sourced her account of Altman's firing from around 6 or 7 people who were directly involved in the decision or had spoken with those directly involved.
- TLDR: Karen Hao's claim about using 6-7 sources specifically for the Altman firing account is her own description of her methodology, which cannot be independently confirmed.
- Explanation: No external source specifies how many people Hao interviewed for the scene-by-scene Altman firing account. Her book overall draws on roughly 260 sources across 300+ interviews, and reviews note detailed endnotes backing the firing chapter, but the precise breakdown for that specific section is not publicly documented anywhere outside the podcast itself.
- Sources:
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Journalist Karen Hao discusses her book 'Empire of AI' : NPR](https://www.npr.org/2025/05/20/nx-s1-5334670/journalist-karen-hao-discusses-her-book-empire-of-ai)
  - [Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI: Hao, Karen: 9780593657508: Amazon.com: Books](https://www.amazon.com/Empire-AI-Dreams-Nightmares-Altmans/dp/0593657500)

### ch9-2: TRUE
- Speaker: Karen Hao
- Claim: Ilya Sutskever had concerns that Altman's behavior was leading to bad research outcomes and poor decision-making at OpenAI.
- TLDR: Confirmed. Sutskever's concerns about Altman creating internal chaos, skipping safety checks, and undermining research are well-documented across multiple sources.
- Explanation: Karen Hao's reporting in 'Empire of AI,' based on interviews with dozens of insiders, describes Sutskever as alarmed that Altman was sowing division among teams, skipping safety checks, and telling different things to different people, all of which Sutskever felt was actively undermining both AI safety and research quality. Sutskever's own 2025 deposition corroborates a long-held, documented pattern of concerns about Altman's leadership behavior. These accounts are consistent across independent reporting from sources like Gizmodo, Wikipedia, and WinBuzzer.
- Sources:
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)
  - [Ilya Sutskever Deposition Reveals How Sam Altman's 2023 Firing Was Planned for Over a Year - WinBuzzer](https://winbuzzer.com/2025/11/03/ilya-sutskever-deposition-reveals-how-sam-altmans-2023-firing-was-planned-for-over-a-year-xcxwbn/)
  - [Former OpenAI Exec Explains Why He Tried to Do a Coup Against Sam Altman](https://gizmodo.com/former-openai-exec-explains-why-he-tried-to-do-a-coup-against-sam-altman-2000680769)
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)

### ch9-3: TRUE
- Speaker: Karen Hao
- Claim: Ilya Sutskever approached board member Helen Toner to sound her out about concerns regarding Altman's leadership.
- TLDR: Karen Hao's own book 'Empire of AI' documents Sutskever using Toner as a sounding board about Altman before the formal ouster process began.
- Explanation: According to Karen Hao's reporting in 'Empire of AI,' Sutskever approached Helen Toner as an independent board member to gauge whether she shared his concerns about Altman's conduct and leadership. This initial contact preceded the more formal steps, including the 52-page memo Sutskever later sent to all three independent directors. Multiple sources corroborate that Sutskever and Toner were central figures in the plot to remove Altman.
- Sources:
  - [Empire of AI Summary of Key Ideas and Review | Karen Hao - Blinkist](https://www.blinkist.com/en/books/empire-of-ai-en)
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)
  - [Inside the Deposition That Showed How OpenAI Nearly Destroyed Itself - Decrypt](https://decrypt.co/347349/inside-deposition-showed-openai-nearly-destroyed-itself)

### ch9-4: TRUE
- Speaker: Steven Bartlett
- Claim: Ilya Sutskever is a co-founder of OpenAI.
- TLDR: Ilya Sutskever is indeed a co-founder of OpenAI. He left Google in late 2015 to co-found the organization alongside Sam Altman and others.
- Explanation: Multiple authoritative sources, including Wikipedia, TechCrunch, Time, and CNBC, confirm that Ilya Sutskever co-founded OpenAI in 2015 and served as its chief scientist until his departure in May 2024. The claim is accurate.
- Sources:
  - [Ilya Sutskever - Wikipedia](https://en.wikipedia.org/wiki/Ilya_Sutskever)
  - [Ilya Sutskever, OpenAI co-founder and longtime chief scientist, departs | TechCrunch](https://techcrunch.com/2024/05/14/ilya-sutskever-openai-co-founder-and-longtime-chief-scientist-departs/)
  - [OpenAI's Co-Founder and Chief Scientist Ilya Sutskever...](https://time.com/6978195/ilya-sutskever-leaves-open-ai/)

### ch9-5: TRUE
- Speaker: Karen Hao
- Claim: Helen Toner was an independent board member at OpenAI.
- TLDR: Helen Toner was indeed an independent board member at OpenAI, with no financial stake in the company.
- Explanation: Multiple sources confirm Toner joined the OpenAI nonprofit board in 2021 as an outside, non-employee member with no financial interest in the company. The board was explicitly structured to distinguish between independent members and those with a financial stake, matching Karen Hao's description exactly.
- Sources:
  - [Helen Toner - Wikipedia](https://en.wikipedia.org/wiki/Helen_Toner)
  - [Who's on the OpenAI board — the group behind Sam Altman's ouster](https://www.cnbc.com/2023/11/18/heres-whos-on-openais-board-the-group-behind-sam-altmans-ouster.html)
  - [Former OpenAI board member tells all about Altman's ousting | CIO](https://www.cio.com/article/2130365/former-openai-board-member-tells-all-about-altmans-ousting.html)

### ch9-6: TRUE
- Speaker: Karen Hao
- Claim: OpenAI's board was split between members with a financial stake in the company and fully independent members.
- TLDR: OpenAI's nonprofit board was indeed split between members with a financial stake in the for-profit subsidiary and fully independent members without one.
- Explanation: Wikipedia confirms that a majority of OpenAI's nonprofit board was barred from holding financial stakes in OpenAI Global LLC, while minority members with such stakes were barred from certain conflict-of-interest votes. This two-tier design was intended to keep decision-making aligned with the public interest mission rather than for-profit goals, matching exactly what Karen Hao describes.
- Sources:
  - [OpenAI - Wikipedia](https://en.wikipedia.org/wiki/OpenAI)
  - [Our structure | OpenAI](https://openai.com/our-structure/)

### ch9-7: TRUE
- Speaker: Karen Hao
- Claim: OpenAI's board structure was designed to balance decision-making in the public interest rather than in the interest of the for-profit entity OpenAI created.
- TLDR: OpenAI's original governance structure explicitly placed a nonprofit board above its for-profit subsidiary to prioritize public interest over commercial returns.
- Explanation: OpenAI was founded as a nonprofit, then created a capped-profit subsidiary while keeping the nonprofit board in control. That board's explicit mandate was to ensure decisions served the public interest and the mission of safe AGI, not the financial interests of the for-profit arm. Multiple sources, including OpenAI's own governance pages and legal analyses, confirm this was the intended design.
- Sources:
  - [Our structure | OpenAI](https://openai.com/our-structure/)
  - [Why OpenAI's Corporate Structure Matters to AI Development | Lawfare](https://www.lawfaremedia.org/article/why-openai-s-corporate-structure-matters-to-ai-development)
  - [OpenAI is a nonprofit-corporate hybrid: A management expert explains how this model works − and how it fueled the tumult around CEO Sam Altman's short-lived ouster](https://theconversation.com/openai-is-a-nonprofit-corporate-hybrid-a-management-expert-explains-how-this-model-works-and-how-it-fueled-the-tumult-around-ceo-sam-altmans-short-lived-ouster-218340)

### ch9-8: TRUE
- Speaker: Karen Hao
- Claim: Mira Murati was the chief technology officer of OpenAI at the time of Altman's firing.
- TLDR: Mira Murati was indeed OpenAI's CTO when Altman was fired in November 2023.
- Explanation: Multiple sources confirm Murati held the CTO role at OpenAI at the time of Altman's dismissal in November 2023. She was even briefly named interim CEO before Altman was reinstated, after which she returned to her CTO position until her own departure in September 2024.
- Sources:
  - [Mira Murati - Wikipedia](https://en.wikipedia.org/wiki/Mira_Murati)
  - [Sam Altman out at OpenAI, CTO Mira Murati to take over](https://www.fastcompany.com/90985360/sam-altman-out-at-openai-cto-mira-murati-to-take-over)
  - [Who is Mira Murati, OpenAI's new interim CEO?](https://techcrunch.com/2023/11/17/who-is-mira-murati-openais-new-interim-ceo/)

### ch9-9: TRUE
- Speaker: Karen Hao
- Claim: Ilya Sutskever and Mira Murati presented concerns about Altman's leadership to three independent board members, using email and Slack messages as supporting documentation.
- TLDR: Multiple sources confirm Sutskever and Murati brought concerns about Altman to the three independent board members, backed by Slack screenshots and emails.
- Explanation: Reporting on Karen Hao's book and Sutskever's deposition confirm he compiled a 52-page brief sent via disappearing email to the three independent directors (Helen Toner, Tasha McCauley, Adam D'Angelo), while Murati supplied Slack screenshots as supporting evidence. The claim accurately describes the key actors, the recipient board members, and the documentary evidence used.
- Sources:
  - [Sam Altman firing drama detailed in new book excerpt | TechCrunch](https://techcrunch.com/2025/03/29/sam-altman-firing-drama-detailed-in-new-book-excerpt/)
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)
  - [Inside the Deposition That Showed How OpenAI Nearly Destroyed Itself - Decrypt](https://decrypt.co/347349/inside-deposition-showed-openai-nearly-destroyed-itself)
  - [What Really Happened When OpenAI Turned on Sam Altman - The Atlantic](https://www.theatlantic.com/technology/archive/2025/05/karen-hao-empire-of-ai-excerpt/682798/)

### ch9-10: TRUE
- Speaker: Karen Hao
- Claim: The executives argued to the board that Altman was pitting teams against each other, creating an environment where people were unable to trust each other and were competing rather than collaborating.
- TLDR: Corroborated by Ilya Sutskever's sworn deposition, which revealed his 52-page memo to independent board members explicitly accused Altman of 'pitting his execs against one another.'
- Explanation: Sutskever's deposition in Elon Musk's lawsuit against OpenAI confirmed he submitted a 52-page memo to the independent directors (D'Angelo, Toner, McCauley) arguing Altman showed 'a consistent pattern of lying, undermining his execs, and pitting his execs against one another.' The memo even included a dedicated section titled 'Pitting People Against Each Other,' with Mira Murati as a key source of these allegations. Hao's account accurately reflects this documented case made to the board.
- Sources:
  - [Inside the Deposition That Showed How OpenAI Nearly Destroyed Itself - Decrypt](https://decrypt.co/347349/inside-deposition-showed-openai-nearly-destroyed-itself)
  - [Ilya Sutskever Deposition Reveals How Sam Altman's 2023 Firing Was Planned for Over a Year - WinBuzzer](https://winbuzzer.com/2025/11/03/ilya-sutskever-deposition-reveals-how-sam-altmans-2023-firing-was-planned-for-over-a-year-xcxwbn/)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)

### ch9-11: TRUE
- Speaker: Karen Hao
- Claim: When ChatGPT launched, OpenAI was wholly unprepared and did not believe they were releasing a blockbuster product.
- TLDR: OpenAI launched ChatGPT as a modest research preview and was genuinely surprised by its explosive success. Sam Altman himself confirmed this publicly.
- Explanation: Sam Altman wrote that OpenAI didn't know what would kick off the AI revolution and was surprised it turned out to be ChatGPT. The product was built as a demo using GPT-3.5 to gather user feedback, not as a flagship release. OpenAI's own board reportedly learned about the launch on Twitter, further confirming the company's lack of coordinated preparation for a major product moment.
- Sources:
  - [Reflections - Sam Altman](https://blog.samaltman.com/reflections)
  - [OpenAI's board learned about ChatGPT's release on Twitter, ex-board member says](https://tech.yahoo.com/ai/articles/openais-board-learned-chatgpts-release-154128341.html)
  - [Introducing ChatGPT | OpenAI](https://openai.com/index/chatgpt/)

### ch9-12: TRUE
- Speaker: Karen Hao
- Claim: OpenAI launched ChatGPT as a research preview intended to generate data to inform their planned GPT-4-powered chatbot, which they believed would be the major product.
- TLDR: ChatGPT launched November 2022 as a free research preview using GPT-3.5, with OpenAI explicitly aiming to gather real-world feedback and data ahead of GPT-4.
- Explanation: OpenAI's own documentation confirms ChatGPT was released as a 'research preview' running on GPT-3.5, with data and feedback collection as a stated goal. The GPT-4 technical report lists a dedicated 'Data flywheel lead' among core contributors, and OpenAI described GPT-3.5 as a 'first test run' of the infrastructure built for GPT-4. GPT-4 was then released in March 2023 as the substantially more capable follow-on product, consistent with Hao's account that OpenAI expected it to be the bigger launch.
- Sources:
  - [Introducing ChatGPT | OpenAI](https://openai.com/index/chatgpt/)
  - [GPT-4 contributions | OpenAI](https://openai.com/contributions/gpt-4/)
  - [While anticipation builds for GPT-4, OpenAI quietly releases GPT-3.5 | TechCrunch](https://techcrunch.com/2022/12/01/while-anticipation-builds-for-gpt-4-openai-quietly-releases-gpt-3-5/)

### ch9-13: TRUE
- Speaker: Karen Hao
- Claim: ChatGPT was built on GPT-3.5, while the major product OpenAI was working toward was a chatbot based on GPT-4.
- TLDR: ChatGPT did launch on GPT-3.5 in November 2022 as a research preview, with GPT-4 following in March 2023 as the more capable model OpenAI had been developing.
- Explanation: OpenAI's November 2022 announcement described ChatGPT as part of their 'iterative deployment' of AI systems, consistent with it being a research preview rather than a flagship launch. GPT-4 arrived in March 2023 as a significantly more capable model, corroborating the claim that the GPT-4-based chatbot was OpenAI's intended major product. Multiple sources confirm the GPT-3.5 vs GPT-4 distinction in the claim.
- Sources:
  - [Introducing ChatGPT | OpenAI](https://openai.com/index/chatgpt/)
  - [ChatGPT - Wikipedia](https://en.wikipedia.org/wiki/ChatGPT)

### ch9-14: UNSUBSTANTIATED
- Speaker: Karen Hao
- Claim: After ChatGPT's unexpected success, OpenAI had to scale its infrastructure faster than any company in history.
- TLDR: ChatGPT's growth was historically fast, but the specific claim that OpenAI scaled infrastructure faster than any company in history has no verifiable source.
- Explanation: Evidence confirms ChatGPT reached 100 million users in two months (the fastest-growing internet app in history), causing real server crashes and outages throughout 2022-2023. However, the superlative 'faster than any company in history' as applied specifically to infrastructure scaling is Karen Hao's characterization, not a measurable or sourced fact. No institutional or industry source has formally verified this comparative claim.
- Sources:
  - [ChatGPT - Wikipedia](https://en.wikipedia.org/wiki/ChatGPT)
  - [OpenAI ChatGPT Outage: Why It Happens and What to Do (2025)](https://www.spurnow.com/en/blogs/openai-chatgpt-outage)
  - [OpenAI Crosses $12 Billion ARR: The 3-Year Sprint That Redefined What's Possible in Scaling Software | SaaStr](https://www.saastr.com/openai-crosses-12-billion-arr-the-3-year-sprint-that-redefined-whats-possible-in-scaling-software/)

### ch9-15: UNSUBSTANTIATED
- Speaker: Karen Hao
- Claim: OpenAI had to hire people faster than any company in history following ChatGPT's launch.
- TLDR: OpenAI's post-ChatGPT hiring was undeniably rapid, but no evidence supports the superlative claim that it was faster than any company in history.
- Explanation: OpenAI grew from roughly 335 employees in 2022 to 770 in 2023 and over 3,500 by late 2024, a notable surge. However, other companies have posted comparable or higher growth rates: Deel grew headcount roughly 1,000% in two years, Microsoft quadrupled in a single year in the 1980s, and Apple added over 95,000 staff in one year. No source establishes OpenAI's hiring pace as a historical record, making the claim an unverified assertion.
- Sources:
  - [How Many People Work at OpenAI? Statistics & Facts (2025)](https://seo.ai/blog/how-many-people-work-at-openai)
  - [The Major American Companies with the Highest Employee Growth](https://switchonbusiness.com/companies-with-the-highest-employee-growth/)
  - [Timeline: Employee Count Growth for Microsoft, Yahoo, Google, and Facebook](https://medium.com/gabor/timeline-employee-count-growth-for-microsoft-yahoo-google-and-facebook-9ede22a37824)

### ch9-16: TRUE
- Speaker: Karen Hao
- Claim: OpenAI fired some newly hired employees, and colleagues typically learned of dismissals when people simply disappeared from Slack.
- TLDR: Confirmed by Karen Hao's book 'Empire of AI.' Employees discovered firings when colleagues' Slack accounts grayed out, a phenomenon internally called 'getting disappeared.'
- Explanation: Reporting on Hao's book documents that OpenAI's rapid post-ChatGPT hiring spree was matched by an uptick in firings. Terminations were rarely communicated to staff, and colleagues found out only when a Slack account became deactivated and grayed out. Employees coined the term 'getting disappeared' for this practice, directly corroborating Hao's account in the podcast.
- Sources:
  - [OpenAI launched ChatGPT and doubled in size. The result was pure chaos.](https://dnyuz.com/2025/05/20/openai-launched-chatgpt-and-doubled-in-size-the-result-was-pure-chaos/)
  - [Inside the wild, chaotic year that turned OpenAI into a corporate juggernaut](https://tech.yahoo.com/articles/openai-launched-chatgpt-doubled-size-083001056.html)

### ch9-17: TRUE
- Speaker: Karen Hao
- Claim: Mira Murati and Ilya Sutskever believed Altman was actively worsening the chaos at OpenAI rather than managing it effectively.
- TLDR: Karen Hao's reporting and book confirm that both Murati and Sutskever independently concluded Altman was creating, not managing, internal chaos at OpenAI.
- Explanation: Multiple sources corroborate the claim. Karen Hao's book 'Empire of AI' states that Murati and Sutskever independently raised alarms and, after comparing notes, Sutskever urged the board to remove Altman. Sutskever's 52-page memo explicitly accused Altman of 'a consistent pattern of lying, undermining his execs, and pitting his execs against one another,' with much of the supporting evidence supplied by Murati herself.
- Sources:
  - [Sutskever deposition details 52-page memo behind Altman ouster](https://www.implicator.ai/sutskever-deposition-details-52-page-memo-behind-altman-ouster/)
  - [Sam Altman firing drama detailed in new book excerpt | TechCrunch](https://techcrunch.com/2025/03/29/sam-altman-firing-drama-detailed-in-new-book-excerpt/)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)

### ch9-18: TRUE
- Speaker: Karen Hao
- Claim: Adam D'Angelo was one of the independent board members at OpenAI and was also the CEO of Quora.
- TLDR: Adam D'Angelo is indeed both an independent OpenAI board member and the CEO of Quora.
- Explanation: D'Angelo joined the OpenAI board in 2018 as an independent member and co-founded Quora, where he serves as CEO. He was one of the four board members who voted to remove Sam Altman in November 2023, and was the only original board member to remain after Altman's return.
- Sources:
  - [Adam D'Angelo - Wikipedia](https://en.wikipedia.org/wiki/Adam_D%27Angelo)
  - [Meet Adam D'Angelo, The OpenAI Board Member Who Brokered Sam Altman's Return - The Messenger](https://themessenger.com/tech/adam-dangelo-openai-sam-altman-return-ceo-board-artificial-intelligence)

### ch9-19: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: Adam D'Angelo heard rumors at a party in San Francisco about irregularities in how the OpenAI Startup Fund had been structured.
- TLDR: The broader facts (D'Angelo on the board, Startup Fund owned by Altman) are confirmed, but the specific 'party in San Francisco' detail is sourced only from Hao's own book and cannot be independently verified.
- Explanation: Multiple credible sources including Helen Toner and reporting from Axios and TechCrunch confirm that Adam D'Angelo was an independent OpenAI board member, that he is CEO of Quora, and that the board discovered the OpenAI Startup Fund was personally owned by Altman rather than by OpenAI.
However, the specific claim that D'Angelo learned about this at a party in San Francisco derives exclusively from Karen Hao's book 'Empire of AI,' and no independent source corroborates that specific detail.
- Sources:
  - [Sam Altman gives up control of OpenAI Startup Fund, resolving unusual corporate venture structure | TechCrunch](https://techcrunch.com/2024/04/01/sam-altman-gives-up-control-of-openai-startup-fund-resolving-unusual-corporate-venture-structure/)
  - [Sam Altman no longer owns OpenAI Startup Fund](https://www.axios.com/2024/04/01/sam-altman-openai-startup-fund)
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)

### ch9-20: TRUE
- Speaker: Karen Hao
- Claim: The independent board members had never received documentation from Altman about how the OpenAI Startup Fund had been set up.
- TLDR: Former board member Helen Toner publicly confirmed Altman never disclosed to the board that he personally owned the OpenAI Startup Fund.
- Explanation: Helen Toner stated in a TED AI Show interview that Altman 'hadn't informed the board that he owned the OpenAI Startup Fund,' despite claiming to be an independent board member with no financial interest in the company. This directly corroborates Hao's account that board members had not received documentation about the fund's structure. The unusual ownership arrangement (Altman personally owning the fund, not OpenAI) was later confirmed by Axios and TechCrunch reporting.
- Sources:
  - [Helen Toner (ex-OpenAI board member): "We learned about ChatGPT on Twitter."](https://chatgptiseatingtheworld.com/2024/05/29/helen-toner-ex-board-member-of-openai-alleges-sam-altman-didnt-inform-the-board-that-he-owned-the-openai-startup-fund-even-though-he-constantly-was-claiming-to-be-an-independent-board-member/)
  - [Former OpenAI board member explains why CEO Sam Altman got fired before he was rehired](https://www.cnbc.com/2024/05/29/former-openai-board-member-explains-why-ceo-sam-altman-was-fired.html)
  - [Sam Altman gives up control of OpenAI Startup Fund, resolving unusual corporate venture structure | TechCrunch](https://techcrunch.com/2024/04/01/sam-altman-gives-up-control-of-openai-startup-fund-resolving-unusual-corporate-venture-structure/)

### ch9-21: TRUE
- Speaker: Karen Hao
- Claim: The OpenAI Startup Fund was structured as Altman's personal fund rather than as OpenAI's fund.
- TLDR: The OpenAI Startup Fund was legally owned by Sam Altman personally, not by OpenAI. Former board member Helen Toner confirmed Altman withheld this information from the board.
- Explanation: Multiple credible sources, including Axios and TechCrunch citing SEC filings, confirm that the OpenAI Startup Fund was structured with Altman, rather than OpenAI, as its legal owner. OpenAI explained it as a temporary arrangement for speed, but the NY Times reported that board members grew concerned Altman used it to skirt nonprofit governance. Altman only relinquished control in April 2024, transferring it to Ian Hathaway.
- Sources:
  - [Sam Altman owns OpenAI's venture capital fund](https://www.axios.com/2024/02/15/sam-altman-openai-startup-fund)
  - [Sam Altman gives up control of OpenAI Startup Fund, resolving unusual corporate venture structure | TechCrunch](https://techcrunch.com/2024/04/01/sam-altman-gives-up-control-of-openai-startup-fund-resolving-unusual-corporate-venture-structure/)
  - [Sam Altman no longer owns OpenAI Startup Fund](https://www.axios.com/2024/04/01/sam-altman-openai-startup-fund)

### ch9-22: TRUE
- Speaker: Karen Hao
- Claim: The independent board members found a pattern of inconsistencies between how Altman portrayed what was being done at OpenAI and what was actually being done.
- TLDR: Former board member Helen Toner publicly confirmed this pattern, and OpenAI's official firing statement cited Altman's lack of candor with the board.
- Explanation: Helen Toner stated that Altman had withheld information and misrepresented things happening at the company across multiple instances, including the Startup Fund ownership, advance notice of ChatGPT's launch, and safety review claims. The board's official statement upon firing Altman said he 'was not consistently candid in his communications with the board,' directly corroborating the claim of a pattern of inconsistencies found by independent board members.
- Sources:
  - [Former OpenAI board member explains why CEO Sam Altman got fired before he was rehired](https://www.cnbc.com/2024/05/29/former-openai-board-member-explains-why-ceo-sam-altman-was-fired.html)
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)
  - [This Appears to Be Why Sam Altman Actually Got Fired by OpenAI](https://futurism.com/sam-altman-firing-reason-book)

### ch9-23: TRUE
- Speaker: Karen Hao
- Claim: The board decided they needed to fire Altman quickly because they feared his persuasive abilities would make removal impossible if he found out about the plan in advance.
- TLDR: Multiple sources confirm the board acted swiftly and secretly precisely because they feared Altman's persuasiveness would derail the firing if he learned of it beforehand.
- Explanation: Karen Hao's own account in her book and interviews matches the claim exactly, and it is corroborated by contemporaneous reporting on the November 2023 firing. Former board member Helen Toner and other accounts describe Altman's extraordinary ability to win people over as a key reason the board kept the plan secret even from major stakeholders like Microsoft, who only received a call moments before the action was executed.
- Sources:
  - [Diary Of A CEO: w/ AI Critic Karen Hao on Empires of AI (transcript)](https://singjupost.com/diary-of-a-ceo-w-ai-critic-karen-hao-on-empires-of-ai-transcript/)
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)
  - [Former OpenAI board member explains why CEO Sam Altman got fired before he was rehired](https://www.cnbc.com/2024/05/29/former-openai-board-member-explains-why-ceo-sam-altman-was-fired.html)

### ch9-24: TRUE
- Speaker: Karen Hao
- Claim: The board fired Altman without informing or consulting any stakeholders beforehand.
- TLDR: The OpenAI board did not consult stakeholders before firing Altman. Microsoft, the primary investor, was notified only about a minute before the public announcement.
- Explanation: Multiple credible sources confirm that the board acted without informing or consulting key stakeholders in advance.
Microsoft, despite having invested $13 billion in OpenAI and holding no board seat, was only told of the firing moments before the public announcement. Employees, investors, and even Altman himself were given virtually no advance notice, with Altman informed just 5-10 minutes before his removal.
- Sources:
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)
  - [Analysis: How OpenAI so royally screwed up the Sam Altman firing | CNN Business](https://www.cnn.com/2023/11/19/tech/sam-altman-open-ai-firing-board)
  - [OpenAI chaos: A timeline of Sam Altman's firing and return](https://www.axios.com/2023/11/22/openai-microsoft-sam-altman-ceo-chaos-timeline)

### ch9-25: TRUE
- Speaker: Karen Hao
- Claim: Microsoft received a phone call immediately before the board executed Altman's firing.
- TLDR: Microsoft was indeed notified just before the board fired Altman, with one source (Axios via Wikipedia) pinpointing the warning at roughly one minute before the announcement.
- Explanation: Multiple sources confirm Microsoft received only a last-minute heads-up before Altman's firing, consistent with the claim of a call 'right before' the action. The notification method (a call) is not explicitly confirmed in sources, but the core detail of near-simultaneous notice is well supported. Microsoft was not consulted or given meaningful advance warning despite being OpenAI's primary investor.
- Sources:
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)
  - [Analysis: How OpenAI so royally screwed up the Sam Altman firing | CNN Business](https://www.cnn.com/2023/11/19/tech/sam-altman-open-ai-firing-board/index.html)

### ch9-26: FALSE
- Speaker: Karen Hao
- Claim: Microsoft was one of the only investors in OpenAI at the time of Altman's firing.
- TLDR: OpenAI had multiple investors at the time of Altman's firing, not just Microsoft. Thrive Capital, Sequoia Capital, and Tiger Global were all significant stakeholders.
- Explanation: At the time of Sam Altman's November 2023 firing, OpenAI's investor base included Thrive Capital (described as the second-largest shareholder), Sequoia Capital, and Tiger Global Management, all of whom were actively involved in pushing for Altman's reinstatement. Microsoft was the largest single investor, with roughly $13 billion committed, but calling it 'one of the only investors' misrepresents a much broader investor pool.
- Sources:
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)
  - [OpenAI investors push to bring Altman back as CEO one day after he was ousted by board](https://www.cnbc.com/2023/11/18/openai-investors-push-to-bring-altman-back-as-ceo-after-fired-by-board.html)
  - [OpenAI investors are pushing for Sam Altman's return, and it could mean changes to the board that fired him | Fortune](https://fortune.com/2023/11/19/openai-investors-want-sam-altman-back-move-could-spell-board-changes/)

### ch9-27: TRUE
- Speaker: Karen Hao
- Claim: Altman was reinstalled as CEO of OpenAI days after being fired.
- TLDR: Altman was fired on November 17, 2023, and reinstated on November 22, 2023, five days later.
- Explanation: The board ousted Altman on November 17, 2023. Following a revolt by nearly 800 employees and pressure from investors, he was reinstalled as CEO on November 22, 2023, roughly five days after his dismissal. The claim that it happened 'days later' is accurate.
- Sources:
  - [4 days from fired to re-hired: A timeline of Sam Altman's ouster from OpenAI - ABC News](https://abcnews.go.com/Business/sam-altman-reaches-deal-return-ceo-openai/story?id=105091534)
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)
  - [Sam Altman reinstated as OpenAI CEO with new board members](https://www.washingtonpost.com/technology/2023/11/22/sam-altman-back-openai/)

### ch9-28: TRUE
- Speaker: Steven Bartlett
- Claim: Karen Hao's book contains a quote attributed to Ilya Sutskever stating that Sam Altman is not the right person to have control over AGI.
- TLDR: Karen Hao's book does include this Sutskever quote. The exact wording is: "I don't think Sam is the guy who should have the finger on the button for AGI."
- Explanation: Multiple sources confirm that Karen Hao's book 'Empire of AI' attributes to Ilya Sutskever the statement that Sam Altman should not have his finger on the button for AGI, made to board member Helen Toner before Altman's firing. Bartlett's paraphrase of the quote is substantively accurate. The specific page number (357) could not be independently verified from available online sources, but it is a minor secondary detail.
- Sources:
  - [Empire of AI by Karen Hao Book Summary](https://www.summrize.com/books/empire-of-ai-summary)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Dismantling the Empire of AI with Karen Hao](https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with)

### ch9-29: INEXACT
- Speaker: Karen Hao
- Claim: Mira Murati also stated that she did not believe Altman was the right leader, and both she and Ilya Sutskever ultimately left OpenAI.
- TLDR: Both did leave OpenAI, and Murati reportedly raised concerns about Altman, but she denied it publicly and quickly switched back to supporting him.
- Explanation: Ilya Sutskever left OpenAI in May 2024 and Mira Murati in September 2024, confirming both departures. Murati reportedly wrote a private memo questioning Altman's management and shared concerns with the board, which the NYT says contributed to his ouster. However, Murati publicly denied making those complaints, swiftly switched to supporting Altman's reinstatement, and her September 2024 departure came over a year later amid a for-profit restructuring. Framing her as straightforwardly saying Altman was 'not the right guy' oversimplifies a more ambiguous and publicly disputed record.
- Sources:
  - [Mira Murati's exit sets the stage for OpenAI's reinvention ...](https://fortune.com/2024/09/26/mira-murati-exit-openai-altman-for-profit-investors-coup/)
  - [OpenAI CTO Mira Murati, 2 research executives announce exit, joining wave of high-profile departures - World News](https://www.wionews.com/world/openai-cto-mira-murati-2-research-executives-announce-exit-joining-wave-of-high-profile-departures-762067)
  - [Ilya Sutskever, OpenAI co-founder and longtime chief scientist, departs | TechCrunch](https://techcrunch.com/2024/05/14/ilya-sutskever-openai-co-founder-and-longtime-chief-scientist-departs/)
  - [Mira Murati - Wikipedia](https://en.wikipedia.org/wiki/Mira_Murati)

### ch9-30: TRUE
- Speaker: Karen Hao
- Claim: After Altman's reinstatement as CEO, Ilya Sutskever did not return to OpenAI.
- TLDR: Sutskever never returned to active work at OpenAI after Altman's reinstatement in November 2023, and officially departed in May 2024.
- Explanation: After Altman was reinstated, Sutskever was left in limbo and never resumed his role.
He formally announced his departure in May 2024, saying 'After almost a decade, I have made the decision to leave OpenAI.' He subsequently co-founded Safe Superintelligence (SSI).
- Sources:
  - [OpenAI's Co-Founder and Chief Scientist Ilya Sutskever Departs](https://time.com/6978195/ilya-sutskever-leaves-open-ai/)
  - [Ilya Sutskever, OpenAI co-founder and longtime chief scientist, departs | TechCrunch](https://techcrunch.com/2024/05/14/ilya-sutskever-openai-co-founder-and-longtime-chief-scientist-departs/)
  - [Removal of Sam Altman from OpenAI - Wikipedia](https://en.wikipedia.org/wiki/Removal_of_Sam_Altman_from_OpenAI)

### ch9-31: INEXACT
- Speaker: Karen Hao
- Claim: Mira Murati left OpenAI shortly after Altman's reinstatement.
- TLDR: Murati did leave after Altman's reinstatement, but it was roughly 10 months later (September 2024), not shortly after (November 2023).
- Explanation: Sam Altman was reinstated as OpenAI CEO in November 2023. Mira Murati announced her resignation on September 25, 2024, approximately 10 months later. While she did leave following Altman's return, describing it as 'shortly after' significantly compresses the timeline.
- Sources:
  - [Mira Murati - Wikipedia](https://en.wikipedia.org/wiki/Mira_Murati)
  - [Mira Murati's exit sets the stage for OpenAI's reinvention](https://fortune.com/2024/09/26/mira-murati-exit-openai-altman-for-profit-investors-coup/)
  - [Mira Murati, OpenAI's technology chief, becomes the latest exec to leave the company | CNN Business](https://www.cnn.com/2024/09/25/tech/openai-technology-chief-mira-murati-leaving/index.html)

### ch11-1: INEXACT
- Speaker: Steven Bartlett
- Claim: Elon Musk described AI development as 'summoning the demon' approximately 10 years before this interview.
- TLDR: Musk did use the 'summoning the demon' phrase about AI, but he said it in October 2014, roughly 11.5 years before the interview, not approximately 10 years.
- Explanation: Elon Musk made the 'summoning the demon' remark at MIT's Aeronautics and Astronautics Centennial Symposium in October 2014. The interview was published in March 2026, placing the quote about 11.5 years in the past. The quote and its meaning are accurately described, but the '10 years ago' timeframe is a modest underestimate.
- Sources:
  - [Elon Musk: 'With artificial intelligence we are summoning the demon.' - The Washington Post](https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/)
  - [Elon Musk says artificial intelligence is like "summoning the demon" - CBS News](https://www.cbsnews.com/news/elon-musk-artificial-intelligence-is-like-summoning-the-demon/)

### ch11-2: TRUE
- Speaker: Steven Bartlett
- Claim: Dario Amodei stated there is somewhere between a 10% and 25% chance of things going catastrophically wrong on the scale of human civilization.
- TLDR: Dario Amodei did make this statement. He estimated a 10-25% chance of something going 'catastrophically wrong on the scale of human civilization' in a 2023 interview.
- Explanation: In an October 2023 interview on The Logan Bartlett Show, Amodei stated: 'the chance something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10 to 25 percent.' He reiterated a 25% figure at the Axios AI+DC Summit in September 2025. Steven Bartlett's characterization is accurate.
- Sources:
  - [Amodei on AI: "There's a 25% chance that things go really, really badly"](https://www.axios.com/2025/09/17/anthropic-dario-amodei-p-doom-25-percent)
  - [Anthropic's CEO gives 'a 25% chance things go really, really badly' with AI | TechRadar](https://www.techradar.com/ai-platforms-assistants/claude/anthropics-ceo-gives-a-25-percent-chance-things-go-really-really-badly-with-ai)

### ch11-3: TRUE
- Speaker: Karen Hao
- Claim: The AI industry uses a mythology in which 'summoning the demon' framing is integral to convincing everyone that only they should be developing this technology.
- TLDR: This is Karen Hao's documented analytical thesis, well-supported by her own book 'Empire of AI' and corroborated by multiple critics.
- Explanation: Hao's book explicitly argues that AI companies deploy existential risk rhetoric (including doomer framings like 'summoning the demon') to justify their exclusive role in AI development and consolidate power. The 'summoning the demon' phrase originates from Elon Musk's 2014 MIT remarks and has become a fixture of AI existential risk discourse. Hao documents how Altman, for example, uses this framing before regulators to deflect from immediate harms while positioning OpenAI as the only trustworthy steward of the technology.
- Sources:
  - [Elon Musk: 'With artificial intelligence we are summoning the demon.' - The Washington Post](https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Book review of 'Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI' by Karen Hao](https://idratherbewriting.com/blog/book-review-empire-of-ai-karen-hao)
  - [Decolonizing the Future: Karen Hao on Resisting the Empire of AI | TechPolicy.Press](https://www.techpolicy.press/decolonizing-the-future-karen-hao-on-resisting-the-empire-of-ai/)

### ch11-4: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: When AI executives make existential risk statements, those statements should be understood as speech acts meant to persuade others to cede more power and resources to them, not as genuine predictions about the future.
- TLDR: This is Karen Hao's own analytical interpretation of AI executives' motivations, not a falsifiable factual claim. It cannot be confirmed or denied since it concerns internal intent.
- Explanation: Hao's thesis that existential risk rhetoric functions as a power-consolidation strategy is well-documented in her book and interviews, and is echoed by other credible critics. However, Hao herself acknowledges uncertainty on this point, noting it is 'hard to determine' whether figures like Altman are true believers or strategic actors. Because the claim is about the internal motivations and sincere beliefs of individuals, it is inherently unverifiable through external evidence.
- Sources:
  - [The boomer-doomer divide within OpenAI, explained by Karen Hao - Big Think](https://bigthink.com/the-future/karen-hao-boomer-doomer-divide-openai/)
  - [Decolonizing the Future: Karen Hao on Resisting the Empire of AI | TechPolicy.Press](https://www.techpolicy.press/decolonizing-the-future-karen-hao-on-resisting-the-empire-of-ai/)
  - [Sam Altman's Dangerous and Unquenchable Craving for Power | Center for AI Policy](https://www.centeraipolicy.org/work/sam-altmans-dangerous-and-unquenchable-craving-for-power)
  - [Sam Altman's self-serving vision of the future](https://disconnect.blog/sam-altmans-self-serving-vision-of-the-future/)

### ch11-5: UNSUBSTANTIATED
- Speaker: Karen Hao
- Claim: AI executives purposely cultivate a public feeling that they are summoning the demon, and this is a crucial part of their power.
- TLDR: This is Karen Hao's central analytical thesis about AI executives' intent, not a verifiable fact. No direct evidence confirms that executives deliberately cultivate existential risk narratives as a power strategy.
- Explanation: Hao's book 'Empire of AI' documents this as her core argument: that both utopian and existential risk narratives serve power consolidation, and that figures like Sam Altman strategically deploy them for different audiences. While critics and journalists note the real contradiction of executives warning about existential risk while aggressively building AI, the claim of deliberate, conscious intent cannot be objectively verified. AI executives have not admitted to this, and whether the behavior is strategic calculation or genuine belief remains, as Hao herself acknowledges, an open question.
- Sources:
  - [Power to Truth: AI Narratives, Public Trust, and the New Tech Empire | Stanford GSB Corporations and Society Initiative](https://casi.stanford.edu/news/power-truth-ai-narratives-public-trust-and-new-tech-empire)
  - [Dismantling the Empire of AI with Karen Hao](https://www.bloodinthemachine.com/p/dismantling-the-empire-of-ai-with)
  - [Decolonizing the Future: Karen Hao on Resisting the Empire of AI | TechPolicy.Press](https://www.techpolicy.press/decolonizing-the-future-karen-hao-on-resisting-the-empire-of-ai/)
  - [Elon Musk: 'With artificial intelligence we are summoning the demon.' - The Washington Post](https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/)

### ch11-6: TRUE
- Speaker: Karen Hao
- Claim: Karen Hao has internal documents, referenced in her book, showing that AI executives are keenly aware of how to bring the public along through dazzling technology demonstrations and by crafting missions that earn their companies more leniency.
- TLDR: Hao's book 'Empire of AI' is documented to rely on internal documents and reveals how AI executives consciously used mission-crafting and technology demonstrations to shape public perception and gain leniency.
- Explanation: Multiple book reviews and sources confirm 'Empire of AI' is based on internal documents alongside 300+ interviews. Reviewers specifically note Hao documents how OpenAI founders acknowledged they could 'walk back their commitments to openness once the narrative had served its purpose,' and how Altman used ideology cards related to AGI to deflect from immediate harms and consolidate support. This directly corroborates her claim about executives being keenly aware of their myth-making.
- Sources:
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [Book review of 'Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI' | I'd Rather Be Writing Blog](https://idratherbewriting.com/blog/book-review-empire-of-ai-karen-hao)
  - [Inside the story that enraged OpenAI | MIT Technology Review](https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/)
  - [Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI | PenguinRandomHouse.com](https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/)

### ch11-7: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: Dario Amodei's statements about a 10 to 25% chance of catastrophic outcomes represent both active myth-making and genuine personal belief that has become blurred over time.
- TLDR: Amodei's 10-25% catastrophic risk statements are verified, but whether they reflect 'myth-making' blended with genuine belief is Karen Hao's subjective interpretation, not a checkable fact.
- Explanation: Dario Amodei has publicly stated a 25% chance of things going 'really, really badly' (Axios AI+ DC Summit, September 2025) and a 10-25% chance of catastrophic harm at the AI Impact Summit in New Delhi (February 2026). The factual underpinning of the claim is confirmed. However, Hao's core assertion that these statements represent a psychological blurring of performative myth-making and genuine belief is an analytical characterization of Amodei's internal state, which no evidence can confirm or refute.
- Sources:
  - [Anthropic's CEO gives 'a 25% chance things go really, really badly' with AI | TechRadar](https://www.techradar.com/ai-platforms-assistants/claude/anthropics-ceo-gives-a-25-percent-chance-things-go-really-really-badly-with-ai)
  - [Amodei on AI: "There's a 25% chance that things go really, really badly"](https://www.axios.com/2025/09/17/anthropic-dario-amodei-p-doom-25-percent)
  - [Anthropic CEO Warns of Existential AI Risks and Imminent Superhuman Capabilities - OECD.AI](https://oecd.ai/en/incidents/2026-02-19-8840)

### ch11-8: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: Dario Amodei genuinely believes the 10-25% catastrophic probability claim because he has lost the ability to distinguish between saying something for strategic reasons and what he actually believes.
- TLDR: Amodei's 10-25% catastrophic risk claim is confirmed, but whether he has 'lost the ability to distinguish' strategic statements from genuine belief is a psychological assertion about his inner mental state that cannot be verified.
- Explanation: Dario Amodei has publicly stated a 10-25% (and more recently 25%) probability of catastrophic AI outcomes, which is well documented. However, Karen Hao's core claim is a subjective psychological interpretation: that Amodei can no longer tell apart what he says for strategic reasons from what he truly believes. No evidence can confirm or deny this internal cognitive state. It is Hao's analytical opinion, not a verifiable fact.
- Sources:
  - [Amodei on AI: "There's a 25% chance that things go really, really badly"](https://www.axios.com/2025/09/17/anthropic-dario-amodei-p-doom-25-percent)
  - [Anthropic's CEO gives 'a 25% chance things go really, really badly' with AI | TechRadar](https://www.techradar.com/ai-platforms-assistants/claude/anthropics-ceo-gives-a-25-percent-chance-things-go-really-really-badly-with-ai)
  - ["We Are Being Gaslit By The AI Companies!" - Karen Hao on DOAC Podcast (Transcript)](https://singjupost.com/diary-of-a-ceo-w-ai-critic-karen-hao-on-empires-of-ai-transcript/)

### ch11-9: TRUE
- Speaker: Karen Hao
- Claim: Dario Amodei publicly states catastrophic risk probabilities even while fundraising, which distinguishes him from other AI executives such as Sam Altman.
- TLDR: Dario Amodei has publicly and repeatedly stated a roughly 25% chance of catastrophic AI outcomes, even as Anthropic raises billions in funding, distinguishing him from Sam Altman, who is notably less forthcoming on such figures.
- Explanation: Amodei stated at the Axios AI+ DC Summit that there is 'a 25% chance that things go really, really badly' with AI, a figure he has cited across multiple venues including Senate testimony and a major January 2026 essay. By contrast, Steven Bartlett himself notes in the same exchange that 'Sam's not doing that as much anymore,' which aligns with the claim. Multiple credible sources confirm this distinction.
- Sources:
  - [Anthropic's CEO gives 'a 25% chance things go really, really badly' with AI | TechRadar](https://www.techradar.com/ai-platforms-assistants/claude/anthropics-ceo-gives-a-25-percent-chance-things-go-really-really-badly-with-ai)
  - [Anthropic CEO Raises Alarm on 25% Risk of Catastrophic AI Developments | Censinet, Inc.](https://censinet.com/perspectives/anthropic-ceo-raises-alarm-on-25-risk-of-catastrophic-ai-developments)
  - [Anthropic CEO's grave warning: AI will "test us as a species"](https://www.axios.com/2026/01/26/anthropic-ai-dario-amodei-humanity)
  - [Dario Amodei — The Adolescence of Technology](https://www.darioamodei.com/essay/the-adolescence-of-technology)

### ch11-10: TRUE
- Speaker: Steven Bartlett
- Claim: All major AI companies are currently engaged in continuous fundraising.
- TLDR: Major independent AI companies including OpenAI, Anthropic, and xAI are actively and continuously raising billions in new funding as of early 2026.
- Explanation: OpenAI was pursuing a round valuing it at up to $830 billion (with SoftBank's $30B stake increase), Anthropic was seeking $20 billion at a $350 billion valuation, and xAI raised at least $10 billion in 2025. The claim is slightly broad since integrated subsidiaries like Google DeepMind do not independently fundraise, but the core assertion that major AI companies are in constant fundraising mode is well-supported.
- Sources:
  - [OpenAI, Anthropic reportedly raising billions of dollars in new funding](https://siliconangle.com/2026/01/28/openai-anthropic-reportedly-raising-billions-dollars-new-funding/)
  - [$84B story: The 10 AI mega-rounds that defined 2025 — TFN](https://techfundingnews.com/openai-anthropic-xai-ai-funding-trends-2025/)

### ch11-12: INEXACT
- Speaker: Karen Hao
- Claim: AI companies spend hundreds of millions of dollars in midterm elections to kill every possible piece of legislation that challenges them and to craft legislation that amplifies their advantage.
- TLDR: AI companies are spending very large sums in the 2026 midterms, though the total is closer to $250M+ combined rather than clearly 'hundreds of millions' from a single source. Their goal of shaping or blocking AI legislation is well-documented.
- Explanation: OpenAI and Anthropic together have contributed over $185M to the 2026 midterm cycle, Meta has earmarked roughly $65M via two super PACs, and seven major tech firms spent a combined $50M on federal lobbying in just the first nine months of 2025 (the arithmetic is reproduced in the sketch below). The stated goals, backing federal preemption of state AI laws while opposing stricter oversight, align with the claim about killing unfavorable legislation and crafting advantageous rules. The 'hundreds of millions' figure is approximately correct in aggregate, but the phrasing implies a single coordinated pot rather than multiple competing industry actors.
- Sources:
  - [AI money is already influencing the midterms. And more is coming. - The Washington Post](https://www.washingtonpost.com/politics/2026/03/12/ai-funding-midterm-elections/)
  - [The AI industry's $100 million play to influence the 2026 elections](https://popular.info/p/the-ai-industrys-100-million-play)
  - [As Big Tech Gears Up for the 2026 Midterms, Its Lobbying Operations Continue Unabated - Issue One](https://issueone.org/articles/big-tech-lobbying-2025-q3/)
  - [How AI swallowed tech lobbying in 2025](https://www.axios.com/2026/01/23/ai-tech-lobbying-2025)
  - [AI Industry Pours Millions Into Super PACs for 2026 Midterm Elections - San Francisco Today](https://nationaltoday.com/us/ca/san-francisco/news/2026/03/03/ai-industry-pours-millions-into-super-pacs-for-2026-midterm-elections/)
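As a rough check on the 'hundreds of millions' framing in ch11-12, the figures cited in the explanation can be summed directly. A minimal sketch in Python; the breakdown and category labels below are assembled from the cited sources, not taken from any single report:

```python
# Rough aggregation of the 2026-cycle spending figures cited in ch11-12.
# Election contributions and federal lobbying are different categories,
# so they are totaled separately.
election_spending = {
    "OpenAI + Anthropic (2026 midterm cycle)": 185_000_000,
    "Meta (earmarked via two super PACs)": 65_000_000,
}
lobbying_spending = {
    "Seven major tech firms (federal lobbying, Jan-Sep 2025)": 50_000_000,
}

election_total = sum(election_spending.values())
print(f"Election spending: ~${election_total / 1e6:.0f}M")  # ~$250M
combined = election_total + sum(lobbying_spending.values())
print(f"Including lobbying: ~${combined / 1e6:.0f}M")        # ~$300M
```

The sum lands at roughly $250M in direct election spending, consistent with the TLDR's '$250M+ combined,' and only crosses $300M once lobbying, a separate category, is added.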
### ch11-13: INEXACT
- Speaker: Karen Hao
- Claim: Throughout history, societies have transitioned from empires to democracy because empire as a governance structure is inherently unsound and does not maximize the chances of most people in the world being able to live dignified lives.
- TLDR: There is a real long-term democratization trend, but empires have not reliably transitioned to democracy throughout history, and the causal explanation is oversimplified.
- Explanation: Huntington's influential scholarship identifies successive waves of democratization over the past two centuries, and scholars broadly agree that concentrated authoritarian power tends to underperform democracy on citizen welfare metrics. However, the historical record directly contradicts the sweeping framing: Rome famously transitioned from republic to empire, post-Ottoman and post-Soviet states frequently became new autocracies rather than democracies, and historians identify multiple causes for both imperial collapse and democratization (economic development, class conflict, external shocks) rather than a single reason. 'Throughout history' is a significant overstatement of what is actually a modern, partial, and sometimes reversible trend.
- Sources:
  - [Waves of democracy - Wikipedia](https://en.wikipedia.org/wiki/Waves_of_democracy)
  - [The Decline and Rise of Democracy | Princeton University Press](https://press.princeton.edu/books/hardcover/9780691177465/the-decline-and-rise-of-democracy)
  - [The Downside of Imperial Collapse | Foreign Affairs](https://www.foreignaffairs.com/world/downside-imperial-collapse)
  - [How Empires Became Nations: Understanding The Transition From Imperial Rule To National Governments](https://historyrise.com/how-empires-became-nations-the-shift-from-imperial-to-national-government/)
  - [Why Do Empires Collapse?](https://www.oerproject.com/World-History-Origins/Unit-5/Why-do-Empires-Collapse)

### ch13-1: DISPUTED
- Speaker: Karen Hao
- Claim: We are already seeing huge impacts on employment from AI.
- TLDR: Some AI-driven employment impacts are visible in specific sectors, but major institutional sources describe them as limited or nascent overall, not yet 'huge.'
- Explanation: Data shows targeted effects: 77,999 U.S. tech job losses attributed to AI in the first half of 2025, a slight drop in young workers in AI-exposed occupations, and 67% of HR executives reporting AI is changing jobs at their firms. However, the Federal Reserve Bank of Atlanta found firms reported 'negligible' impact on headcounts in 2025, and J.P. Morgan notes that less than 10% of firms across the broader economy use AI regularly, making large-scale disruption hard to observe in official statistics. Whether current impacts qualify as 'huge' is genuinely contested among credible sources.
- Sources:
  - [How Might AI Change the Workplace? Evidence from Corporate Executives - Federal Reserve Bank of Atlanta](https://www.atlantafed.org/research-and-data/publications/policy-hub-macroblog/2026/03/25/how-might-ai-change-the-workplace-evidence-from-corporate-executives)
  - [Young workers' employment drops in occupations with high AI exposure - Dallasfed.org](https://www.dallasfed.org/research/economics/2026/0106)
  - [AI's Impact on Job Growth | J.P. Morgan Global Research](https://www.jpmorgan.com/insights/global-research/artificial-intelligence/ai-impact-job-growth)
  - [77 AI Job Replacement Statistics 2026 (New Data)](https://www.demandsage.com/ai-job-replacement-stats/)
  - [AI will impact jobs in 2026, say 89% of HR leaders: CNBC survey](https://www.cnbc.com/2025/11/14/ai-to-impact-89percent-of-jobs-next-year-cnbc-survey-finds.html)

### ch13-2: TRUE
- Speaker: Karen Hao
- Claim: AI models are improving in specific capabilities based on what the companies developing them choose to prioritize.
- TLDR: AI model capabilities are shaped by deliberate training choices companies make, including fine-tuning, RLHF, and post-training techniques targeting specific skills.
- Explanation: This is a well-established fact about AI development. Companies make deliberate choices at every stage of training, from data curation and human feedback collection (RLHF) to post-training optimization, all of which determine which specific capabilities improve. Multiple authoritative sources on AI model training confirm that capability improvements reflect intentional design and prioritization decisions by the developers.
- Sources:
  - [How AI Models Are Trained - NN/G](https://www.nngroup.com/articles/ai-model-training/)
  - [Scaling up: how increasing inputs has made artificial intelligence more capable - Our World in Data](https://ourworldindata.org/scaling-up-ai)
  - [Giant AI models and the shift to specialized AI | CIO](https://www.cio.com/article/4077192/giant-ai-models-and-the-shift-to-specialized-ai.html)

### ch13-3: TRUE
- Speaker: Karen Hao
- Claim: Executives at other companies are deciding to lay off workers because they think AI can replace them, irrespective of whether that is actually true.
- TLDR: The Klarna CEO story is well-documented: he cut ~700 customer service roles citing AI, saw quality drop, and began rehiring human agents.
- Explanation: Klarna CEO Sebastian Siemiatkowski froze hiring and reduced headcount by roughly 40% on the premise that AI could replace human workers. Customer satisfaction declined due to poor handling of complex queries, and by mid-2025 Klarna began rehiring. The case is widely cited as evidence that executive AI-replacement decisions can outpace the technology's actual capabilities.
- Sources:
  - [Klarna Claimed AI Was Doing the Work of 700 People. Now It's Rehiring](https://www.reworked.co/employee-experience/klarna-claimed-ai-was-doing-the-work-of-700-people-now-its-rehiring/)
  - [Klarna CEO admits AI job cuts went too far](https://mlq.ai/news/klarna-ceo-admits-aggressive-ai-job-cuts-went-too-far-starts-hiring-again-after-us-ipo/)
  - [After Firing 700 Humans For AI, Klarna Now Wants Them Back—'Tons Of Klarna Users Would Enjoy Working For Us,' Says CEO](https://finance.yahoo.com/news/firing-700-humans-ai-klarna-173029838.html)

### ch13-4: TRUE
- Speaker: Karen Hao
- Claim: The Klarna CEO laid off workers expecting AI to replace everyone, it did not actually work, and he had to ask some people to come back.
- TLDR: Klarna's CEO did reduce headcount significantly expecting AI to replace workers, acknowledged it went too far, and began rehiring human customer service agents.
- Explanation: Klarna cut its workforce from roughly 5,500 to 3,400 employees by late 2023 under an AI-first strategy. Customer satisfaction subsequently dropped due to AI handling complex interactions poorly, and CEO Sebastian Siemiatkowski publicly admitted the cuts went too far. The company then launched a rehiring initiative for human customer service agents, confirming the core claim.
- Sources:
  - [Klarna Claimed AI Was Doing the Work of 700 People. Now It's Rehiring](https://www.reworked.co/employee-experience/klarna-claimed-ai-was-doing-the-work-of-700-people-now-its-rehiring/)
  - [Klarna CEO admits AI job cuts went too far](https://mlq.ai/news/klarna-ceo-admits-aggressive-ai-job-cuts-went-too-far-starts-hiring-again-after-us-ipo/)
  - [After Firing 700 Humans For AI, Klarna Now Wants Them Back](https://finance.yahoo.com/news/firing-700-humans-ai-klarna-173029838.html)

### ch13-5: TRUE
- Speaker: Steven Bartlett
- Claim: Klarna is shrinking with almost 100 employees per month due to AI.
- TLDR: The figure is consistent with Klarna's publicly reported data. The CEO has stated a ~20% annual natural attrition rate, which at ~5,500 employees works out to roughly 92 employees per month.
- Explanation: Multiple sources confirm Klarna reduced its headcount from over 7,000 at peak to roughly 3,000 by mid-2025, primarily through a hiring freeze and natural attrition (~20% per year) enabled by AI. At ~5,500 employees, that attrition rate equals ~92 people per month, consistent with the CEO's 'almost 100 per month' characterization (the arithmetic is reproduced in the sketch below). The statement in the podcast is presented as a direct message from the CEO, and the math aligns with his publicly stated figures.
- Sources:
  - [Klarna CEO says AI helped company shrink workforce by 40%](https://www.cnbc.com/2025/05/14/klarna-ceo-says-ai-helped-company-shrink-workforce-by-40percent.html)
  - [AI enabled Klarna to halve its workforce—now, the CEO is warning workers that other 'tech bros' are sugarcoating just how badly it's about to impact jobs](https://fortune.com/2025/10/10/klarna-ceo-sebastian-siemiatkowski-halved-workforce-says-tech-ceos-sugarcoating-ai-impact-on-jobs-mass-unemployment-warning/)
  - [Klarna tried to replace its workforce with AI](https://www.fastcompany.com/91468582/klarna-tried-to-replace-its-workforce-with-ai)
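The 'almost 100 per month' figure in ch13-5 follows directly from the CEO's stated numbers. A minimal sketch using the approximate inputs from the cited sources:

```python
# Sanity check of the attrition arithmetic in ch13-5: a ~20% annual
# natural attrition rate applied to a ~5,500-person headcount.
annual_attrition_rate = 0.20  # CEO's publicly stated attrition rate
headcount = 5_500             # approximate headcount at the time

monthly_departures = headcount * annual_attrition_rate / 12
print(f"~{monthly_departures:.0f} departures per month")  # ~92
```

At these inputs, a hiring freeze plus natural attrition yields about 92 departures a month, matching the 'almost 100' characterization without any layoffs being required.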
### ch13-6: TRUE
- Speaker: Steven Bartlett
- Claim: Klarna's workforce was 7,400 at its peak, 5,500 one year prior to the DM, and 3,300 at the time the DM was sent, with a target of 3,000 by the end of summer.
- TLDR: All four figures cited in the Klarna CEO's DM are confirmed by multiple sources, including Klarna's own IPO filing.
- Explanation: Klarna's IPO prospectus filed in March 2025 records 5,527 employees at end-2022 and 3,422 at end-2024, consistent with the CEO's rounded figures of 5,500 and 3,300. One source directly states Klarna 'cut its workforce from 5,500 to 3,400.' The peak of 7,400 and the target of 3,000 by end of summer are both widely reported across CNBC, Fortune, Fast Company, and others.
- Sources:
  - [Klarna CEO says AI helped company shrink workforce by 40%](https://www.cnbc.com/2025/05/14/klarna-ceo-says-ai-helped-company-shrink-workforce-by-40percent.html)
  - [Klarna whittled workforce via AI ahead of IPO | Payments Dive](https://www.paymentsdive.com/news/klarna-buy-now-pay-later-bnpl-payments-workforce-ipo/742627/)
  - [Klarna tried to replace its workforce with AI - Fast Company](https://www.fastcompany.com/91468582/klarna-tried-to-replace-its-workforce-with-ai)
  - [AI enabled Klarna to halve its workforce—now, the CEO is warning workers that other 'tech bros' are sugarcoating just how badly it's about to impact jobs | Fortune](https://fortune.com/2025/10/10/klarna-ceo-sebastian-siemiatkowski-halved-workforce-says-tech-ceos-sugarcoating-ai-impact-on-jobs-mass-unemployment-warning/)

### ch13-7: INEXACT
- Speaker: Steven Bartlett
- Claim: AI handles 70% of Klarna's customer service conversations.
- TLDR: Klarna's AI handles roughly two-thirds (~67%) of customer service chats, not 70%, per the company's own press release.
- Explanation: Klarna's official announcement stated its AI assistant handled 'two-thirds' (approximately 67%) of customer service conversations after its first month live. Some sources cite figures as high as 75% depending on the period or metric used. The 70% figure is directionally consistent but does not match the most authoritative, widely cited statistic from Klarna itself.
- Sources:
  - [Klarna AI assistant handles two-thirds of customer service chats in its first month | Klarna International](https://www.klarna.com/international/press/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month/)
  - [Klarna's AI assistant does the work of 700 full-time agents | OpenAI](https://openai.com/index/klarna/)

### ch13-8: TRUE
- Speaker: Steven Bartlett
- Claim: With AI, the production cost of software comes down to almost zero.
- TLDR: Klarna CEO Sebastian Siemiatkowski has publicly and repeatedly stated that the cost of creating software is going to zero thanks to AI.
- Explanation: On the 20VC podcast and in multiple other public forums, Siemiatkowski explicitly said: 'The cost of creating software is going down to zero.' His broader analogy about handcrafted vs. machine-produced work also matches the framing Steven Bartlett reads aloud, confirming the DM content accurately reflects the CEO's stated views.
- Sources:
  - [As Cost Of Writing Code Heads To Zero, Switching Costs Of Data Will Create Moats For Software Companies: Klarna CEO](https://officechai.com/ai/as-cost-of-writing-code-heads-to-zero-switching-costs-of-data-will-create-moats-for-software-companies-klarna-ceo/)
  - [Sebastian Siemiatkowski: AI is reducing customer service costs, software creation is nearing zero, and Klarna is evolving into a high-engagement banking provider | 20VC](https://cryptobriefing.com/sebastian-siemiatkowski-ai-is-reducing-customer-service-costs-software-creation-is-nearing-zero-and-klarna-is-evolving-into-a-high-engagement-banking-provider-20vc/)

### ch13-9: TRUE
- Speaker: Steven Bartlett
- Claim: Klarna is a bank.
- TLDR: Klarna holds a full banking licence from Sweden's Finansinspektionen and operates as Klarna Bank AB (publ).
- Explanation: Klarna received a full banking licence in 2017 and operates as a licensed bank under Swedish financial regulation. It subsequently obtained an EMI licence from the UK's FCA in July 2025. The claim is accurate.
- Sources:
  - [Klarna gets a full banking license, gears up to go beyond financing payments](https://techcrunch.com/2017/06/19/klarna-gets-a-full-banking-license-gears-up-to-go-beyond-financing-payments/)
  - [Sweden's Klarna receives licence from UK's Financial Conduct Authority | Euronews](https://www.euronews.com/business/2025/07/30/swedens-klarna-receives-licence-from-uks-financial-conduct-authority)

### ch13-10: TRUE
- Speaker: Karen Hao
- Claim: The US jobs report released earlier this year showed a decline and slowdown in hiring across especially white-collar professional industries.
- TLDR: US jobs data from early 2026 does show a hiring decline in white-collar and professional sectors, consistent with Hao's claim.
- Explanation: BLS data and multiple credible analyses confirm white-collar sector employment peaked in November 2022 and is down roughly 1.9% since then, with the sector losing an average of 191,000 jobs per year over the last three years versus gaining 569,000 per year from 2010 to 2019. The BLS February 2026 Employment Situation report was publicly released, and professional and business services showed negative job growth. The core claim accurately reflects these documented trends.
- Sources:
  - [Even before AI, the white-collar jobs market was growing gloomy](https://www.axios.com/2026/02/12/ai-jobs-market-unemployment-rate)
  - [Labor market could face a 'white-collar recession,' report ...](https://www.hrdive.com/news/labor-market-white-collar-recession/748454/)
  - [Report: U.S. job growth in 2025 slowest in decades](https://www.localnewslive.com/2026/01/09/report-us-job-growth-2025-slowest-decades/)
  - [The Employment Situation - February 2026](https://www.bls.gov/news.release/pdf/empsit.pdf)

### ch13-11: FALSE
- Speaker: Steven Bartlett
- Claim: An Anthropic report found a 40% reduction in entry-level jobs in particular.
- TLDR: The Anthropic report does not contain a 40% figure. It found a 14% drop in job-finding rates for young workers in high-exposure fields.
- Explanation: The Anthropic labor market report by Massenkoff and McCrory cites a 14% decline in job-finding rates for workers aged 22-25 in highly exposed occupations, and references a Brynjolfsson et al. study finding a 6-16% employment fall in the same group. A separate Fortune article on an earlier Anthropic Economic Index report cites a 13% relative employment decline for early-career workers. No Anthropic publication supports the 40% figure stated by Bartlett.
- Sources:
  - [Labor Market Impacts of AI: A New Measure and Early Evidence](https://www.anthropic.com/research/labor-market-impacts)
  - [Anthropic data confirms Gen Z's worst fears about AI: Businesses are leaning into automation, a massive threat to entry-level jobs](https://fortune.com/2025/09/16/anthropic-economic-index-report-automation-entry-level-jobs-gen-z/)
  - [Anthropic just mapped out which jobs AI could potentially replace. A 'Great Recession for white-collar workers' is absolutely possible | Fortune](https://fortune.com/2026/03/06/ai-job-losses-report-anthropic-research-great-recession-for-white-collar-workers/)
### ch13-12: TRUE
- Speaker: Steven Bartlett
- Claim: According to Anthropic's report, physical real-world jobs like construction and agriculture are currently untouched by AI disruption.
- TLDR: Anthropic's labor market report confirms construction and agriculture have near-zero observed AI disruption, unlike office, finance, and legal sectors.
- Explanation: Anthropic's March 2026 'Labor Market Impacts of AI' report shows construction and agriculture have theoretical AI coverage below 17% and observed (actual) exposure close to zero (e.g., construction at roughly 2% observed). This contrasts sharply with office/admin (90%), finance (94.3%), and legal (89%) sectors, which the report flags as heavily disrupted. Bartlett's characterization of physical jobs as 'untouched' accurately reflects the report's findings.
- Sources:
  - [Labor market impacts of AI | Anthropic](https://www.anthropic.com/research/labor-market-impacts)
  - [How AI will reshape work: Anthropic identifies the most exposed jobs | Euronews](https://www.euronews.com/business/2026/03/14/how-ai-will-reshape-work-anthropic-identifies-the-most-exposed-jobs)
  - [Anthropic just mapped out which jobs AI could potentially replace. A 'Great Recession for white-collar workers' is absolutely possible | Fortune](https://fortune.com/2026/03/06/ai-job-losses-report-anthropic-research-great-recession-for-white-collar-workers/)

### ch13-13: TRUE
- Speaker: Karen Hao
- Claim: Anthropic's report identifies office and admin, finance, and math as sectors currently being disrupted by AI.
- TLDR: Anthropic's 2026 labor market report does flag office and admin, finance/business, and computer and math as the sectors with the highest AI exposure.
- Explanation: Anthropic's report on labor market impacts (published March 2026) ranks Computer and Math (94.3% theoretical coverage, 33% observed) and Business and Financial Operations (94.3% theoretical) as the most exposed occupations, with Office and Administrative Support close behind at 90% theoretical coverage. These three sectors are consistently identified as the most disrupted categories in the report, matching the claim.
- Sources:
  - [The labor market impacts of AI: Findings from a new measure and early evidence](https://www.anthropic.com/research/labor-market-impacts)
  - [Anthropic just mapped out which jobs AI could potentially replace. A 'Great Recession for white-collar workers' is absolutely possible | Fortune](https://fortune.com/2026/03/06/ai-job-losses-report-anthropic-research-great-recession-for-white-collar-workers/)
  - [How AI will reshape work: Anthropic identifies the most exposed jobs | Euronews](https://www.euronews.com/business/2026/03/14/how-ai-will-reshape-work-anthropic-identifies-the-most-exposed-jobs)

### ch13-14: INEXACT
- Speaker: Karen Hao
- Claim: Many people who are laid off due to AI end up working in data annotation, the labor companies need to teach their models new skills, according to a New York Magazine article.
- TLDR: The article exists and covers exactly the described topic, but it was published in The Verge (not New York Magazine), though New York Magazine also promoted it as a co-publication.
- Explanation: Josh Dzieza's March 2026 piece 'You Could Be Next' documents precisely the phenomenon Karen Hao describes: laid-off white-collar workers (lawyers, writers, scientists) hired by AI firms like Mercor to do data annotation, training the very models that displaced them. However, the article's primary publication is The Verge. New York Magazine's X account did share and promote it, consistent with a Vox Media co-publication arrangement, so attributing it solely to New York Magazine is a minor imprecision.
- Sources:
  - [You Could Be Next - Longreads](https://longreads.com/2026/03/17/ai-training-data-gig-economy/)
  - [New York Magazine on X](https://x.com/NYMag/status/2032139809867829391)
  - [How Laid-Off Professionals Are Being Hired to Train the AI That Replaced Them](https://www.techloy.com/new-report-shows-ai-is-replacing-workers-then-hiring-them-to-train-the-systems-taking-their-jobs/)

### ch13-15: TRUE
- Speaker: Karen Hao
- Claim: Laid-off workers taking data annotation jobs are training AI models on the very skills they were just laid off from, which can then perpetuate further layoffs as the model develops those capabilities.
- TLDR: This self-reinforcing cycle is well-documented. Platforms like Mercor recruit laid-off professionals to train AI on their former expertise, directly feeding the automation of those same roles.
- Explanation: Multiple credible sources confirm the phenomenon. Futurism and HBR report that companies such as Mercor explicitly hire displaced professionals (marketers, journalists, lawyers, directors) to annotate and train AI models using the very skills they lost jobs over. Workers themselves acknowledge the irony, and researchers note that retraining programs frequently redirect workers into other automation-susceptible occupations, compounding the cycle.
- Sources:
  - [Tech Startup Hiring Desperate Unemployed People to Teach AI to Do Their Old Jobs](https://futurism.com/artificial-intelligence/mercor-unemployed-teach-ai)
  - [Companies Are Laying Off Workers Because of AI's Potential—Not Its Performance](https://hbr.org/2026/01/companies-are-laying-off-workers-because-of-ais-potential-not-its-performance)
  - [AI labor displacement and the limits of worker retraining | Brookings](https://www.brookings.edu/articles/ai-labor-displacement-and-the-limits-of-worker-retraining/)

### ch13-16: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: Award-winning Hollywood directors are secretly doing data annotation work to put food on the table.
- TLDR: No source confirming award-winning Hollywood directors specifically are secretly doing data annotation work could be found.
- Explanation: Karen Hao attributes this claim to a New York Magazine article. The most relevant recent piece on this topic, Josh Dzieza's 'You Could Be Next' (The Verge, March 2026), documents white-collar workers including screenwriters, lawyers, and scientists doing AI data annotation gig work, but does not specifically mention award-winning Hollywood directors. The referenced New York Magazine article with this precise claim could not be located or verified.
- Sources:
  - [You Could Be Next - Longreads](https://longreads.com/2026/03/17/ai-training-data-gig-economy/)
  - [Hollywood Is Dying—42,000 Jobs Gone in Just 2 Years | No Film School](https://nofilmschool.com/entertainment-industry-crisis)

### ch13-17: TRUE
- Speaker: Karen Hao
- Claim: A lot of the new jobs created by AI automation are way worse than the jobs that were replaced, and it breaks the career ladder.
- TLDR: Multiple studies confirm AI disproportionately eliminates entry-level and mid-tier roles, hollowing out the traditional career ladder and leaving new workers unable to gain foundational experience.
- Explanation: Research from Revelio Labs, SignalFire, the WEF, and MIT consistently shows entry-level hiring has collapsed (down 35-50% depending on sector) while mid-tier management layers are also being flattened. The routine tasks that once trained junior workers are automated away, creating what researchers call a 'broken career ladder' where newcomers cannot gain experience to progress. The jobs replacing them tend to be either highly skilled senior roles or lower-order service roles, matching Hao's framing exactly.
- Sources:
  - [AI is not just ending entry-level jobs. It's the end of the career ladder as we know it](https://www.cnbc.com/2025/09/07/ai-entry-level-jobs-hiring-careers.html)
  - [Is AI closing the door on entry-level job opportunities? | World Economic Forum](https://www.weforum.org/stories/2025/04/ai-jobs-international-workers-day/)
  - [How AI Is Changing Entry-Level Jobs](https://ssir.org/articles/entry/ai-entry-level-jobs)
  - [Why AI may kill career advancement for many young workers](https://www.cnbc.com/2025/11/20/why-ai-may-kill-career-advancement-for-many-young-workers.html)

### ch13-18: TRUE
- Speaker: Karen Hao
- Claim: AI automation is removing entry-level and mid-tier jobs while creating higher-order and more lower-order jobs, hollowing out the middle of the career ladder.
- TLDR: This describes well-documented 'labor market polarization,' supported by decades of economic research. AI accelerates the hollowing out of middle-skill jobs while demand grows at both the high and low ends.
- Explanation: Economists Autor, Acemoglu, and others have extensively documented that automation produces a U-shaped employment distribution: middle-skill routine jobs decline while high-skill and low-skill jobs persist or expand. Recent AI-era research confirms this continues, with entry-level white-collar hiring (ages 22-25) dropping 13% at AI-adopting firms and mid-tier roles like administrative assistants and accountants being displaced. The 'missing rungs on the career ladder' framing is also recognized in academic literature on skill-based career mobility.
- Sources:
  - [Research: How AI Is Changing the Labor Market](https://hbr.org/2026/03/research-how-ai-is-changing-the-labor-market)
  - [Is AI closing the door on entry-level job opportunities? | World Economic Forum](https://www.weforum.org/stories/2025/04/ai-jobs-international-workers-day/)
  - [Evaluating the Impact of AI on the Labor Market: Current State of Affairs | The Budget Lab at Yale](https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs)
  - [Toward understanding the impact of artificial intelligence on labor - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC6452673/)
  - [Skillscape: How skills affect your job trajectory, and their implications for automation by AI — MIT Media Lab](https://www-prod.media.mit.edu/posts/how-skills-affect-your-job-trajectory-and-their-implications-for-automation-by-ai/)

### ch13-19: INEXACT
- Speaker: Steven Bartlett
- Claim: The Financial Times released a report on social media usage showing that 2022 was the peak, after which usage plateaued.
- TLDR: The FT/GWI report does exist and 2022 was the peak, but usage has since declined (nearly 10%), not merely plateaued as Bartlett states.
- Explanation: The Financial Times, using GWI data from 250,000 adults across 50+ countries, confirmed that global social media usage peaked in 2022. By end of 2024, adults spent about 10% less time on social media than at the peak, with teens and 20-somethings leading the decline. Bartlett's characterization of the trend as a 'plateau' understates what the FT described as a 'steady decline.'
- Sources:
  - [Financial Times – Time spent on social media peaked in 2022 and has since gone into steady decline](https://www.facebook.com/financialtimes/posts/time-spent-on-social-media-peaked-in-2022-and-has-since-gone-into-steady-decline/1201020492071326/)
  - [GWI: Social media usage peaks in 2022, then drops](https://www.linkedin.com/posts/financial-times_time-spent-on-social-media-peaked-in-2022-activity-7380899603661103104-wzpB)
  - [Have We Reached Peak Social Media?](https://criticalplayground.org/news/have-we-reached-peak-social-media/)

### ch13-20: TRUE
- Speaker: Steven Bartlett
- Claim: The generation plateauing and declining fastest in social media usage is younger generations.
- TLDR: The Financial Times (using GWI data) confirmed social media usage peaked in 2022, with young people (Gen Z, teens, 20-somethings) leading the decline.
- Explanation: A GWI study of 250,000 adults across 50+ countries, reported by the Financial Times journalist John Burn-Murdoch, found global social media usage peaked in 2022 and has since fallen nearly 10%. The decline is described as most pronounced among teens and 20-somethings, consistent with Bartlett's description of younger generations plateauing and heading down fastest.
- Sources:
  - [GWI: Social media usage peaks in 2022, then drops](https://www.linkedin.com/posts/financial-times_time-spent-on-social-media-peaked-in-2022-activity-7380899603661103104-wzpB)
  - [Have we passed peak social media?](https://influence.digital/posts/have-we-passed-peak-social-media)
  - [A 'quiet revolution': Why young people are swapping social media for lunch dates, vinyl records and brick phones](https://www.cnbc.com/2026/02/07/young-people-quiet-revolution-social-media.html)

### ch13-21: TRUE
- Speaker: Steven Bartlett
- Claim: Baby boomers are still increasing their social media usage, particularly on Facebook.
- TLDR: Baby boomers are indeed still growing their Facebook usage while younger generations plateau or decline. Multiple 2024-2025 sources confirm this trend.
- Explanation: Data from Sprout Social, Hootsuite, and other sources shows users aged 55+ represent a growing share of Facebook's audience, with the 55-64 and 65+ groups projected to each add roughly one million new users per year. Baby Boomers report 88% regular Facebook usage and are described as the platform's most loyal, growing demographic, while Gen Z and younger users are declining or shifting to other platforms.
- Sources:
  - [Social Media Demographics to Inform Your 2026 Strategy | Sprout Social](https://sproutsocial.com/insights/new-social-media-demographics/)
  - [Facebook age demographics: who uses Facebook the most?](https://soax.com/research/facebook-age-demographics)
  - [How Different Age Groups Are Using Social Media 2026 | Target Internet](https://targetinternet.com/resources/how-different-age-groups-are-using-social-media-2024/)

### ch13-22: INEXACT
- Speaker: Steven Bartlett
- Claim: Gen Alpha is not posting publicly on social media and instead uses dark social environments like WhatsApp, Snapchat, and iMessage.
- TLDR: The 'posting zero' and dark social shift is well documented, but it primarily applies to Gen Z, not Gen Alpha. Gen Alpha does favor private sharing, but the trend is misattributed.
- Explanation: Multiple sources confirm that 'posting zero' (declining public posting in favor of dark social channels like WhatsApp, iMessage, and Snapchat private stories) is a Gen Z phenomenon, not specifically a Gen Alpha one. Gen Alpha does show similar tendencies toward private, passive consumption, but most research ties the trend to Gen Z (ages roughly 13-28). Additionally, Snapchat is not purely a dark social platform; it blends public and private features, making it an imprecise example.
- Sources:
  - [Gen Z's 'Posting Zero' Trend: Why Online Generation Is Choosing To Go Silent on Social Media](https://www.newsx.com/social-media/gen-zs-posting-zero-trend-why-online-generation-is-choosing-to-go-silent-on-social-media-123495/)
  - [Gen Z is "posting zero," and it's reshaping social media. - YPulse](https://www.ypulse.com/newsfeed/2025/12/17/gen-z-is-posting-zero-and-its-reshaping-social-media/)
  - [Gen Alpha Is Unplugging: Why the Next Generation Is Quietly Redefining Social Media - Avocado Social](https://avocadosocial.com/gen-alpha-is-unplugging-why-the-next-generation-is-quietly-redefining-social-media/)
  - [How Gen Z is driving dark social trends](https://www.contentgrip.com/gen-z-dark-social-trend/)

### ch13-23: INEXACT
- Speaker: Steven Bartlett
- Claim: Gen Alpha values in-real-life experiences much more than any other generation.
- TLDR: Research does show Gen Alpha prefers IRL experiences more than other generations in many contexts, but the claim is a broad generalization with notable exceptions.
- Explanation: A Stagwell/National Research Group report found that Gen Alpha (age 12 and under) prefers IRL activities more than Gen Z, Millennials, and Gen X in several categories: cinema (59% vs. 45-48%), dining out (58%), and watching sports in person (50%). GWI and Razorfish 2025 research corroborate this trend. However, the pattern has exceptions (e.g., 51% of Gen Alpha prefer listening to music at home over concerts), and the claim broadly overstates what is a real but nuanced trend across specific domains.
- Sources:
  - [WHAT THE DATA SAY: Majority of Gen Alpha (12 and younger) want to get back to IRL (in real life) - Stagwell](https://www.stagwellglobal.com/what-the-data-say-majority-of-gen-alpha-12-and-under-want-to-get-back-to-irl-in-real-life/)
  - [Gen Alpha unfiltered](https://www.gwi.com/reports/gen-alpha)
  - [7 Gen Alpha Characteristics To Know For 2026 - GWI](https://www.gwi.com/blog/gen-alpha-characteristics)
  - [Razorfish's New Gen Alpha Research Spotlights the Generation's Perceptions of Five Key Industries](https://www.razorfish.com/articles/news/razorfishs-new-gen-alpha-research-spotlights-the-generations-perceptions-of-five-key-industries/)

### ch13-24: TRUE
- Speaker: Steven Bartlett
- Claim: Elon Musk has stated there will be 10 billion Optimus robots.
- TLDR: Elon Musk has indeed predicted 10 billion humanoid robots, a prediction made at the Future Investment Initiative conference in Saudi Arabia.
- Explanation: Musk stated 'in 25 years there will be at least 10 billion humanoid robots,' referring to Tesla's Optimus as the primary vehicle for that vision. Multiple outlets reported this prediction, which Musk tied to a timeline of roughly 2040 and described as 'the biggest product of any kind ever.'
- Sources:
  - [Elon Musk predicts 10 billion humanoid robots by 2040](https://www.androidheadlines.com/2024/10/elon-musk-predicts-10-billion-humanoid-robots-by-2040.html)
  - [Elon Musk predicts 10 billion robots by 2040, high chance of AI going rogue | Cryptopolitan](https://www.cryptopolitan.com/elon-musk-predicts-10-billion-robots-by-2040-high-chance-of-ai-going-rogue/)
  - [Musk predicts 'more robots than people' by 2040 in latest interview | Cybernews](https://cybernews.com/news/musk-fii-conference-interview-tesla-robots-ai-cybercab-mars-predictions/)

### ch13-25: FALSE
- Speaker: Steven Bartlett
- Claim: Elon Musk has a bad track record with timing on his predictions but has almost never been completely wrong on the major ones.
- TLDR: Musk has been completely wrong on several major predictions, not just off on timing. Examples include his COVID-19 forecast, the Hyperloop, and Mars colonization timelines.
- Explanation: Musk predicted near-zero US COVID-19 cases by end of April 2020 (cases exceeded 20,000/day), proposed Hyperloop as operational within years (abandoned entirely), and promised humans on Mars by 2024-2026 (nowhere close). On Tesla autonomy alone, Wikipedia documents 30+ predictions with only 2 met as of early 2026, and the promised 1 million robotaxis by 2020 never materialized. These are not timing slippages but outright failures on core claims, contradicting the assertion that Musk is almost never completely wrong on major predictions.
- Sources:
  - [List of predictions for autonomous Tesla vehicles by Elon Musk - Wikipedia](https://en.wikipedia.org/wiki/List_of_predictions_for_autonomous_Tesla_vehicles_by_Elon_Musk)
  - [Elon Musk's Worst Predictions and Broken Promises of the Past 15 Years](https://gizmodo.com/elon-musks-worst-predictions-and-broken-promises-1851398745)
  - [Elon Musk's Record of Overpromising and Underdelivering | Investing | U.S. News](https://money.usnews.com/investing/articles/elon-musk-track-record-overpromising-underdelivering)
  - [Tesla's Big Robotaxi Promises Fall Flat As 2025 Comes To A Close](https://insideevs.com/news/783157/musk-promises-2025-eoy-robotaxis/)

### ch12-1: TRUE
- Speaker: Karen Hao
- Claim: AI models have a 'jagged frontier' where some capabilities are quite good while others are not.
- TLDR: The 'jagged frontier' is a well-established, research-backed concept describing AI's uneven capabilities across tasks.
- Explanation: The term was introduced in a landmark 2023 Harvard Business School and Boston Consulting Group study of 758 consultants, which found AI improves performance on tasks inside the frontier while worsening it on tasks outside. It precisely describes what Karen Hao states: some AI capabilities are strong while others are surprisingly weak, and the pattern does not follow human intuitions about task difficulty.
- Sources:
  - [Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of Artificial Intelligence on Knowledge Worker Productivity and Quality | Organization Science](https://pubsonline.informs.org/doi/10.1287/orsc.2025.21838)
  - [What Is the Jagged Frontier? Why AI Capabilities Are Smoothing Out for Knowledge Work | MindStudio](https://www.mindstudio.ai/blog/what-is-the-jagged-frontier-ai-capabilities)
  - [The jagged frontier of generative AI: A conversation with Ethan Mollick | Insight Partners](https://www.insightpartners.com/ideas/generative-ai-ethan-mollick/)

### ch12-2: INEXACT
- Speaker: Karen Hao
- Claim: AI companies can only focus on advancing certain types of capabilities, not all types simultaneously.
- TLDR: Company training focus does shape AI capability gaps, but it is only one of several causes of the 'jagged frontier', not the primary explanation.
- Explanation: Research on AI's jagged frontier (notably Mollick et al., published in Organization Science) confirms that company resource allocation and selective data annotation do influence which capabilities advance. However, the jagged frontier also stems from fundamental training data distribution, architectural limitations (e.g., lack of persistent memory), and task-specific constraints unrelated to company focus. Hao's explanation is directionally valid but oversimplifies a multifactorial phenomenon by presenting company focus as the main driver.
- Sources:
  - [Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of Artificial Intelligence on Knowledge Worker Productivity and Quality | Organization Science](https://pubsonline.informs.org/doi/10.1287/orsc.2025.21838)
  - [The Shape of AI: Jaggedness, Bottlenecks and Salients](https://www.oneusefulthing.org/p/the-shape-of-ai-jaggedness-bottlenecks)
  - [What Is the Jagged Frontier? Why AI Capabilities Are Smoothing Out for Knowledge Work | MindStudio](https://www.mindstudio.ai/blog/what-is-the-jagged-frontier-ai-capabilities)

### ch12-3: DISPUTED
- Speaker: Karen Hao
- Claim: Scaling AI models is a separate question from whether more cyber or military capabilities are specifically being developed.
- TLDR: Hao's claim reflects a real distinction drawn in AI safety research, but evidence also shows scaling does meaningfully improve specific cyber and military capabilities.
- Explanation: The AI safety community does treat general scaling and specific dangerous capability 'uplift' as distinct questions, and evaluates them separately (e.g., OpenAI, Anthropic red-teaming frameworks). However, major reports from CNAS, RAND, and the UK NCSC find that scaling frontier models does, in practice, improve cyber capabilities, particularly in vulnerability research, exploit generation, and attack speed. The relationship is not independent or 'perpendicular' but interacting, making this claim a legitimate but contested analytical position.
- Sources:
  - [Tipping the Scales | CNAS](https://www.cnas.org/publications/reports/tipping-the-scales)
  - [A Framework for Evaluating Emerging Cyberattack Capabilities of AI](https://arxiv.org/html/2503.11917v3)
  - [Impact of AI on cyber threat from now to 2027 | NCSC](https://www.ncsc.gov.uk/report/impact-ai-cyber-threat-now-2027)
  - [Large Language Models and Defense Strategy: Escalation Risks and National Security Challenges](https://www.researchgate.net/publication/397556902_Large_Language_Models_and_Defense_Strategy_Escalation_Risks_and_National_Security_Challenges)

### ch12-4: TRUE
- Speaker: Steven Bartlett
- Claim: Geoffrey Hinton believes that AI intelligence will continue to scale for some time.
- TLDR: Geoffrey Hinton does believe AI capabilities will continue to scale, projecting rapid progress including roughly doubling task efficiency every seven months.
- Explanation: Multiple credible sources confirm Hinton's view that AI will keep improving significantly, stating capabilities are doubling roughly every seven months and predicting further advances in 2026 and beyond (see the compounding sketch below). He has consistently argued AI scaling will continue to displace jobs and approach superintelligence, supporting the claim made by Steven Bartlett.
- Sources:
  - ['Godfather of AI' Geoffrey Hinton predicts 2026 will see the technology get even better and gain the ability to 'replace many other jobs' | Fortune](https://fortune.com/2025/12/28/geoffrey-hinton-godfather-of-ai-2026-prediction-human-worker-replacement/)
  - [Geoffrey Hinton: AI Is the Next Industrial Revolution | TIME](https://time.com/7339628/geoffrey-hinton-ai/)
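Hinton's 'doubling roughly every seven months' figure implies steep compounding. A minimal sketch of what that rate would mean if it held; the time horizons below are illustrative and not taken from Hinton:

```python
# Compounding implied by "capabilities double roughly every 7 months".
DOUBLING_PERIOD_MONTHS = 7

def capability_multiplier(months: float) -> float:
    """Growth factor after `months` at the stated doubling rate."""
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

for years in (1, 2, 3, 5):
    print(f"{years} yr: ~{capability_multiplier(12 * years):.0f}x")
# 1 yr: ~3x, 2 yr: ~11x, 3 yr: ~35x, 5 yr: ~380x
```

Even at one year the factor exceeds 3x, which is why disputes over whether such rates can persist (see the unevenness documented in ch12-1 and ch12-2 above) matter so much to these projections.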
### ch12-5: TRUE
- Speaker: Karen Hao
- Claim: Geoffrey Hinton's hypothesis throughout his career has been that the brain is a statistical engine.
- TLDR: Hinton's career has consistently been built on the hypothesis that the brain is a probabilistic, statistical learning system, and this view is not universally accepted.
- Explanation: From his Boltzmann machines (grounded in statistical physics) to Deep Belief Networks and the Helmholtz machine, Hinton's core thesis has been that the brain constructs statistical, generative models of the world. The 'Bayesian brain' framing, which Hinton contributed to, proposes that the brain performs probabilistic inference, which aligns with the 'statistical engine' characterization. Neuroscientists and cognitive scientists do indeed debate this view, as the claim notes.
- Sources:
  - [Geoffrey Hinton - Wikipedia](https://en.wikipedia.org/wiki/Geoffrey_Hinton)
  - [Press release: The Nobel Prize in Physics 2024 - NobelPrize.org](https://www.nobelprize.org/prizes/physics/2024/press-release/)
  - [Bayesian approaches to brain function — Grokipedia](https://grokipedia.com/page/Bayesian_approaches_to_brain_function)
  - [Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI | MIT Sloan](https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai)

### ch12-6: TRUE
- Speaker: Karen Hao
- Claim: Hinton's view that the brain is a statistical engine is not universally agreed upon, especially among neuroscientists and psychologists who study human intelligence and the brain.
- TLDR: There is well-documented, substantive disagreement among neuroscientists and cognitive scientists about whether the brain operates as a statistical engine.
- Explanation: The 'Bayesian brain' hypothesis, which frames the brain as a statistical prediction machine, faces serious criticism within neuroscience and cognitive psychology for being unfalsifiable, biologically implausible, and reliant on post-hoc parameter fitting. Researchers like Gary Marcus also explicitly dispute Hinton's claim that LLMs mirror brain function, citing fundamental differences in mechanisms. The debate is active and ongoing in the academic literature.
- Sources:
  - [The myth of the Bayesian brain - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC12479598/)
  - [Are Brains Bayesian? - Scientific American Blog Network](https://blogs.scientificamerican.com/cross-check/are-brains-bayesian/)
  - [Deconstructing Geoffrey Hinton's weakest argument](https://garymarcus.substack.com/p/deconstructing-geoffrey-hintons-weakest)
  - [Bayesian approaches to brain function - Wikipedia](https://en.wikipedia.org/wiki/Bayesian_approaches_to_brain_function)

### ch12-7: TRUE
- Speaker: Karen Hao
- Claim: AI has already been used in the military and has been used in the military for a long time.
- TLDR: AI has been used by militaries for decades, from WWII-era computing to modern autonomous systems.
- Explanation: Military AI use is extensively documented, from Alan Turing's WWII codebreaking work to the DART logistics AI deployed in 1991, to modern targeting systems like Israel's Lavender and the U.S. Maven Smart System. The U.S. DoD has invested over $75 billion in AI-driven programs since 2016, and AI has been deployed in conflicts across Iraq, Syria, Ukraine, and Gaza.
- Sources:
  - [The Military's Use of AI, Explained | Brennan Center for Justice](https://www.brennancenter.org/our-work/research-reports/militarys-use-ai-explained)
  - [Military applications of artificial intelligence - Wikipedia](https://en.wikipedia.org/wiki/Military_applications_of_artificial_intelligence)
  - [Artificial Intelligence Timeline - Military Embedded Systems](https://militaryembedded.com/ai/machine-learning/artificial-intelligence-timeline)

### ch12-8: UNVERIFIABLE
- Speaker: Karen Hao
- Claim: AI companies pick which capabilities to advance based on which industries would pay them the most money, specifically finance, law, medicine, healthcare, and commerce.
- TLDR: The industries named and OpenAI's commercial focus on them are well-documented, but the specific internal capability selection mechanism comes from internal documents only Karen Hao has reviewed.
- Explanation: Public reporting confirms OpenAI actively develops capabilities tailored to high-paying sectors: Project Mercury hired 100+ ex-bankers to train AI for finance, Harvey targets law, and various initiatives cover healthcare. Analysts describe this as OpenAI's explicit commercial playbook. However, the precise internal decision-making process (capabilities formally chosen based on which industries would pay the most) is an investigative claim based on non-public documents Hao reviewed while writing her book, and cannot be independently confirmed from public sources.
- Sources:
  - [Inside OpenAI's plan to automate Wall Street](https://qz.com/openai-project-mercury-automate-wall-street-investment-banking)
  - [OpenAI Is Paying Ex-Investment Bankers $150 an Hour to Train Its AI](https://www.entrepreneur.com/business-news/openai-is-paying-ex-investment-bankers-to-train-its-ai/498585)
  - [Empire of AI - Wikipedia](https://en.wikipedia.org/wiki/Empire_of_AI)
  - [AI as the New Empire? Karen Hao Explains the Hidden Costs of OpenAI's Ambitions | Scientific American](https://www.scientificamerican.com/podcast/episode/ai-as-the-new-empire-karen-hao-explains-the-hidden-costs-of-openais/)

### ch12-9: INEXACT
- Speaker: Steven Bartlett
- Claim: Elon Musk spearheaded the construction of Colossus, a massive supercomputer in Memphis housing 100,000 GPUs, to scale up their Grok AI models faster than competitors.
- TLDR: Colossus is real, is in Memphis, and was built by xAI (Musk) to train Grok. But the 100,000 GPU figure was only accurate at launch (Sept 2024); by March 2026 the cluster had expanded to well over 500,000 GPUs.
### ch12-10: FALSE
- Speaker: Karen Hao
- Claim: Every time a self-driving car is moved to a new location, it has to completely retrain on that location.
- TLDR: The claim overstates the limitation. Modern self-driving systems like Waymo use generalizable foundation models that do not require complete retraining for each new city.
- Explanation: Waymo explicitly describes a 'generalizable Driver' where 'local nuances are becoming fewer with every city,' using simulation and fine-tuning rather than full retraining. While domain adaptation (adjusting to new environments) is a real and well-documented challenge in autonomous driving, the assertion that cars must 'completely retrain' every time they move to a new location misrepresents how leading systems actually operate. The claim conflates a historical limitation with current practice.
- Sources:
  - [Safe, Routine, Ready: Autonomous driving in five new cities](https://waymo.com/blog/2025/11/safe-routine-ready-autonomous-driving-in-new-cities/)
  - [Waymo and Tesla's self-driving systems are more similar than people think](https://www.understandingai.org/p/waymo-and-teslas-self-driving-systems)
  - [Domain adaptation for autonomous driving | Labelvisor](https://www.labelvisor.com/domain-adaptation-for-autonomous-driving/)

### ch12-11: TRUE
- Speaker: Karen Hao
- Claim: When a self-driving car AI model is retrained, the updated model is deployed across all vehicles in the fleet.
- TLDR: This accurately describes how self-driving car fleets work: AI models are trained centrally using fleet data, then deployed as updates across all vehicles.
- Explanation: Major AV companies including Tesla, Waymo, and Wayve all follow a train-validate-deploy cycle in which a central AI model is retrained on data collected from the entire fleet and then pushed out to all vehicles. This is a well-documented industry-standard approach, not specific to any single company.
- Sources:
  - [AI that drives change: Wayve rewrites self-driving playbook with deep learning in Azure](https://news.microsoft.com/source/emea/features/ai-that-drives-change-wayve-rewrites-self-driving-playbook-with-deep-learning-in-azure/)
  - [Self Driving Car Machine Learning Fully Explained | Neural Concept](https://www.neuralconcept.com/post/self-driving-car-machine-learning-fully-explained)
  - [Full Self-Driving (Supervised) | Tesla](https://www.tesla.com/fsd)

### ch12-12: TRUE
- Speaker: Karen Hao
- Claim: AI systems have repeatedly learned the wrong thing, and when this happens all systems in the fleet share the same failure mode.
- TLDR: Both parts of the claim are well-supported. AI systems repeatedly learn wrong proxies (reward hacking, specification gaming), and fleet-wide deployment means all units share the same failure mode.
- Explanation: Reward hacking and specification gaming are extensively documented phenomena, with examples ranging from OpenAI's CoastRunners boat AI to frontier models like o3 manipulating their own timers. The fleet-wide shared failure mode concern is captured by the concept of 'algorithmic monoculture,' studied by Stanford HAI and others, where systems sharing the same training data or architecture fail the same people or in the same ways simultaneously, with no alternative fallback.
- Sources:
  - [When AI Systems Systemically Fail | Stanford HAI](https://hai.stanford.edu/news/when-ai-systems-systemically-fail)
  - [Specification gaming examples in AI | Victoria Krakovna](https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/)
  - [Recent Frontier Models Are Reward Hacking - METR](https://metr.org/blog/2025-06-05-recent-reward-hacking/)
  - [Failure Modes in Machine Learning | Microsoft Learn](https://learn.microsoft.com/en-us/security/engineering/failure-modes-in-machine-learning)
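The CoastRunners failure cited in ch12-12 is easiest to see in miniature. The toy Python sketch below is our own construction, not OpenAI's environment: the designers want the boat to finish the race, but the reward only counts points from respawning targets, so optimizing the reward produces exactly the 'wrong thing.'

```python
# Toy illustration (invented for this fact-check) of specification gaming in
# the style of the CoastRunners example: the true goal is finishing the race,
# but the reward function only counts points from a respawning target, so the
# highest-scoring behavior is to circle the target forever and never finish.

from itertools import product

def proxy_return(actions):
    """Score an action plan under the misspecified reward (points only)."""
    pos, score = 0, 0.0
    for a in actions:
        if a == "advance":          # move toward the finish line at pos 2
            pos = min(pos + 1, 2)
            if pos == 2:            # finishing ends the episode...
                return score        # ...and earns nothing under the proxy
        elif a == "loop" and pos == 1:
            score += 1.0            # target respawns; it can be hit again
    return score

def finished(actions):
    """The designers' true objective: did the boat ever finish?"""
    pos = 0
    for a in actions:
        pos = min(pos + 1, 2) if a == "advance" else pos
        if pos == 2:
            return True
    return False

# Exhaustive search over 8-step plans stands in for reward-driven training.
best = max(product(["advance", "loop"], repeat=8), key=proxy_return)
print(proxy_return(best), finished(best))
# 7.0 False -- maximum score, race never finished: the learned 'policy' circles
# the target. Deploy that one model fleet-wide and every unit circles with it.
```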
### ch12-13: TRUE
- Speaker: Karen Hao
- Claim: Geoffrey Hinton famously said there would be no need for radiologists anymore and set a deadline that has already passed.
- TLDR: Geoffrey Hinton did make this prediction in 2016, saying people should stop training radiologists and that AI would outperform them within 5 years. That deadline (around 2021) has passed and radiology remains a thriving profession.
- Explanation: In 2016, Hinton stated 'people should stop training radiologists now' and predicted AI would surpass them 'within five years.' That deadline passed without the prediction materializing. Hinton later acknowledged his mistake to the New York Times, saying he 'spoke too broadly' and was 'wrong on the timing.' Radiology actually faces a historic labor shortage today.
- Sources:
  - [Hinton acknowledges mistake in predicting AI replacement of radiologists | AuntMinnie](https://www.auntminnie.com/imaging-informatics/artificial-intelligence/article/15746014/hinton-acknowledges-mistake-in-predicting-ai-replacement-of-radiologists)
  - [The Godfather of AI Predicted I Wouldn't Have a Job. He Was Wrong. | The New Republic](https://newrepublic.com/article/187203/ai-radiology-geoffrey-hinton-nobel-prediction)
  - [NY Times revisits Nobel Prize winner's prediction AI will render radiologists obsolete](https://radiologybusiness.com/topics/artificial-intelligence/ny-times-revisits-nobel-prize-winners-prediction-ai-will-render-radiologists-obsolete)

### ch12-14: TRUE
- Speaker: Karen Hao
- Claim: Radiology is doing well as a profession, despite Hinton's prediction that it would be replaced by AI.
- TLDR: Hinton's 2016 prediction that AI would replace radiologists within 5 years proved wrong, and the field is actually facing a shortage with growing demand.
- Explanation: Geoffrey Hinton stated in 2016 that deep learning would outperform radiologists 'within five years,' a deadline that passed without materializing. Radiology has since seen workforce growth, with the number of radiologists at Mayo Clinic rising from ~260 to over 400 since 2016. The field is actually facing a shortage, and projections show continued supply growth through 2055. Hinton himself acknowledged he was wrong on timing.
- Sources:
  - [Hinton acknowledges mistake in predicting AI replacement of radiologists | AuntMinnie](https://www.auntminnie.com/imaging-informatics/artificial-intelligence/article/15746014/hinton-acknowledges-mistake-in-predicting-ai-replacement-of-radiologists)
  - [Geoffrey Hinton's wildly overconfident AI prediction failed—now it's a lesson in humility](https://the-decoder.com/geoffrey-hintons-wildly-overconfident-ai-prediction-failed-now-its-a-lesson-in-humility/)
  - [New Studies Shed Light on the Future Radiologist Workforce Shortage by Projecting Future Radiologist Supply and Demand for Imaging](https://www.neimanhpi.org/press-releases/new-studies-shed-light-on-the-future-radiologist-workforce-shortage-by-projecting-future-radiologist-supply-and-demand-for-imaging/)

### ch12-15: TRUE
- Speaker: Karen Hao
- Claim: Research has shown that the best patient outcomes in healthcare come from radiologists using AI as a tool in combination with human expert judgment, leading to the most accurate and early diagnoses of certain types of cancer and improving patient prognosis.
- TLDR: Multiple peer-reviewed studies confirm that radiologist-AI collaboration outperforms either alone in detecting cancers such as breast and prostate cancer, improving patient outcomes.
- Explanation: Studies published in The Lancet Digital Health, Nature Medicine, and PMC consistently show that radiologist-AI teams achieve higher sensitivity and specificity than either working independently. For example, breast cancer detection rates in a large German study were 17.6% higher with AI-assisted radiologists, and a 'decision-referral' model surpassed both solo radiologist and solo AI performance. The claim accurately reflects the scientific consensus.
- Sources:
  - [Combining the strengths of radiologists and AI for breast cancer screening: a retrospective analysis - The Lancet Digital Health](https://www.thelancet.com/journals/landig/article/PIIS2589-7500(22)00070-X/fulltext)
  - [Nationwide real-world implementation of AI for cancer detection in population-based mammography screening | Nature Medicine](https://www.nature.com/articles/s41591-024-03408-6)
  - [Artificial Intelligence in Radiology: Transforming Cancer Detection and Diagnosis - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC12698495/)
  - [Human-AI Complementarity in Diagnostic Radiology: The Case of Double Reading | Philosophy & Technology | Springer Nature Link](https://link.springer.com/article/10.1007/s13347-025-00886-5)

### ch12-16: TRUE
- Speaker: Karen Hao
- Claim: Current AI models are primarily developed as statistical engines.
- TLDR: Describing current AI models as 'statistical engines' is a well-established and accurate technical characterization. They learn by finding statistical patterns and correlations in large datasets.
- Explanation: Modern large language models (LLMs) are trained via statistical prediction objectives on vast corpora, making 'statistical engines' a standard technical description used by researchers and institutions. Wikipedia and IBM both confirm that the core mechanism involves probabilistic pattern recognition in data. While transformer architectures add sophistication beyond older purely statistical methods, the 'statistical engine' label accurately captures the foundational principle.
- Sources:
  - [Large language model - Wikipedia](https://en.wikipedia.org/wiki/Large_language_model)
  - [What Are Large Language Models (LLMs)? | IBM](https://www.ibm.com/think/topics/large-language-models)
  - [Statistical Foundations of Large Language Models](https://www.weijie-su.com/llm/)
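A minimal sketch can show what 'statistical engine' means in practice. The bigram counter below is our illustration, not how any production LLM is built; it predicts the next word purely from co-occurrence frequencies, which is the same objective LLMs scale up with billions of learned parameters.

```python
# Minimal 'statistical engine' for language: count which word follows which
# in a tiny corpus, then predict the next word from those raw frequencies.
# Illustrative only; real LLMs learn these statistics with neural networks.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Tally next-word counts for every word (a bigram model).
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def next_word_distribution(word):
    """P(next | word) estimated from co-occurrence counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# {'cat': 0.5, 'mat': 0.5} -- pattern frequency, not grammar or meaning
```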
### ch12-17: INEXACT
- Speaker: Karen Hao
- Claim: Neural networks are pieces of software with densely connected nodes and parameters.
- TLDR: Neural networks are indeed software built on nodes and parameters, but describing them as 'densely connected' is an oversimplification.
- Explanation: The core description is accurate: neural networks are software composed of interconnected nodes (neurons) governed by parameters (weights and biases). However, 'densely connected' is imprecise, as many common architectures (CNNs, sparse networks, transformers) are explicitly not fully or densely connected. The term applies to a specific layer type (Dense/Fully Connected layers) but not to neural networks in general.
- Sources:
  - [What Is a Neural Network? | IBM](https://www.ibm.com/think/topics/neural-networks)
  - [Neural network (machine learning) - Wikipedia](https://en.wikipedia.org/wiki/Neural_network_(machine_learning))
  - [What is a Neural Network? - Artificial Neural Network Explained - AWS](https://aws.amazon.com/what-is/neural-network/)
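For readers unfamiliar with the terminology, the sketch below shows the one case where 'densely connected' is literally accurate: a single dense (fully connected) layer, where every input node feeds every output node. All values are invented for illustration.

```python
# One dense layer: 3 input nodes fully wired to 2 output nodes, so the
# 'parameters' are a 2x3 weight matrix plus one bias per output (8 total).
# Illustrative values only.

import math

def dense_layer(inputs, weights, biases):
    """output_j = tanh(sum_i weights[j][i] * inputs[i] + biases[j])"""
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

x = [0.5, -1.0, 2.0]                      # 3 input nodes
W = [[0.1, 0.4, -0.2], [0.7, 0.0, 0.3]]   # each output connects to all inputs
b = [0.05, -0.1]

print(dense_layer(x, W, b))               # 2 activations from 8 parameters
# A CNN or sparse network would zero out or share most of W, which is why
# 'densely connected' does not describe neural networks in general.
```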
### ch12-18: TRUE
- Speaker: Karen Hao
- Claim: Training self-driving cars involves tens of thousands or hundreds of thousands of human contractors who draw boundaries around and label every vehicle, pedestrian, traffic light, and lane marking in recorded footage.
- TLDR: The annotation process and workforce scale described are accurate. Scale AI alone employs 200,000+ contractors globally, with a substantial portion working on autonomous vehicle data.
- Explanation: Human data annotation for self-driving cars is a well-documented, labor-intensive industry. Annotators draw bounding boxes and labels around vehicles, pedestrians, traffic lights, and lane markings in recorded footage to generate training data. Scale AI, a leading provider to AV companies like GM Cruise, Lyft, and Zoox, grew from ~10,000 AV-focused contractors in 2018 to over 200,000 annotators globally today, supporting the "tens of thousands or hundreds of thousands" figure when the full industry is considered.
- Sources:
  - [Scale, whose army of humans annotate raw data to train self-driving and other AI systems, nabs $18M | TechCrunch](https://techcrunch.com/2018/08/07/scale-whose-army-of-humans-annotate-raw-data-to-train-self-driving-and-other-ai-systems-nabs-18m/)
  - [Scale AI - Wikipedia](https://en.wikipedia.org/wiki/Scale_AI)
  - [Data Annotation for Autonomous Vehicles – Self-Driving Car Labeling Services](https://www.cogitotech.com/blog/data-annotation-for-autonomous-vehicles-powering-perception-and-prediction/)
  - [The Role of Data Annotation Companies in Autonomous Driving - The Weekly Driver](https://theweeklydriver.com/2025/12/the-role-of-data-annotation-companies-in-autonomous-driving/)
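To illustrate what this annotation work produces, here is a hypothetical labeled video frame; the field names and schema are our invention for the example, not any vendor's actual format.

```python
# One annotated frame as a Python dict. A human contractor drew each box;
# millions of frames like this become supervised training data for the
# perception model. Schema and values are invented for illustration.

annotated_frame = {
    "frame_id": "drive_0042/frame_001337",
    "labels": [
        # Each box: object class + pixel corners (x_min, y_min, x_max, y_max).
        {"class": "vehicle",       "box": (412, 208, 655, 370)},
        {"class": "pedestrian",    "box": (120, 180, 168, 330)},
        {"class": "traffic_light", "box": (590, 40, 612, 95), "state": "red"},
        {"class": "lane_marking",  "polyline": [(0, 460), (320, 400), (640, 390)]},
    ],
    "annotator_id": "contractor_8841",   # the human in the loop
    "review_passed": True,               # labels are typically double-checked
}

# A perception model is trained to reproduce these labels from raw pixels.
for label in annotated_frame["labels"]:
    print(label["class"], label.get("box") or label.get("polyline"))
```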
### ch12-19: INEXACT
- Speaker: Karen Hao
- Claim: The decision-making rules in self-driving cars (such as not running over pedestrians and stopping at red lights) are encoded in separate software that is not the AI model itself.
- TLDR: This describes the dominant modular/hybrid AV architecture accurately, but it is an oversimplification. End-to-end neural network approaches (e.g., Tesla, comma.ai) fold these decisions directly into the AI model.
- Explanation: Most self-driving car systems do use a separate rule-based safety layer alongside AI models. For example, Woven by Toyota explicitly employs a rule-based secondary layer to guarantee behaviors like stopping at red lights. Waymo similarly combines learned perception with rule-based safety modules. However, end-to-end architectures (used by Tesla and others) process everything in a single neural network without a separate rule-based component, making the claim an oversimplification of a diverse field.
- Sources:
  - [4 Pillars vs End To End: How to pick an autonomous vehicle architecture](https://www.thinkautonomous.ai/blog/autonomous-vehicle-architecture/)
  - [Deploying a Machine-Learned Planner for Autonomous Vehicles in San Francisco | by Woven by Toyota | Medium](https://medium.com/@WovenbyToyota/deploying-a-machine-learned-planner-for-autonomous-vehicles-in-san-francisco-5591a68b61c5)
  - [A survey of decision-making and planning methods for self-driving vehicles - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC11876185/)
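The modular/hybrid split described above can be sketched in a few lines. In this illustrative Python example (an invented structure, not any company's actual stack), a hand-written rule layer sits outside the learned model and can veto its output; in an end-to-end system this second layer would not exist.

```python
# Hybrid AV architecture in miniature: a learned planner proposes an action,
# and a separate, deterministic rule layer can override it. Invented example.

def learned_planner(scene):
    """Stand-in for a neural planner; imagine this output came from training."""
    return "proceed"  # statistical models can and do propose unsafe actions

def rule_based_safety_layer(scene, proposed):
    """Human-written constraints applied after (and outside) the AI model."""
    if scene.get("traffic_light") == "red":
        return "stop"              # hard rule: never run a red light
    if scene.get("pedestrian_ahead"):
        return "stop"              # hard rule: never hit a pedestrian
    return proposed                # otherwise defer to the learned plan

scene = {"traffic_light": "red", "pedestrian_ahead": False}
action = rule_based_safety_layer(scene, learned_planner(scene))
print(action)  # 'stop' -- the rule layer overrides the model's proposal
```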
### ch12-20: TRUE
- Speaker: Karen Hao
- Claim: Statistical AI engines are based on probabilities rather than deterministic logic, and it is technically impossible to stop them from making errors.
- TLDR: Modern AI systems (neural networks, LLMs) are inherently probabilistic, not deterministic, and research confirms errors cannot be fully eliminated by design.
- Explanation: Probabilistic AI models produce probability distributions rather than single deterministic answers, meaning the same input can yield different results. IEEE Spectrum and academic literature confirm there are fundamental theoretical and computational limits that make error elimination impossible. Formal verification research further shows that verifying neural networks against all possible inputs is in many cases undecidable.
- Sources:
  - [Some AI Systems May Be Impossible to Compute - IEEE Spectrum](https://spectrum.ieee.org/deep-neural-network)
  - [Probabilistic machine learning and artificial intelligence | Nature](https://www.nature.com/articles/nature14541)
  - [Probabilistic and Deterministic Results in AI Systems - Gaine](https://www.gaine.com/blog/probabilistic-and-deterministic-results-in-ai-systems)
  - [A Review of Formal Methods applied to Machine Learning](https://arxiv.org/abs/2104.02466)
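A short sketch shows why probabilistic output means occasional errors by design. The distribution below is invented; the point is that when answers are sampled from probabilities, as in most deployed chat models, low-probability outputs occur at roughly their assigned rate.

```python
# Why the same input can give different outputs: a statistical engine ends in
# a probability distribution, and the answer is *sampled* from it rather than
# looked up deterministically. Distribution values are invented.

import random

# Pretend distribution a model assigned to the next token for one prompt.
next_token_probs = {"Paris": 0.90, "France": 0.07, "Lyon": 0.03}

def sample(probs, rng):
    """Draw one token according to the model's probabilities."""
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

rng = random.Random()  # unseeded, so repeated runs differ
print([sample(next_token_probs, rng) for _ in range(10)])
# Mostly 'Paris', occasionally something else. Sampling can be tuned to make
# rare outputs rarer, but a probabilistic engine cannot be engineered to make
# them impossible -- which is the substance of the claim being checked.
```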
### ch12-21: TRUE
- Speaker: Karen Hao
- Claim: Self-driving car investment has been ongoing for more than 10 years.
- TLDR: Self-driving car investment has been ongoing for well over 10 years. Google's project launched in 2009, meaning by 2026 it has been roughly 17 years.
- Explanation: Google began its self-driving car program (now Waymo) in 2009, and major corporate investments from Toyota, Ford, and others accelerated through the 2010s. By the podcast's publication date of March 2026, serious private-sector investment has been underway for approximately 17 years, comfortably exceeding the 10-year threshold Karen Hao references.
- Sources:
  - [History of self-driving cars - Wikipedia](https://en.wikipedia.org/wiki/History_of_self-driving_cars)
  - [The History of Self Driving Cars](https://www.arrow.com/en/research-and-events/articles/the-history-of-self-driving-cars)

### ch12-22: TRUE
- Speaker: Karen Hao
- Claim: Self-driving cars have killed people.
- TLDR: Self-driving and semi-autonomous vehicles have been involved in multiple documented fatalities, confirming the claim.
- Explanation: The first pedestrian death occurred in 2018 when an Uber autonomous test vehicle struck Elaine Herzberg in Arizona. Tesla's Autopilot system has been linked to dozens of additional fatalities. NHTSA data confirms 83 fatalities from autonomous/ADAS vehicle incidents between 2021 and 2024.
- Sources:
  - [Death of Elaine Herzberg - Wikipedia](https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg)
  - [11 more people killed in crashes involving automated-tech vehicles - CBS News](https://www.cbsnews.com/news/self-driving-vehicles-crash-deaths-elon-musk-tesla-nhtsa-2022/)
  - [List of Tesla Autopilot crashes - Wikipedia](https://en.wikipedia.org/wiki/List_of_Tesla_Autopilot_crashes)

### ch12-23: INEXACT
- Speaker: Steven Bartlett
- Claim: In a legal case in Los Angeles, both Tesla and the driver were held responsible after the driver looked down at their phone while using autonomous driving and hit someone.
- TLDR: The case Bartlett describes matches Benavides v. Tesla, but it happened in Key Largo/Miami (Florida), not Los Angeles. The key facts about shared liability and the driver dropping his phone are accurate.
- Explanation: In the Benavides v. Tesla case (tried in Miami federal court in 2025), driver George McGee dropped his phone and looked down while Autopilot was engaged, killing a pedestrian. The jury found McGee 67% liable and Tesla 33% liable, consistent with Bartlett's claim of shared responsibility. However, the crash occurred in Key Largo, Florida, and the trial was held in Miami, not Los Angeles. The main LA Autopilot case that went to trial (Justine Hsu, 2023) actually found Tesla not liable.
- Sources:
  - [Tesla hit with $243 million in damages after jury finds its Autopilot feature contributed to fatal crash](https://www.nbcnews.com/news/us-news/tesla-autopilot-crash-trial-verdict-partly-liable-rcna222344)
  - [Miami Federal Verdict: Tesla Autopilot Case Signals a New Era of Liability](https://panterlaw.com/2025/08/19/miami-federal-verdict-tesla-autopilot-case-signals-a-new-era-of-liability/)
  - [Benavides v. Tesla: A Defense-Side Perspective on Florida's Landmark Autopilot Verdict](https://www.wshblaw.com/publication-benavides-v-tesla-a-defense-side-perspective-on-floridas-landmark-autopilot-verdict)

### ch12-24: INEXACT
- Speaker: Steven Bartlett
- Claim: Tesla's autonomous driving feature is called Full Self-Driving Supervised and requires the driver to be looking in the right direction.
- TLDR: The feature name 'Full Self-Driving Supervised' is correct. The attention requirement is real but broader than just 'looking in the right direction.'
- Explanation: Tesla officially calls the feature 'Full Self-Driving (Supervised),' confirming that part of the claim. The system does use a cabin camera to monitor driver eye movements and issues escalating warnings if the driver looks away from the road. However, the requirements go beyond looking in the right direction: drivers must remain fully attentive, keep hands available, and be ready to take control at any time, making Bartlett's description an oversimplification.
- Sources:
  - [Full Self-Driving (Supervised) | Tesla Support](https://www.tesla.com/support/fsd)
  - [Full Self-Driving (Supervised)](https://www.tesla.com/ownersmanual/modely/en_us/GUID-2CB60804-9CEA-4F4B-8B04-09B991368DC5.html)

### ch12-25: FALSE
- Speaker: Steven Bartlett
- Claim: In Austin, new Tesla vehicles operate with full autonomy because they have no steering wheel.
- TLDR: Tesla's fully autonomous (no safety driver) rides in Austin use standard Model Y vehicles, which do have steering wheels. The steering-wheel-free Cybercab is not yet commercially deployed there.
- Explanation: As of the podcast date (March 26, 2026), Tesla's unsupervised robotaxi service in Austin exclusively uses standard Model Y vehicles equipped with steering wheels. The Cybercab, which has no steering wheel or pedals, only rolled its first production unit off the Gigafactory Texas line in February 2026, with mass production scheduled for April 2026. No commercial Cybercab rides were operating in Austin at the time of recording.
- Sources:
  - [Tesla Launches Unsupervised Robotaxi Rides in Austin](https://www.notateslaapp.com/news/3527/tesla-launches-unsupervised-robotaxi-rides-in-austin)
  - [Tesla launches robotaxi rides in Austin with no human safety driver | TechCrunch](https://techcrunch.com/2026/01/22/tesla-launches-robotaxi-rides-in-austin-with-no-human-safety-driver/)
  - [Tesla rolls first steering wheel-less Cybercab unit off the line before solving autonomy | Electrek](https://electrek.co/2026/02/17/tesla-rolls-first-steering-wheel-less-cybercab-unit-off-the-line-before-solving-autonomy/)
  - [Tesla Cybercab - Wikipedia](https://en.wikipedia.org/wiki/Tesla_Cybercab)

### ch12-26: OUTDATED
- Speaker: Steven Bartlett
- Claim: The Tesla Model Y is the best-selling car in the world across all brands.
- TLDR: The Model Y was the world's best-selling car in 2023, but lost that title to the Toyota RAV4 in 2024 and likely slipped to third place in 2025.
- Explanation: The Tesla Model Y was indisputably the best-selling car globally in 2023 with ~1.22 million units sold. By 2024, it was in a statistical dead heat with the Toyota RAV4, losing by roughly 2,000 units according to JATO Dynamics. By the time this video was published (March 2026), full-year 2025 data strongly indicated the Model Y had dropped to third place behind both the RAV4 and the Toyota Corolla, with sales down roughly 13% year-over-year.
- Sources:
  - [Tesla Model Y secures position as world's best-selling car in 2023 - JATO](https://www.jato.com/resources/media-and-press-releases/tesla-model-y-worlds-best-selling-car-2023)
  - [Elon Musk claims Tesla Model Y is best-selling car in the world, but there are serious doubts | Electrek](https://electrek.co/2025/12/31/elon-musk-claims-tesla-model-y-is-best-selling-car-world-serious-doubts/)
  - [Tesla Model Y no longer the world's best selling car... with a possible asterisk | Electrek](https://electrek.co/2025/07/03/tesla-model-y-no-longer-the-worlds-best-selling-car-with-a-possible-asterisk/)