# Brett Adcock - Shawn Ryan’s First Interview with a Robot | SRS #292

> Fact-check by Held True | https://heldtrue.com - Fact-check and claim verification for YouTube videos.

- Channel: Shawn Ryan Show
- Duration: 2h57m9s
- Published: 2026-03-30
- Analyzed: 2026-03-31
- Views: 221,658
- Original video: https://www.youtube.com/watch?v=99pOdGEGu6s
- Video and analysis: https://heldtrue.com/video/99pOdGEGu6s

## Speakers

- Shawn Ryan
- Brett Adcock

## Claims (329 total)

### ch1-1: TRUE

- Speaker: Shawn Ryan
- Claim: Brett Adcock is the founder and CEO of Figure AI, which is building general-purpose humanoid robots for labor automation.
- TLDR: Brett Adcock is confirmed as the founder and CEO of Figure AI, which builds general-purpose humanoid robots for labor automation.
- Explanation: Multiple reliable sources, including Wikipedia, Time magazine, and Figure AI's own materials, confirm that Brett Adcock founded Figure AI in 2022 and serves as its CEO. The company's stated mission is to build general-purpose humanoid robots to address global labor shortages, consistent with the claim.
- Sources:
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [Brett Adcock: The 100 Most Influential People in AI 2024 | TIME](https://time.com/7012726/brett-adcock/)
  - [Bio | Brett Adcock Official](https://www.brettadcock.com/bio)

### ch1-2: INEXACT

- Speaker: Shawn Ryan
- Claim: Vettery, Brett Adcock's AI-driven talent marketplace, was acquired for approximately $100 million.
- TLDR: The acquisition price is most precisely reported as $110 million, not ~$100 million.
TechCrunch rounded it to $100M, but the figure cited in the video description and other sources is $110M.
- Explanation: Adecco did not officially disclose the acquisition price. TechCrunch reported it as 'a little over $100 million,' leading to the '$100M' shorthand, but Brett Adcock's own profile (including this video's description) states $110 million. 'Approximately $100 million' is therefore a slight understatement of the commonly cited figure.
- Sources:
  - [Adecco Group acquires recruiting startup Vettery for $100M | TechCrunch](https://techcrunch.com/2018/02/20/adecco-acquires-vettery/)
  - [THE ADECCO GROUP ANNOUNCES ACQUISITION OF VETTERY](https://www.adeccogroup.com/our-group/media/press-releases/2018-the-adecco-group-announces-acquisition-of-vettery)

### ch1-3: TRUE

- Speaker: Shawn Ryan
- Claim: Brett Adcock is a co-founder of Archer Aviation, which develops electric vertical takeoff and landing (eVTOL) aircraft.
- TLDR: Brett Adcock co-founded Archer Aviation in 2018 alongside Adam Goldstein, and the company does develop eVTOL aircraft.
- Explanation: Multiple reliable sources confirm that Adcock co-founded Archer Aviation in 2018 with Adam Goldstein. The company is publicly traded and develops eVTOL air taxis designed for urban mobility. Adcock later stepped down as co-CEO in 2022 before founding Figure AI.
- Sources:
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [Archer Aviation - Wikipedia](https://en.wikipedia.org/wiki/Archer_Aviation)
  - [Archer's Brett Adcock steps down as co-CEO - Vertical Mag](https://verticalmag.com/news/archer-brett-adcock-steps-down-co-ceo/)

### ch1-4: TRUE

- Speaker: Shawn Ryan
- Claim: Cover is an AI security company that uses NASA Jet Propulsion Laboratory technology to detect concealed weapons in K-12 schools.
- TLDR: Cover is confirmed to be an AI security company using NASA JPL-licensed technology to detect concealed weapons, with the prevention of K-12 school shootings as its stated mission.
- Explanation: TechCrunch and Brett Adcock's own posts confirm that Cover licenses imaging technology from NASA's Jet Propulsion Laboratory to passively detect concealed weapons. Adcock explicitly stated the mission is to prevent K-12 school shootings, making schools the primary deployment target.
- Sources:
  - [A new startup from Figure's founder is licensing NASA tech in a bid to curb school shootings](https://techcrunch.com/2024/06/21/a-new-startup-from-figures-founder-is-licensing-nasa-tech-in-a-bid-to-curb-school-shootings/)
  - [Brett Adcock on X](https://x.com/adcock_brett/status/1935738616564912142)
  - [We're here to prevent school shootings - Cover](https://www.cover.ai/culture)

### ch1-5: INEXACT

- Speaker: Shawn Ryan
- Claim: In late 2025, Brett Adcock launched Hark, a new AI lab that he self-funded with $100 million, focused on building human-centric AI.
- TLDR: Hark was founded in 2025 and self-funded with $100M, but its public launch was March 2026, and it is described as 'personal intelligence' rather than 'human-centric AI.'
- Explanation: Multiple sources confirm Brett Adcock self-funded Hark with $100 million. Wikipedia states it was founded in 2025, consistent with reports of 8 months in stealth before its public reveal in March 2026, making 'late 2025' a plausible but unconfirmed founding window. The company describes its mission as building 'personal intelligence,' not 'human-centric AI,' which is a paraphrase rather than Adcock's own terminology.
- Sources:
  - [Brett Adcock Launches Hark AI Lab with $100M Personal Investment for Integrated Personal Intelligence](https://mlq.ai/news/brett-adcock-launches-hark-ai-lab-with-100m-personal-investment-for-integrated-personal-intelligence/)
  - [Figure CEO Brett Adcock Unveils Hark, a Secretive AI Hardware Firm | RoboHorizon Robot Magazine](https://robohorizon.com/en-us/news/2026/03/figure-ceo-brett-adcock-unveils-hark-a-secretive-ai-hardware-firm/)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [Meet the former Apple designer building a new AI interface at Hark | TechCrunch](https://techcrunch.com/2026/03/24/meet-the-former-apple-designer-building-a-new-ai-interface-at-hark/)

### ch1-6: TRUE

- Speaker: Shawn Ryan
- Claim: Brett Adcock has raised billions in venture capital.
- TLDR: Figure AI has raised well over $1.7 billion across multiple funding rounds, confirming Adcock has raised billions in VC.
- Explanation: Figure AI's Series B (February 2024) raised $675 million, and its Series C (September 2025) exceeded $1 billion at a $39 billion valuation. Combined with earlier rounds, total venture capital raised is well into the billions.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Figure Exceeds $1B in Series C Funding at $39B Post-Money Valuation – Intel Capital](https://www.intelcapital.com/figure-exceeds-1b-in-series-c-funding-at-39b-post-money-valuation/)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)

### ch1-7: TRUE

- Speaker: Shawn Ryan
- Claim: Time named Brett Adcock one of the 100 most influential people in AI in 2024.
- TLDR: TIME did name Brett Adcock one of the 100 most influential people in AI in 2024.
- Explanation: TIME published its 100 Most Influential People in AI 2024 list and included a dedicated entry for Brett Adcock, recognizing his work at Figure on humanoid robotics powered by AI.
- Sources:
  - [Brett Adcock: The 100 Most Influential People in AI 2024 | TIME](https://time.com/7012726/brett-adcock/)
  - [The 100 Most Influential People in AI 2024 | TIME](https://time.com/collections/time100-ai-2024/)

### ch1-8: TRUE

- Speaker: Shawn Ryan
- Claim: Brett Adcock is married and has 3 children.
- TLDR: Brett Adcock is indeed married and has three children, residing in Palo Alto, California.
- Explanation: Wikipedia and multiple biographical sources confirm that Brett Adcock lives in Palo Alto with his wife and three children. No credible source contradicts this.
- Sources:
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [Bio | Brett Adcock Official](https://www.brettadcock.com/bio)

### ch5-1: TRUE

- Speaker: Brett Adcock
- Claim: After school, Brett Adcock moved to New York and started working on software startups.
- TLDR: Confirmed. After graduating from the University of Florida, Adcock moved to New York City and worked on several software ventures before co-founding Vettery.
- Explanation: Multiple sources confirm Adcock graduated from the University of Florida in 2008 and subsequently relocated to New York City, where he built various software projects (including Street of Walls and Working App) before founding Vettery around 2012. This matches the claim precisely.
- Sources:
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [Bio | Brett Adcock Official](https://www.brettadcock.com/bio)
  - [High-tech entrepreneur Brett Adcock on Figure, Archer, and early success](https://newatlas.com/remarkable-people/brett-adcock-history/)

### ch5-2: INEXACT

- Speaker: Brett Adcock
- Claim: Brett Adcock started a company called Vettery shortly after college.
- TLDR: Adcock did found Vettery after college, but the gap was roughly 4-5 years, not 'shortly after.'
- Explanation: Adcock graduated from the University of Florida in 2008 and founded Vettery in 2012-2013 (the idea originated around 2012 when he was 25, with formal founding in March 2013). A 4-5 year gap is not typically described as 'shortly after college,' though the core fact that Vettery followed his college years is accurate.
- Sources:
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [Bio | Brett Adcock Official](https://www.brettadcock.com/bio)

### ch5-3: INEXACT

- Speaker: Brett Adcock
- Claim: Vettery was an AI recruiting marketplace designed to match job seekers and employers at scale without human involvement.
- TLDR: Vettery was indeed an AI recruiting marketplace aimed at scale, but it did involve human talent scouts in the candidate vetting process.
- Explanation: Vettery used a 'proprietary scorecard' reviewed by human talent scouts who accepted fewer than 5% of applicants, and accepted candidates had phone conversations with those scouts before being listed on the platform. The stated goal was to eventually make matches via software without humans, but human involvement remained part of the actual workflow. The characterization of the design and purpose is broadly accurate, but 'without human involvement' overstates the degree of automation.
- Sources:
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [Brett Adcock, Figure AI, Archer Aviation, Vettery](https://www.founderoo.co/playbooks/brett-adcock-figure-ai-archer-aviation-vettery)
  - [High-tech entrepreneur Brett Adcock on Figure, Archer, and early success](https://newatlas.com/remarkable-people/brett-adcock-history/)

### ch5-4: INEXACT

- Speaker: Brett Adcock
- Claim: The headhunting industry is worth hundreds of billions of dollars a year.
- TLDR: The broader staffing/recruitment industry is worth hundreds of billions, but headhunting (executive search) specifically is only around $18-24 billion.
- Explanation: The global staffing and recruitment industry was valued at approximately $650-757 billion in 2022-2023, which fits 'hundreds of billions.' However, the headhunting or executive search segment specifically is far smaller, estimated at $17-24 billion. Adcock likely used 'headhunting' loosely to mean the whole recruiting market, where Vettery competed, but the term does not accurately describe that broader industry.
- Sources:
  - [Head Hunting Services market size was USD 17.6 billion in 2022!](https://www.cognitivemarketresearch.com/head-hunting-services-market-report)
  - [Staffing industry: revenue worldwide 2023 | Statista](https://www.statista.com/statistics/624116/staffing-industry-revenue-worldwide/)
  - [Staffing and Recruitment Market Worth $2,031.34 Billion, Globally, by 2031 - Exclusive Report by The Insight Partners](https://www.globenewswire.com/news-release/2024/08/20/2932953/0/en/Staffing-and-Recruitment-Market-Worth-2-031-34-Billion-Globally-by-2031-Exclusive-Report-by-The-Insight-Partners.html)

### ch5-5: TRUE

- Speaker: Brett Adcock
- Claim: Headhunters can charge $50,000 or more per placement.
- TLDR: Headhunter placement fees can easily reach $50,000 or more, especially for senior or executive roles.
- Explanation: Industry standard headhunter fees run 15–35% of a candidate's first-year base salary. For a $200,000 senior role at a 25% rate, that equals $50,000. Executive retained searches at 30–35% of high salaries routinely exceed that figure, confirming Adcock's claim.
- Sources:
  - [FAQ: What Is a Headhunter Fee and How Much Does It Cost? | Indeed.com](https://www.indeed.com/career-advice/finding-a-job/headhunters-fee)
  - [How Much Does A Headhunter Cost? - Aldebaran Recruiting](https://aldebaranrecruiting.com/how-much-does-a-headhunter-cost/)
  - [The True Cost of Hiring a Headhunter: A Comprehensive Breakdown - RightWorks Staffing](https://www.rightworksinc.com/cost-of-hiring-a-headhunter/)

### ch5-6: FALSE

- Speaker: Brett Adcock
- Claim: Vettery was founded in 2012.
- TLDR: Vettery was founded in March 2013, not 2012 as claimed.
- Explanation: Multiple sources, including Wikipedia and Brett Adcock's own biography pages, consistently state that Vettery was co-founded by Adcock and Adam Goldstein in March 2013 out of NYU's Varick Street Incubator. No credible source places the founding in 2012.
- Sources:
  - [Hired (company) - Wikipedia](https://en.wikipedia.org/wiki/Vettery)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [Bio | Brett Adcock Official](https://www.brettadcock.com/bio)

### ch5-7: TRUE

- Speaker: Brett Adcock
- Claim: Most of Vettery's revenue came from subscriptions from large companies such as big banks, startups, and tech companies looking for talent.
- TLDR: Adecco's own 2019 Annual Report explicitly describes Vettery as a 'subscription-based digital permanent recruitment model,' consistent with Adcock's claim.
- Explanation: The Adecco Group's 2019 Annual Report states: 'Vettery's innovative, subscription-based digital permanent recruitment model continued to gain traction, with placements up 80% in 2019 and strong momentum on the enterprise side.' Vettery's known client base included tech companies, startups, and large enterprises such as banks, matching the claim. Some third-party sources describe placement fees as primary, but the acquiring company's own report clearly characterizes the model as subscription-based.
- Sources:
  - [2019 ANNUAL REPORT - Adecco Group](https://www.adeccogroup.com/-/media/project/adecco-group/adeccogroup/pdf-files/reports-policies-and-ratings/adecco_group_2019ar_single_page_format.pdf)
  - [You're (Not) Hired: Vettery Acquires a Deeply Troubled Hired](https://www.recruitingnewsnetwork.com/posts/youre-not-hired-vettery-acquires-a-deeply-troubled-hired)
  - [What is Vettery's business model? | Vizologi](https://vizologi.com/business-strategy-canvas/vettery-business-model-canvas/)

### ch5-8: INEXACT

- Speaker: Brett Adcock
- Claim: Vettery operated in close to 20 cities globally, focusing primarily on tech talent.
- TLDR: At acquisition (2018), Vettery had 7 US cities and covered IT, sales, and finance equally. The ~20-city global footprint likely reflects post-acquisition Adecco expansion.
- Explanation: The Adecco press release at the time of acquisition states Vettery operated in 7 major US metro areas, not close to 20 globally. Post-acquisition sources reference roughly 18 global markets, suggesting the larger number reflects expansion under Adecco after Adcock sold the company. Additionally, Vettery's platform covered IT, sales, and finance as three equal verticals, making the characterization of 'primarily tech talent' an oversimplification.
- Sources:
  - [THE ADECCO GROUP ANNOUNCES ACQUISITION OF VETTERY](https://www.adeccogroup.com/our-group/media/press-releases/2018-the-adecco-group-announces-acquisition-of-vettery)
  - [Hired (company) - Wikipedia](https://en.wikipedia.org/wiki/Vettery)

### ch5-9: INEXACT

- Speaker: Brett Adcock
- Claim: Vettery was sold in 2017 or 2018, roughly 5 to 6 years after it was founded.
- TLDR: Vettery was founded in 2013 (not 2012) and sold in 2018, making it about 5 years, not 5-6.
- Explanation: Multiple sources confirm Vettery was founded in March 2013 and acquired by Adecco in February 2018 for $110 million. Adcock's claim of a 2012 founding is off by one year, and the sale was definitively 2018, not 2017.
The overall '5 to 6 years' framing is a slight overstatement since 2013 to 2018 is approximately 5 years.
- Sources:
  - [Adecco Group acquires recruiting startup Vettery for $100M | TechCrunch](https://techcrunch.com/2018/02/20/adecco-acquires-vettery/)
  - [THE ADECCO GROUP ANNOUNCES ACQUISITION OF VETTERY](https://www.adeccogroup.com/our-group/media/press-releases/2018-the-adecco-group-announces-acquisition-of-vettery)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)

### ch5-10: TRUE

- Speaker: Brett Adcock
- Claim: Brett Adcock went into debt in 2015 when Vettery was going through a difficult period.
- TLDR: Adcock has publicly confirmed going into significant personal debt around 2015 while Vettery struggled before its turnaround.
- Explanation: Multiple public sources corroborate his account: he took no salary for years, accumulated roughly $100,000 in credit card debt, and borrowed $50,000 to stay afloat. Vettery subsequently hockey-sticked in growth and was acquired by Adecco for $110 million in 2018.
- Sources:
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [High-tech entrepreneur Brett Adcock on Figure, Archer, and early success](https://newatlas.com/remarkable-people/brett-adcock-history/)
  - [Brett Adcock, Figure AI, Archer Aviation, Vettery](https://www.founderoo.co/playbooks/brett-adcock-figure-ai-archer-aviation-vettery)

### ch5-11: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Before the eventual acquisition, there was an earlier offer of $10 million for Vettery from a large tech company.
- TLDR: No public record exists of a prior $10 million acquisition offer for Vettery from a large tech company.
- Explanation: This is a first-person account of a private business negotiation. No news coverage, filings, or public sources corroborate a $10 million acquisition offer before the Adecco deal.
The $10 million figure that appears in search results refers to Vettery's total venture funding raised, not an acquisition offer.
- Sources:
  - [Adecco Group acquires recruiting startup Vettery for $100M | TechCrunch](https://techcrunch.com/2018/02/20/adecco-acquires-vettery/)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)

### ch5-12: INEXACT

- Speaker: Brett Adcock
- Claim: Vettery was acquired for $110 million.
- TLDR: Most sources report the acquisition price as approximately $100 million, not $110 million. The exact figure was never officially disclosed.
- Explanation: TechCrunch reported the price as 'a little over $100 million' based on a source with knowledge of the deal, and Crunchbase lists it as $100M. The Adecco Group did not publicly disclose the exact terms. While $110M is technically within the range of 'a little over $100M,' the widely cited figure is $100M, making the $110M claim imprecise.
- Sources:
  - [Adecco Group acquires recruiting startup Vettery for $100M | TechCrunch](https://techcrunch.com/2018/02/20/adecco-acquires-vettery/)
  - [THE ADECCO GROUP ANNOUNCES ACQUISITION OF VETTERY](https://www.adeccogroup.com/our-group/media/press-releases/2018-the-adecco-group-announces-acquisition-of-vettery)

### ch5-13: TRUE

- Speaker: Brett Adcock
- Claim: Brett Adcock sold Vettery to the Adecco Group, which he described as the world's largest recruiting company.
- TLDR: Vettery was indeed sold to The Adecco Group in 2018, and Adecco is widely recognized as the world's largest staffing and recruiting firm.
- Explanation: The Adecco Group announced the acquisition of Vettery in February 2018. Adecco is consistently described as the world's leading HR solutions and staffing company, consistent with Adcock's characterization.
The sale price reported by TechCrunch was 'just over $100 million,' while Adcock states $110 million in the podcast, but the identity of the acquirer and its status as the world's largest recruiting firm are accurate.
- Sources:
  - [Adecco Group acquires recruiting startup Vettery for $100M | TechCrunch](https://techcrunch.com/2018/02/20/adecco-acquires-vettery/)
  - [THE ADECCO GROUP ANNOUNCES ACQUISITION OF VETTERY](https://www.adeccogroup.com/our-group/media/press-releases/2018-the-adecco-group-announces-acquisition-of-vettery)

### ch5-14: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: At the time of the acquisition, Vettery was processing approximately 20,000 to 30,000 interview requests per week with no human involvement.
- TLDR: No public source confirms or contradicts the specific figure of 20,000-30,000 interview requests per week at Vettery.
- Explanation: This is a first-person account about Vettery's internal operational metrics at the time of the 2018 Adecco acquisition. Public sources confirm the acquisition and Vettery's AI-driven matching model, but no press release, news article, or official data discloses weekly interview request volume. The figure cannot be independently verified.
- Sources:
  - [Adecco Group acquires recruiting startup Vettery for $100M | TechCrunch](https://techcrunch.com/2018/02/20/adecco-acquires-vettery/)
  - [THE ADECCO GROUP ANNOUNCES ACQUISITION OF VETTERY](https://www.adeccogroup.com/our-group/media/press-releases/2018-the-adecco-group-announces-acquisition-of-vettery)

### ch5-15: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: It took approximately one year from the signing of the term sheet to the actual closing of the Vettery sale.
- TLDR: The announcement date of the Vettery acquisition is public, but the internal term sheet date is not. The one-year timeline is a private detail that cannot be confirmed or denied.
- Explanation: Public records confirm Adecco announced the Vettery acquisition on February 20, 2018, but no public source documents when the term sheet was signed. The gap between term sheet and close is an internal business detail shared only by the parties involved, making Adcock's one-year claim unverifiable through publicly available information.
- Sources:
  - [THE ADECCO GROUP ANNOUNCES ACQUISITION OF VETTERY](https://www.adeccogroup.com/our-group/media/press-releases/2018-the-adecco-group-announces-acquisition-of-vettery)
  - [Adecco Group acquires recruiting startup Vettery for $100M | TechCrunch](https://techcrunch.com/2018/02/20/adecco-acquires-vettery/)

### ch2-2: INEXACT

- Speaker: Brett Adcock
- Claim: At the time Figure was founded, the humanoid robots that existed were large hydraulic robots, all hand-coded to perform specific tasks.
- TLDR: Boston Dynamics' Atlas was the dominant humanoid and was indeed hydraulic and hand-coded in 2022, but other notable humanoids like Agility Robotics' Digit and Tesla Optimus were already electric.
- Explanation: Adcock's description accurately captures the most prominent platform of the era (Atlas, hydraulic until April 2024, with behaviors largely manually engineered rather than AI-learned). However, generalizing to 'all' humanoid robots being hydraulic is an overstatement. Agility Robotics' Digit, Tesla Optimus (prototype shown in late 2022), and Xiaomi CyberOne were all electric. The 'hand-coded' characterization broadly reflects the dominant paradigm of the time, but the 'all hydraulic' framing is an oversimplification.
- Sources:
  - [Boston Dynamics' Atlas humanoid robot goes electric | TechCrunch](https://techcrunch.com/2024/04/17/boston-dynamics-atlas-humanoid-robot-goes-electric/)
  - [Atlas (robot) - Wikipedia](https://en.wikipedia.org/wiki/Atlas_(robot))
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Top Examples of Humanoid Robots in Use Right Now | Built In](https://builtin.com/robotics/humanoid-robots)

### ch2-3: INEXACT

- Speaker: Brett Adcock
- Claim: At the time Figure was founded, no cheaper electric humanoid robot existed that used neural networks or an AI-first strategy.
- TLDR: The AI-first electric humanoid space was very nascent in 2022, but the claim overstates the gap. Tesla's Optimus and 1X Technologies were already pursuing exactly this vision.
- Explanation: When Figure was founded in 2022, no commercially deployable, production-ready AI-first electric humanoid existed at an accessible price point. However, Tesla had publicly announced Optimus in 2021 (an electric humanoid explicitly built on neural networks, the same AI backbone as FSD) and showed a working prototype in September 2022. 1X Technologies (rebranded from Halodi Robotics in 2022) also had EVE, an AI-learning electric robot deployed in the workforce, though it used wheels rather than legs. The absolute claim that 'none of that existed' is an overstatement, though the broader argument that the AI-first electric humanoid market had no ready product is largely accurate.
- Sources:
  - [Optimus (robot) - Wikipedia](https://en.wikipedia.org/wiki/Optimus_(robot))
  - [1X Technologies - Wikipedia](https://en.wikipedia.org/wiki/1X_Technologies)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [For Better or Worse, Tesla Bot Is Exactly What We Expected - IEEE Spectrum](https://spectrum.ieee.org/tesla-optimus-robot)

### ch2-5: TRUE

- Speaker: Brett Adcock
- Claim: Figure has a two-part safety strategy: intrinsic hardware safety to keep the robot physically safe around humans, and a separate layer of semantic safety measures being implemented.
- TLDR: Figure AI has publicly documented a dual-pillar safety strategy combining intrinsic hardware safety and semantic safety measures.
- Explanation: Multiple sources, including a TechCrunch article from January 2025 and industry coverage, confirm that Figure AI's safety approach operates on two levels: intrinsic hardware safety to keep the robot physically safe around humans, and semantic safety to ensure context-aware behavior in environments (e.g., avoiding hazards like candles or boiling water). Figure also announced a dedicated internal Center for the Advancement of Humanoid Safety, consistent with Adcock's description.
- Sources:
  - [Figure AI details plan to improve humanoid robot safety in the workplace | TechCrunch](https://techcrunch.com/2025/01/28/figure-ai-details-plan-to-improve-humanoid-robot-safety-in-the-workplace/)
  - [Figure AI Responds to Humanoid Robot Safety Concerns | Manufacturing News Desk | advancedmanufacturing.org](https://www.advancedmanufacturing.org/news-desk/putting-robot-safety-front-and-center/article_c8da04a8-df07-11ef-bd84-4b3909cc30da.html)

### ch2-6: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure has had many robots in Brett Adcock's home for testing over approximately the last year, with his children present at times.
- TLDR: This is a first-person anecdote about private events in Adcock's home.
He has made consistent public statements about it, but the claim cannot be independently verified.
- Explanation: Brett Adcock has publicly stated in multiple contexts that Figure robots have been tested in his home and around his children, which is consistent with this claim. However, the specifics (exact duration of approximately one year, number of robots, children's presence) describe private domestic events that no third party can independently confirm or deny.
- Sources:
  - [Figure's humanoid robot takes voice orders to help around the house | TechCrunch](https://techcrunch.com/2025/02/20/figures-humanoid-robot-takes-voice-orders-to-help-around-the-house/)
  - [Figure's humanoid robots will take on your household chores this year](https://newatlas.com/robotics/figures-humanoid-robots-household-chores-2025-helix-ai-brett-adcock/)

### ch2-7: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: The robots in Brett Adcock's home have become normal to his children, who want to touch, talk to, and interact with them.
- TLDR: This is a first-person anecdote about Brett Adcock's children's private reactions at home, which cannot be verified by third parties.
- Explanation: The claim describes a personal, domestic experience involving Adcock's children and robots tested in his home. No public record, interview, or independent source could confirm or deny the subjective reactions of his children in a private setting.

### ch2-8: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: A Figure robot was kept in Brett Adcock's house for roughly a couple of months, operating on and off, sometimes daily and sometimes every other day.
- TLDR: This is a first-person anecdote about private events in Adcock's home that cannot be independently verified by third parties.
- Explanation: Brett Adcock publicly confirmed testing Figure robots at his home, and multiple outlets reported on it.
However, the specific details of the duration (roughly two months) and frequency (daily or every other day) are private operational details that no external source can confirm or deny. The claim reflects his own personal account, making it inherently unverifiable.
- Sources:
  - [Figure's humanoid robots will take on your household chores this year](https://newatlas.com/robotics/figures-humanoid-robots-household-chores-2025-helix-ai-brett-adcock/)
  - [Figure will start 'alpha testing' its humanoid robot in the home in 2025 | TechCrunch](https://techcrunch.com/2025/02/27/figure-will-start-alpha-testing-its-humanoid-robot-in-the-home-in-2025/)

### ch2-9: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Brett Adcock's children gave their home test robot a name and formed an emotional attachment to it, preferring to keep the original worn robot over getting a new one.
- TLDR: This is a first-person anecdote about Adcock's children's private behavior at home. No independent evidence can confirm or deny it.
- Explanation: The claim describes a personal domestic experience involving Adcock's children, which is inherently private and unverifiable by third parties. Search results only reflect Adcock's own retelling of the story in this same interview, with no independent corroboration possible.
- Sources:
  - [Figure AI CEO Brett Adcock's Interview @ Shawn Ryan Show (Transcript)](https://singjupost.com/figure-ai-ceo-brett-adcocks-interview-shawn-ryan-show-transcript/)

### ch3-1: INEXACT

- Speaker: Shawn Ryan
- Claim: Polymarket shows an 18% chance that the AI bubble will burst by December 31st, 2026.
- TLDR: The market exists on Polymarket, but the 18% figure is slightly off. A tweet from unusual_whales cited 17% at one point, and current odds (as of recording date) are around 23%.
- Explanation: Polymarket does host an 'AI bubble burst by...?' market with December 31, 2026 as the leading outcome.
The figure of 18% does not precisely match available data: unusual_whales cited 17%, while the live market currently shows roughly 23% as of late March 2026. Since prediction market odds fluctuate in real time, 18% may have been accurate at the specific moment of recording, but no source confirms that exact figure.
- Sources:
  - [AI bubble burst by...? Predictions & Odds | Polymarket](https://polymarket.com/event/ai-bubble-burst-by)
  - [unusual_whales on X](https://x.com/unusual_whales/status/2032637233304326428)
  - [AI bubble burst by...? | Polymarket Event | FrenFlow](https://www.frenflow.com/polymarket/event/ai-bubble-burst-by)

### ch3-2: TRUE

- Speaker: Brett Adcock
- Claim: Figure currently has hundreds of robots.
- TLDR: Figure AI's shipment volumes in 2025 were reported at 150-500 units, consistent with Adcock's claim of 'hundreds' as of early 2026.
- Explanation: Industry data indicates Figure AI shipped between 150 and 500 humanoid robots in 2025, placing the company squarely in the 'hundreds' range at the time of this March 2026 interview. Figure also announced a BotQ manufacturing facility aiming for up to 12,000 robots per year, suggesting hundreds deployed is a reasonable current-state figure before scaling. No source contradicts the claim.
- Sources:
  - [2025: The Year of the Humanoid - Humanoid](https://thehumanoid.ai/2025-the-year-of-the-humanoid/)
  - [AI for industrial robotics, humanoid robots, and drones](https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2026/ai-for-robots-drones.html)
  - [News | Figure](https://www.figure.ai/news)

### ch3-4: TRUE

- Speaker: Brett Adcock
- Claim: AI now exists that can use computers like humans and can think.
- TLDR: AI systems that operate computers like humans (clicking, typing, navigating GUIs) are real and widely available as of 2025-2026.
- Explanation: Products like OpenAI's Operator (CUA model) and Anthropic's Claude Computer Use can control computers by viewing screens and performing mouse/keyboard actions, mirroring how humans interact with software. Reasoning-capable models (o1, o3, Gemini, etc.) also demonstrate multi-step planning, supporting the 'can think' characterization. The claim accurately describes the current state of AI, albeit at an early stage of reliability. - Sources: - [OpenAI unveils Operators: AI that can use computers, websites, just like humans - BusinessToday](https://www.businesstoday.in/technology/news/story/openai-unveils-operators-ai-that-can-use-computers-websites-just-like-humans-461892-2025-01-24) - [AI agents arrived in 2025 – here's what happened and the challenges ahead in 2026](https://theconversation.com/ai-agents-arrived-in-2025-heres-what-happened-and-the-challenges-ahead-in-2026-272325) - [2025-2026 AI Computer-Use Benchmarks & Top AI Agents Guide | Articles | o-mega](https://o-mega.ai/articles/the-2025-2026-guide-to-ai-computer-use-benchmarks-and-top-ai-agents) ### ch3-6: INEXACT - Speaker: Brett Adcock - Claim: Figure has built synthetic human intelligence that can use computers and machines. - TLDR: The computer-using AI capability Adcock describes is at Hark, his new AI lab, not Figure specifically. Figure focuses on humanoid robots for physical tasks. - Explanation: Adcock's own transcript clarifies that 'AI systems that can use computers like a human' are in his lab at Hark, a separate company he founded and revealed publicly on March 24, 2026, funded with $100M of personal capital. Figure AI builds humanoid robot bodies for physical machine operation (e.g., BMW manufacturing), while Hark builds the AI 'brains' including computer-use agents. The two companies have no announced plan to merge. Attributing both capabilities solely to 'Figure' conflates the two distinct ventures. 
- Sources: - [Figure AI Founder Bets on 'Family' of AI Devices with New Venture Hark](https://www.eweek.com/news/brett-adcock-hark-ai-devices/) - [Figure CEO Brett Adcock Unveils Hark, a Secretive AI Hardware Firm | RoboHorizon Robot Magazine](https://robohorizon.com/en-us/news/2026/03/figure-ceo-brett-adcock-unveils-hark-a-secretive-ai-hardware-firm/) - [Meet the former Apple designer building a new AI interface at Hark | TechCrunch](https://techcrunch.com/2026/03/24/meet-the-former-apple-designer-building-a-new-ai-interface-at-hark/) - [Brett Adcock Launches Hark AI Lab with $100M Personal Investment for Integrated Personal Intelligence](https://mlq.ai/news/brett-adcock-launches-hark-ai-lab-with-100m-personal-investment-for-integrated-personal-intelligence/) - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI) ### ch3-7: TRUE - Speaker: Brett Adcock - Claim: Figure has AI systems in their HARC lab that can use computers like a human. - TLDR: Brett Adcock's AI lab Hark (auto-transcribed as 'HARC') does have AI systems capable of using computers like a human. This is confirmed by multiple sources. - Explanation: Multiple sources confirm that Adcock launched a personal AI lab called Hark (self-funded with $100 million), co-located with Figure on the same campus. Hark is explicitly building AI systems that can autonomously use computers on a user's behalf. The word 'HARC' in the transcript is an auto-transcription error for 'Hark.' The core claim is accurate. 
- Sources: - [Meet the former Apple designer building a new AI interface at Hark | TechCrunch](https://techcrunch.com/2026/03/24/meet-the-former-apple-designer-building-a-new-ai-interface-at-hark/) - [Figure AI CEO Brett Adcock's Interview @ Shawn Ryan Show (Transcript)](https://singjupost.com/figure-ai-ceo-brett-adcocks-interview-shawn-ryan-show-transcript/) - [Brett Adcock Launches Hark AI Lab with $100M Personal Investment for Integrated Personal Intelligence](https://mlq.ai/news/brett-adcock-launches-hark-ai-lab-with-100m-personal-investment-for-integrated-personal-intelligence/) ### ch3-8: UNVERIFIABLE - Speaker: Brett Adcock - Claim: Figure's AI can complete tasks like ordering food with a single command by spinning up virtual computers. - TLDR: No public documentation confirms Figure AI has a digital agent product that orders food by spinning up virtual computers. - Explanation: The claim describes a private, in-person demonstration by Brett Adcock (ordering a chicken salad before the show) using what appears to be an internal AI agent capability. Figure AI's public-facing materials, including its Master Plan, focus exclusively on physical robotics with no mention of digital task automation or virtual computer agents. While the concept of AI agents using virtual computers to complete online tasks is well-established technology in the broader industry, there is no verifiable public evidence that Figure AI offers this as a documented product feature. - Sources: - [Master Plan | Figure](https://www.figure.ai/master-plan) - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI) - [News | Figure](https://www.figure.ai/news) ### ch4-1: INEXACT - Speaker: Brett Adcock - Claim: Brett Adcock grew up in a small town of about 700 people in central Illinois. - TLDR: Adcock did grow up in central Illinois on a farm, but his closest town, Moweaqua, has a population of roughly 1,700, not 700. 
- Explanation: Multiple sources confirm Brett Adcock was raised on a third-generation farm outside Moweaqua, Illinois, in central Illinois. Census data consistently puts Moweaqua's population at around 1,700-1,800, approximately 2.5 times the 700-person figure Adcock cited. The central Illinois and small-town character of his upbringing are accurate, but the population figure appears to be a notable underestimate. - Sources: - [Moweaqua, Illinois - Wikipedia](https://en.wikipedia.org/wiki/Moweaqua,_Illinois) - [Moweaqua, Illinois Population 2025](https://worldpopulationreview.com/us-cities/illinois/moweaqua) - [Brett Adcock: From Farm to Humanoid AI Frontier - the real how](https://newsletter.therealhowpod.com/p/brett-adcock-the-humanoid-revolution) ### ch4-2: INEXACT - Speaker: Shawn Ryan - Claim: Shawn Ryan grew up in Chillicothe, Missouri, a town of about 8,000 people. - TLDR: Shawn Ryan did grow up in Chillicothe, Missouri, but the population was closer to 9,000 than 8,000 during his childhood years. - Explanation: Multiple sources confirm Chillicothe, Missouri as Shawn Ryan's hometown. However, census records show Chillicothe's population was 9,089 in 1980 and 8,804 in 1990, suggesting it hovered around 9,000 during the years Ryan would have grown up there, not 8,000 as claimed. - Sources: - [Shawn Ryan (United States Navy) - Wikipedia](https://en.wikipedia.org/wiki/Shawn_Ryan_(United_States_Navy)) - [Chillicothe, Missouri - Wikipedia](https://en.wikipedia.org/wiki/Chillicothe,_Missouri) ### ch4-3: TRUE - Speaker: Brett Adcock - Claim: Brett Adcock's family were third-generation corn and soybean farmers. - TLDR: Confirmed. Brett Adcock grew up on a third-generation corn and soybean farm in central Illinois. - Explanation: Multiple sources, including his Wikipedia page and official bio, confirm that Adcock was raised on a family farm near Moweaqua, central Illinois, focused on corn and soybean production, and that his family had farmed for three generations. 
This matches his statement in the transcript precisely. - Sources: - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock) - [Bio | Brett Adcock Official](https://www.brettadcock.com/bio) - [How Brett Adcock Went From Corn Farmer to $2.6b AI Robotics Founder](https://www.mail.offlineconference.com/p/brett-adcock) ### ch4-4: UNVERIFIABLE - Speaker: Brett Adcock - Claim: Brett Adcock started internet-based ventures in high school and college, including dropshipping, retail electronics, and lead generation marketing. - TLDR: Adcock has consistently stated this publicly across multiple interviews, but these are personal anecdotes about private activities that cannot be independently verified. - Explanation: Multiple sources, including a New Atlas profile and a transcript of the same interview, confirm Adcock has described these early ventures (dropshipping, retail electronics, and lead generation marketing) in his own words. However, no independent third-party records of these specific businesses exist to confirm they occurred. Note that 'Legion marketing' in the transcript is likely a transcription error for 'lead generation marketing.' - Sources: - [High-tech entrepreneur Brett Adcock on Figure, Archer, and early success](https://newatlas.com/remarkable-people/brett-adcock-history/) - [Figure AI CEO Brett Adcock's Interview @ Shawn Ryan Show (Transcript)](https://singjupost.com/figure-ai-ceo-brett-adcocks-interview-shawn-ryan-show-transcript/) ### ch4-5: TRUE - Speaker: Brett Adcock - Claim: Brett Adcock's brother Colby runs an AI defense company called Scout. - TLDR: Colby Adcock is Brett Adcock's brother and the CEO and co-founder of Scout AI, an AI defense company. - Explanation: Multiple sources confirm Colby Adcock co-founded Scout AI in 2024, a defense tech startup building AI models for autonomous robotic systems used by the military. He is also identified as Brett Adcock's brother and a board member at Figure AI. 
- Sources: - [Scout AI – Company](https://scoutco.ai/company/) - [Scout AI Emerges from Stealth with $15M Seed Round, Lands 2 DoD Contracts, and Unveils Fury - Robotic Foundation Model for Defense](https://www.prnewswire.com/news-releases/scout-ai-emerges-from-stealth-with-15m-seed-round-lands-2-dod-contracts-and-unveils-fury--robotic-foundation-model-for-defense-302429902.html) - [Scout AI breaks cover with $15 million and plans for robo-armies](https://www.axios.com/2025/04/16/scout-ai-military-autonomous-fury) ### ch4-6: TRUE - Speaker: Brett Adcock - Claim: Scout builds autonomy and AI models for defense and the military. - TLDR: Scout AI, run by Brett Adcock's brother Colby, builds autonomy and AI models (including the 'Fury' foundation model) for U.S. defense and military applications. - Explanation: Scout AI emerged from stealth in April 2025 with $15M in seed funding and multiple DoD contracts. Its core product is Fury, a Vision-Language-Action foundation model designed to make defense robots into autonomous agents. Colby Adcock is confirmed as CEO and co-founder, and is Brett Adcock's brother. - Sources: - [Scout AI breaks cover with $15 million and plans for robo-armies](https://www.axios.com/2025/04/16/scout-ai-military-autonomous-fury) - [Scout AI Emerges from Stealth with $15M Seed Round, Lands 2 DoD Contracts, and Unveils Fury - Robotic Foundation Model for Defense](https://www.prnewswire.com/news-releases/scout-ai-emerges-from-stealth-with-15m-seed-round-lands-2-dod-contracts-and-unveils-fury--robotic-foundation-model-for-defense-302429902.html) ### ch4-7: UNVERIFIABLE - Speaker: Brett Adcock - Claim: Brett Adcock and his brother both live about a block away from each other. - TLDR: This is a personal residential detail that cannot be confirmed by third-party sources. Publicly available info confirms both brothers are based in California's tech scene. 
- Explanation: Brett's brother Colby Adcock runs Scout AI, a defense-focused AI startup based in Sunnyvale, California, which is publicly confirmed. However, the specific claim that the two brothers live approximately one block apart is a private personal detail with no third-party documentation available to verify or contradict it. - Sources: - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock) - [This Defense Company Made AI Agents That Blow Things Up – DNYUZ](https://dnyuz.com/2026/02/18/this-defense-company-made-ai-agents-that-blow-things-up/) ### ch4-8: TRUE - Speaker: Brett Adcock - Claim: Brett Adcock and his brother attended the same college. - TLDR: Both Brett and his brother Colby Adcock attended the University of Florida's Warrington College of Business. - Explanation: Brett Adcock earned a BS in Business Administration from UF Warrington, and Crunchbase and RocketReach profile data show Colby Adcock holds a BS in Finance from the same institution. The claim is corroborated by multiple independent sources. - Sources: - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock) - [Colby Adcock - CEO & Co‑Founder @ Scout AI - Crunchbase Person Profile](https://www.crunchbase.com/person/colby-adcock) - [AI is up and walking - UF Warrington College of Business](https://news.warrington.ufl.edu/alumni-friends/ai-is-up-and-walking/) ### ch4-9: UNVERIFIABLE - Speaker: Brett Adcock - Claim: Brett Adcock and his brother spent about 15 years together in New York. - TLDR: This is a personal anecdote about Brett Adcock's private life that cannot be independently verified. - Explanation: No public biographical source independently confirms how many years Brett Adcock and his brother spent together in New York. The only source referencing the "15 years" figure is the Shawn Ryan Show interview itself, making third-party verification impossible for this private detail. 
- Sources: - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock) - [Bio | Brett Adcock Official](https://www.brettadcock.com/bio) ### ch4-10: INEXACT - Speaker: Brett Adcock - Claim: At Archer Aviation, Brett Adcock's team was building 6,000-pound electric aircraft. - TLDR: Archer's Midnight aircraft has a max takeoff weight of 6,500 lbs, not 6,000 lbs. The figure is close but not precise. - Explanation: Archer Aviation's production aircraft, the Midnight, has a max takeoff weight of 6,500 pounds. Adcock's figure of 6,000 pounds is in the right ballpark but understates the actual spec by 500 lbs. The technology demonstrator (Maker) weighed around 3,324 lbs, so the claim clearly refers to the Midnight. - Sources: - [Archer Aviation Unveils its Production Aircraft, Midnight](https://investors.archer.com/news/news-details/2022/Archer-Unveils-its-Production-Aircraft-Midnight/default.aspx) - [Archer Midnight - Complete Performance Data](https://www.futureflight.aero/aircraft-program/midnight) ### ch4-11: UNVERIFIABLE - Speaker: Brett Adcock - Claim: Brett Adcock worked in internet and software for about 10 years. - TLDR: This is a personal self-assessment of Adcock's own career length in software. His public timeline shows roughly 6 years at Vettery (2012-2018), plus earlier web projects from his teens, making an exact count of '10 years' hard to confirm or deny. - Explanation: Adcock's documented career includes founding Vettery in 2012 (acquired 2018, ~6 years), preceded by investment banking and hedge fund roles (2008-2012), with early web company work reportedly starting around age 16 (~2002). Adding his teenage web projects (from roughly 2002) to the Vettery years (2012-2018), his cumulative time in internet and software could plausibly total about 10 years, though it was not one continuous stretch.
- Sources: - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock) - [High-tech entrepreneur Brett Adcock on Figure, Archer, and early success](https://newatlas.com/remarkable-people/brett-adcock-history/) ### ch4-12: TRUE - Speaker: Brett Adcock - Claim: Brett Adcock attended the University of Florida. - TLDR: Brett Adcock did attend the University of Florida, graduating from the Warrington College of Business. - Explanation: Multiple sources, including a UF Warrington College of Business feature and Wikipedia, confirm Adcock attended the University of Florida from 2004 to 2008, earning a BS in Business Administration with a focus on finance and real estate. - Sources: - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock) - [AI is up and walking - UF Warrington College of Business](https://news.warrington.ufl.edu/alumni-friends/ai-is-up-and-walking/) ### ch8-1: INEXACT - Speaker: Brett Adcock - Claim: About half the world currently lives in cities. - TLDR: 45% of the global population lives in cities, and 58% in urban areas (cities and towns combined). "About half" is a loose approximation. - Explanation: The UN World Urbanization Prospects 2025 report finds that 45% of the world's 8.2 billion people live in cities specifically, while 58% live in urban areas when towns are included. "About half" is a reasonable ballpark for either figure. The 70% by 2050 projection in the clip is also slightly high compared to the UN's figure of roughly 67% for urban areas by mid-century. 
- Sources: - [Press Release | Cities are home to 45 per cent of the global population, with megacities continuing to grow, UN report finds - United Nations Sustainable Development](https://www.un.org/sustainabledevelopment/blog/2025/11/press-release-wup2025/) - [World Urbanization Prospects 2025 | Population Division](https://www.un.org/development/desa/pd/world-urbanization-prospects-2025) ### ch8-2: TRUE - Speaker: Brett Adcock - Claim: Airspace can accommodate orders of magnitude more traffic than roads because it can be stacked vertically at different altitudes and also used laterally. - TLDR: The claim is physically sound and echoed by NASA and academic UAM research. Three-dimensional airspace offers dramatically greater routing capacity than two-dimensional road networks. - Explanation: NASA UAM integration research explicitly states that mature urban air mobility operations would require handling 'orders of magnitude' more vehicles than current airspace systems, and academic literature consistently frames 3D airspace as a vastly larger capacity resource than 2D road surfaces. The ability to stack independent altitude lanes vertically while also spreading laterally is the core geometric argument, which is well-established in the UAM field. - Sources: - [Urban Air Mobility Airspace Integration Concepts and Considerations](https://ntrs.nasa.gov/api/citations/20180005218/downloads/20180005218.pdf) - [Urban aerial mobility: Reshaping the future of urban transportation - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC10011488/) - [Urban Air Mobility: Deconstructing the Next Revolution in Urban Transportation - Feasibility, Capacity and Productivity](https://escholarship.org/uc/item/2w60q8tb) ### ch8-3: TRUE - Speaker: Brett Adcock - Claim: eVTOL aircraft will not be able to take off and land from residential homes due to acoustic noise levels and the need for charging, passenger, cleaning, and check-in infrastructure. 
- TLDR: Industry consensus and regulatory planning confirm eVTOLs will operate from dedicated vertiports, not private homes, citing both noise and infrastructure needs. - Explanation: Multiple industry and regulatory sources confirm that eVTOL air taxis are planned around dedicated vertiport infrastructure including charging, passenger handling, and maintenance facilities. Noise during takeoff and landing ranks second only to safety as a community-acceptance concern, and land-use planning guidelines explicitly call for buffers between vertiports and residential areas. Adcock's reasoning closely matches the dominant industry and regulatory model. - Sources: - [EASA Expands eVTOL Noise Standards, Limits | Aviation Week Network](https://aviationweek.com/aerospace/advanced-air-mobility/easa-expands-evtol-noise-standards-limits) - [Lost in the Noise](https://evtol.news/news/lost-in-the-noise) - [Vertiports Land Use Compatibility Supplement](https://wsdot.wa.gov/sites/default/files/2025-05/Vertiports-Land-Use-Compatibility-Supplements.pdf) - [Engineering Opportunities and Challenges for eVTOLs | Alton Aviation Consultancy](https://altonaviation.com/alton_insights/engineering-opportunities-and-challenges-for-evtols/) ### ch8-6: TRUE - Speaker: Brett Adcock - Claim: eVTOL aircraft can fly at 150 miles per hour with no traffic, no stop signs, no construction, and no obstacles, traveling point to point. - TLDR: Archer's Midnight eVTOL is designed for a top speed of 150 mph, and the point-to-point, obstacle-free nature of air travel is accurate. - Explanation: Archer Aviation's Midnight eVTOL has a designed cruise speed of 150 mph (241 km/h) over a range on the order of 60-100 miles. It has reached and briefly exceeded 150 mph in test flights. Flying in controlled airspace at altitude does eliminate ground-level obstacles like traffic, stop signs, and construction, making Adcock's description accurate.
- Sources: - [Archer Aviation Midnight (production aircraft)](https://evtol.news/archer/) - [Archer's Electric Aircraft Reaches 126 mph in 55-Mile Flight](https://www.eplaneai.com/news/archers-electric-aircraft-reaches-126-mph-in-55-mile-flight) - [Archer Sets New eVTOL Record | Alliance for Aviation Across America](https://aviationacrossamerica.org/news/2025/08/21/archer-sets-new-evtol-record/) ### ch8-7: INEXACT - Speaker: Brett Adcock - Claim: Flying point to point in the air removes 10 to 20 percent of travel distance compared to road travel. - TLDR: Flying point-to-point does save distance over road travel, but the typical savings cited in industry analyses are closer to 15-30%, making Adcock's 10-20% figure a modest underestimate. - Explanation: A widely cited example (Manhattan to JFK) shows roughly a 26% distance reduction flying vs. driving, and broader analyses suggest 15-30% savings on average. Adcock's 10-20% range overlaps with this at the upper end but understates the savings on many routes. The core assertion (direct air routes are meaningfully shorter than road routes) is correct, just the specific range is on the conservative side. - Sources: - [Advanced Air Mobility: What Electric Air Taxis Need to Take Off | Bain & Company](https://www.bain.com/insights/advanced-air-mobility-what-electric-air-taxis-need-to-take-off/) - [eVTOL - Wikipedia](https://en.wikipedia.org/wiki/EVTOL) - [What Are eVTOLs? The Definitive 2026 Guide to Electric Air Taxis | eVTOL.Travel](https://evtol.travel/blogs/what-are-evtols-are-they-the-future) ### ch8-8: TRUE - Speaker: Brett Adcock - Claim: Electrification reduces both the cost and the safety burden of eVTOL aircraft. - TLDR: This is a well-supported claim in aviation engineering. Electric propulsion provides motor redundancy and simpler mechanics, reducing both failure risk and operating costs compared to conventional helicopters. 
- Explanation: Multiple industry and technical sources confirm that eVTOL electrification reduces safety burden through distributed motor redundancy (losing one motor still allows safe flight) and by eliminating complex single-point-of-failure components common in helicopters. On cost, electric motors have fewer moving parts requiring less maintenance, and electricity is far cheaper per mile than jet-A fuel. These are established engineering advantages widely cited by aviation experts and industry analysts. - Sources: - [Here's why eVTOLs will be much safer than helicopters - Mobiwisy](https://mobiwisy.com/innovation-in-english/evtol-article/here-is-why-evtols-will-be-much-safer-than-helicopters) - [eVTOL vs Helicopter: The Future of Air Transport - EFLYKE](https://www.eflyke.com/en/eflyke-vs-helicopter/) - [eVTOL 101: Benefits of Electric Aircrafts](https://www.carpenterelectrification.com/blog/benefits-electric-aircrafts) ### ch8-9: INEXACT - Speaker: Brett Adcock - Claim: A conventional helicopter can have 100 to 200 safety-critical components, and if any one of them fails, the helicopter can go down. - TLDR: Helicopters do have many single-point-of-failure components, but industry sources cite 'hundreds' rather than a cap of 100-200. The broader contrast with eVTOLs is accurate. - Explanation: Archer Aviation's own published comparison states 'most helicopters have hundreds of single points of catastrophic failure,' suggesting the 100-200 figure Adcock cites may be an underestimate. The core claim, that conventional helicopters have numerous safety-critical components whose failure can be catastrophic, and that electric aircraft eliminate these single points of failure through distributed redundancy, is well-supported. However, the precise range of '100 to 200' is not confirmed by any technical source and likely understates the actual count. - Sources: - [eVTOL Aircraft vs. 
Helicopters - Archer Aviation](https://news.archer.com/evtol-aircraft-vs-helicopters) - [Safety In UAM, Fail-Safe vs Fail-Operational - Embention](https://www.embention.com/news/safety-in-uam-fail-safe-vs-fail-operational/) - [Critical parts awareness and training | UK Civil Aviation Authority](https://www.caa.co.uk/commercial-industry/aircraft/airworthiness/continuing-airworthiness/critical-parts-awareness-and-training/) ### ch8-10: TRUE - Speaker: Brett Adcock - Claim: An electric aircraft can lose a motor or a battery pack on board and still fly safely. - TLDR: eVTOL aircraft are specifically designed with distributed motor and battery redundancy so that losing a single motor or battery pack does not prevent safe flight. - Explanation: Multiple independent electric motors (typically 6-18) allow the flight computer to redistribute thrust in milliseconds if one fails, a core safety advantage over single-rotor helicopters. Independent battery packs similarly provide power continuity if one module fails. This redundancy architecture is a well-documented design principle across eVTOL manufacturers including Archer Aviation and is reflected in FAA and EASA certification requirements. - Sources: - [eVTOL: making the electric dream a safe one - Aerospace America](https://aerospaceamerica.aiaa.org/features/evtol-making-the-electric-dream-a-safe-one/) - [Air Taxis Are Safe—According to the Manufacturers - IEEE Spectrum](https://spectrum.ieee.org/air-taxis-are-safe-according-to-the-manufacturers) - [Are Air Taxis Safe? Everything You Need to Know - eVTOL.Travel](https://evtol.travel/air-taxi-safety) ### ch8-11: TRUE - Speaker: Brett Adcock - Claim: Archer's test pilots are career professionals, many recruited from the military and from large aerospace organizations. - TLDR: Archer's test pilots do have career military and aerospace backgrounds. Chief Test Pilot Jeff Greenwood is a U.S. Marine Corps veteran and former Bell Textron test pilot. 
- Explanation: Publicly available information confirms Archer Aviation's chief test pilot, Jeff Greenwood, served in the U.S. Marine Corps and previously worked at Bell Textron, a major aerospace firm. This directly supports the claim that Archer recruits career test pilots from military and large aerospace organizations. - Sources: - [Archer Showcases Piloted Midnight Flight As It Advances To Next Phase Of Flight Test Program](https://www.businesswire.com/news/home/20250602898754/en/Archer-Showcases-Piloted-Midnight-Flight-As-It-Advances-To-Next-Phase-Of-Flight-Test-Program) - [Archer's Midnight eVTOL makes first piloted flight | Military Aerospace](https://www.militaryaerospace.com/commercial-aerospace/article/55294490/archers-midnight-evtol-makes-first-piloted-flight) ### ch8-12: TRUE - Speaker: Brett Adcock - Claim: Archer is currently in the FAA certification process for its eVTOL aircraft. - TLDR: Archer Aviation is actively pursuing FAA type certification for its Midnight eVTOL aircraft, as confirmed by multiple sources. - Explanation: Archer has achieved 100% FAA acceptance of its Means of Compliance, received a Part 141 Pilot Training Certificate, and secured three of four certificates needed to operate an air taxi service. The company is working toward Type Inspection Authorization and targeting full FAA type certification in 2026. 
- Sources: - [Archer Aviation's FAA Certification Progress -- What Investors Need to Know Now | The Motley Fool](https://www.fool.com/investing/2025/11/26/archer-aviations-faa-certification-progress/) - [The Diverging Paths to FAA Aircraft Certification: Joby, Archer, and Electra in the Race to Define AAM | Commercial UAV News](https://www.commercialuavnews.com/faa-regulation-certification-aam-joby-archer-electra) - [Archer clears key FAA eVTOL hurdle, targets 2026 passenger air taxis](https://www.stocktitan.net/news/ACHR/archer-announces-fourth-quarter-and-full-year-2025-results-us-and-lmlhz27x4wge.html) ### ch8-13: TRUE - Speaker: Brett Adcock - Claim: Archer has a strong balance sheet with cash. - TLDR: Archer Aviation ended fiscal year 2025 with approximately $2 billion in cash and short-term investments, described as the strongest capital base in the eVTOL sector. - Explanation: Archer raised $650 million in equity in 2025, boosting total liquidity to around $2 billion. This is publicly confirmed via the company's Q4 2025 earnings results and financial filings, supporting the claim of a strong cash position. - Sources: - [Archer Aviation Q4, fiscal year 2025 results confirm on-track U.S., UAE Midnight pilot programs for 2026 | CompositesWorld](https://www.compositesworld.com/news/archer-aviation-q4-fiscal-year-2025-results-confirm-on-track-us-uae-midnight-pilot-programs-for-2026) - [Archer Aviation (ACHR) Financials 2025 - Income Statement and Balance Sheet $ACHR](https://www.marketbeat.com/stocks/NYSE/ACHR/financials/) ### ch6-1: TRUE - Speaker: Brett Adcock - Claim: Brett Adcock founded Archer Aviation in 2018, shortly after selling Vettery. - TLDR: Archer Aviation was indeed founded in 2018, shortly after the Vettery acquisition closed that same year. - Explanation: Vettery was acquired by The Adecco Group for $110 million in 2018. Archer Aviation was officially co-founded by Brett Adcock and Adam Goldstein on October 16, 2018, using capital from that exit. 
Both events occurred in 2018, confirming the claim. - Sources: - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock) - [High-tech entrepreneur Brett Adcock on Figure, Archer, and early success](https://newatlas.com/remarkable-people/brett-adcock-history/) ### ch6-2: INEXACT - Speaker: Brett Adcock - Claim: Half the world lives in cities today. - TLDR: The true figure is either ~45% (cities specifically) or ~58% (all urban areas), so 'half' is a rough approximation. - Explanation: According to the UN World Urbanization Prospects 2025, exactly 45% of the global population lives in cities under the strict UN definition, while the broader urban population (cities plus towns) reaches about 58%. Neither figure is precisely 'half,' though the claim is directionally reasonable as a simplified talking point. - Sources: - [Press Release | Cities are home to 45 per cent of the global population, with megacities continuing to grow, UN report finds - United Nations Sustainable Development](https://www.un.org/sustainabledevelopment/blog/2025/11/press-release-wup2025/) - [Nearly half of global population live in cities: UN report - Xinhua](https://english.news.cn/20251119/10385d5960714a74925e8dfa4524ae15/c.html) - [Urban population (% of total population) | Data](https://data.worldbank.org/indicator/SP.URB.TOTL.IN.ZS) ### ch6-3: INEXACT - Speaker: Brett Adcock - Claim: By the middle of the century, about 70% of the world will live in cities. - TLDR: The UN projects ~68% of the world will live in cities by 2050, not 70%. The core claim is directionally correct but the figure is slightly overstated. - Explanation: The UN's widely cited World Urbanization Prospects report projects 68% of the global population will live in urban areas by 2050. Some more recent estimates do cite figures closer to 70%, so the claim is a modest overstatement rather than a clear error. The middle-of-century timeframe and the general direction of the trend are accurate. 
- Sources: - [68% of the world population projected to live in urban areas by 2050, says UN | UN DESA](https://www.un.org/development/desa/en/news/population/2018-revision-of-world-urbanization-prospects.html) - [Around 2.5 billion more people will be living in cities by 2050, projects new UN report | UN DESA](https://www.un.org/development/desa/en/news/population/2018-world-urbanization-prospects.html) ### ch6-4: TRUE - Speaker: Brett Adcock - Claim: Traveling 20 to 30 miles in major cities typically takes about an hour. - TLDR: In the most gridlocked major cities, a 20-30 mile trip during peak hours does take roughly an hour, consistent with available traffic data. - Explanation: Data from multiple sources confirms that in cities like New York (22 mph rush-hour speeds), Miami, and Los Angeles (approx. 3 min/mile during evening rush), a 20-mile trip takes roughly 55-65 minutes during peak congestion. The claim is a reasonable generalization about urban gridlock in major cities, which is the context in which it was made. - Sources: - [Cities With the Worst Commutes](https://www.moneygeek.com/living/driving/cities-with-the-worst-commutes/) - [Drivemode Data Report: Where And When Commuting Takes The Longest | Drivemode](https://www.drivemode.com/blog/engineering/drivemode-data-report-commuting-durations/) - [Time to Commute - U.S. city commuting maps | Geotab](https://www.geotab.com/time-to-commute/) ### ch6-5: DISPUTED - Speaker: Brett Adcock - Claim: Fully electric aircraft can be made less expensive than conventional aircraft. - TLDR: Electric aircraft can have lower operating costs, but current eVTOLs are often more expensive to purchase than comparable conventional aircraft. - Explanation: Industry data shows electricity is roughly 40x cheaper than jet fuel and electric motors require less maintenance, supporting a lower operating cost argument. 
However, purchase prices for current eVTOLs (e.g., Beta Technologies at $3.5-4M) exceed comparable conventional planes ($2.5-3M). Industry analysts are also skeptical about whether mass production will make them cheaper overall, and battery replacement costs add significant expense. - Sources: - [Comparing the Cost of EVTOLs and Conventional Helicopters – Flying Cars Market](https://flyingcarsmarket.com/comparing-the-cost-of-evtols-and-conventional-helicopters/) - [Costs of running eVTOL: Is it really a sustainable business model ? – Flying Cars Market](https://flyingcarsmarket.com/costs-of-running-evtol-is-it-really-a-sustainable-business-model/) - [How Much Will It Cost to Fly on eVTOL Air Taxis?](https://www.flyingmag.com/evtol-air-taxi-passenger-prices/) ### ch6-6: INEXACT - Speaker: Brett Adcock - Claim: Electric aircraft have fewer parts than conventional aircraft, which improves safety. - TLDR: Electric aircraft do have fewer mechanical/moving parts than conventional aircraft, a widely cited safety benefit, but eVTOLs also add more motors and rotors for redundancy. - Explanation: Industry sources consistently confirm that electric propulsion eliminates complex mechanical components (gearboxes, combustion systems, magnetos) resulting in fewer moving parts and reduced maintenance failure points, which is recognized as a safety advantage. However, eVTOLs often compensate with more rotors and motors (not fewer) to build in propulsion redundancy. The claim's core logic is sound but slightly oversimplified. - Sources: - [What to Know About Electric Aircrafts and eVTOL](https://www.carpenterelectrification.com/blog/electric-aircrafts-evtol) - [What Are eVTOLs? Are They the Future of Aviation? 
| Built In](https://builtin.com/articles/evtol-aircraft) - [Electric Aircraft and eVTOL: Preparing Technicians for the Future | Sprott Learning: Aeronautics](https://sprottlearning.com/air/electric-aircraft-and-evtol-preparing-technicians-for-the-future/) ### ch6-7: TRUE - Speaker: Brett Adcock - Claim: An eVTOL could cover a distance that takes an hour by car in LA, SF, or New York in about 10 minutes. - TLDR: The "1-hour car ride reduced to ~10 minutes by eVTOL" figure is a well-established industry benchmark, including for Archer Aviation's own cited routes. - Explanation: eVTOL companies and urban air mobility researchers consistently cite this ratio for congested urban routes. Archer Aviation specifically uses the Chicago O'Hare to downtown example (1-hour drive to 10-minute flight), and similar figures apply to LA, SF, and New York corridors. At typical eVTOL cruise speeds of 100-150 mph, a 20-30 mile trip that takes an hour by car in traffic would take roughly 10-12 minutes by air, making the claim physically plausible and consistent with industry projections. - Sources: - [How Urban Air Mobility is reshaping the future of air travel](https://www.aerotime.aero/articles/how-urban-air-mobility-is-reshaping-the-future-of-air-travel) - [eVTOL - Wikipedia](https://en.wikipedia.org/wiki/EVTOL) ### ch6-8: INEXACT - Speaker: Shawn Ryan - Claim: Vettery was sold for $110 million. - TLDR: The sale price was officially undisclosed. TechCrunch reported 'a little over $100 million,' while $110M appears only in Adcock's own biography. - Explanation: The Adecco Group never publicly confirmed the acquisition price of Vettery. TechCrunch, citing a source with knowledge of the deal, reported it as 'a little over $100 million.' The $110 million figure is stated in Brett Adcock's self-authored bio and the podcast description but is not corroborated by independent reporting. 
- Sources: - [Adecco Group acquires recruiting startup Vettery for $100M | TechCrunch](https://techcrunch.com/2018/02/20/adecco-acquires-vettery/) - [THE ADECCO GROUP ANNOUNCES ACQUISITION OF VETTERY](https://www.adeccogroup.com/our-group/media/press-releases/2018-the-adecco-group-announces-acquisition-of-vettery) ### ch6-9: INEXACT - Speaker: Brett Adcock - Claim: Brett Adcock studied industrial system engineering at the University of Florida. - TLDR: Adcock did start in industrial engineering at UF, but he switched majors and graduated with a Bachelor of Science in Business Administration in 2008. - Explanation: Wikipedia and UF alumni sources confirm he initially enrolled in industrial engineering at the University of Florida. However, he transferred to the Warrington College of Business and earned a BSBA in 2008. The transcript more precisely says he 'started in' industrial system engineering, but the claim implies he studied it as his completed field of study, which is inaccurate. - Sources: - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock) - [AI is up and walking - UF Warrington College of Business](https://news.warrington.ufl.edu/alumni-friends/ai-is-up-and-walking/) ### ch6-10: UNVERIFIABLE - Speaker: Brett Adcock - Claim: Some of the eVTOL design courses Adcock attended were sponsored by NASA or colleges. - TLDR: NASA and college-sponsored eVTOL and rotorcraft short courses exist, but Adcock's personal attendance cannot be independently confirmed. - Explanation: Organizations like the Vertical Flight Society have run short courses on eVTOL, rotorcraft, and electric propulsion featuring NASA Ames and university content (e.g., University of Maryland's Alfred Gessow Rotorcraft Center). This confirms that such courses exist and are sponsored by NASA or colleges. However, Adcock's claim about attending them is a first-person anecdote about private activity with no public record to verify. 
- Sources: - [VFS - 2024 eVTOL Technology Short Course](https://vtol.org/events/2024-evtol-technology-short-course) - [VFS - 2021 eVTOL Short Course](https://vtol.org/events/2021-evtol-short-course) - [Courses – Electric VTOL News™](https://evtol.news/courses/) ### ch6-11: TRUE - Speaker: Brett Adcock - Claim: Helicopter rotors are large because maximizing rotor disc area reduces power requirements and enables efficient lift. - TLDR: This is a well-established aerodynamic principle called disk loading. Larger rotor disc area means lower disk loading, which directly reduces the power required to generate lift. - Explanation: Momentum (actuator disk) theory confirms that for a given thrust, moving a larger volume of air slowly is more energy-efficient than moving a small volume quickly. Helicopters therefore use large rotors to achieve low disk loading (typically 5-10 lb/ft²), maximizing power loading and hover efficiency. This is textbook rotorcraft aerodynamics. - Sources: - [Disk loading - Wikipedia](https://en.wikipedia.org/wiki/Disk_loading) - [Disc Loading and Hover Efficiency — Krossblade Aerospace Systems](https://www.krossblade.com/disc-loading-and-hover-efficiency) ### ch6-12: INEXACT - Speaker: Brett Adcock - Claim: A battery pack has approximately 1/30th the energy of kerosene, making power consumption the dominant design constraint for electric aircraft. - TLDR: The energy density gap between batteries and kerosene is real and significant, but 1/30th is optimistic. At the battery pack level, the ratio is closer to 1/40 to 1/50. - Explanation: Kerosene (jet fuel) has a specific energy of roughly 12,000 Wh/kg (~43 MJ/kg), while current Li-ion battery packs used in eVTOL aircraft achieve roughly 200-270 Wh/kg, yielding a ratio of approximately 1/44 to 1/60. 
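The battery-versus-kerosene ratio above can be double-checked with the figures quoted in this entry (a quick sketch using only those numbers, not independent measurements):

```python
# Rough check of battery-vs-kerosene specific energy, using only the
# figures quoted in this entry (not independent measurements).
KEROSENE_MJ_PER_KG = 43.0
kerosene_wh_per_kg = KEROSENE_MJ_PER_KG * 1e6 / 3600  # ~11,944 Wh/kg

for pack_wh_per_kg in (200, 270):  # current eVTOL pack-level range
    ratio = kerosene_wh_per_kg / pack_wh_per_kg
    print(f"{pack_wh_per_kg} Wh/kg pack -> 1/{ratio:.0f} the energy of kerosene")
```

The quoted 1/44 to 1/60 range falls straight out of the division, confirming that 1/30 requires more optimistic battery figures.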
The 1/30 figure corresponds to optimistic cell-level comparisons using advanced cells (~400 Wh/kg) or when partially accounting for the higher efficiency of electric drivetrains (2-3x better than combustion). The core assertion that energy density is the dominant design constraint for electric aircraft is well-supported by the literature. - Sources: - [Energy density - Wikipedia](https://en.wikipedia.org/wiki/Energy_density) - [Are Batteries Truly Enough to Power eVTOLs? - Aviation Today](https://interactive.aviationtoday.com/avionicsmagazine/february-march-2021/are-batteries-truly-enough-to-power-evtols/) - [Performance Metrics Required of Next-Generation Batteries to Electrify Vertical Takeoff and Landing (VTOL) Aircraft | ACS Energy Letters](https://pubs.acs.org/doi/10.1021/acsenergylett.8b02195) - [The Viability of Electric Aircraft - Stanford University](http://large.stanford.edu/courses/2021/ph240/segal1/) ### ch6-13: TRUE - Speaker: Brett Adcock - Claim: Traditional turbofan engines and conventional propulsion systems become too inefficient when scaled to small sizes. - TLDR: Turbine engines do become significantly less efficient at small scales, while electric motors maintain roughly 90% efficiency regardless of size. - Explanation: Engineering and NASA sources confirm that turbofan efficiency drops substantially at smaller scales due to blade tip clearance issues, lower bypass ratios, and poor part-load performance. By contrast, brushless and permanent magnet electric motors maintain 90-95% efficiency across a wide range of sizes, which is the foundational rationale for distributed electric propulsion in eVTOL aircraft. - Sources: - [Smaller is Better for Jet Engines - NASA](https://www.nasa.gov/feature/glenn/2021/smaller-is-better-for-jet-engines) - [Overview of Electric Propulsion Motor Research for EVTOL | MDPI](https://www.mdpi.com/2673-4591/80/1/46) - [What Is an eVTOL Motor? 
A Complete Overview](https://www.ligpower.com/blog/what-is-an-evtol-motor.html) - [Engine efficiency - Wikipedia](https://en.wikipedia.org/wiki/Engine_efficiency) ### ch6-14: INEXACT - Speaker: Brett Adcock - Claim: You cannot efficiently build 12 propellers on a conventional helicopter because efficiency drops to near zero at small sizes. - TLDR: The core engineering principle is correct, but 'drops to near zero' overstates the inefficiency. Small gas turbines/piston engines do become significantly less efficient at small scale, making 12-rotor gas-powered configurations impractical, but efficiency does not literally approach zero. - Explanation: Research confirms that electric motor efficiency is scale-invariant (roughly 90-95% at both small and large sizes), while gas turbines and piston engines suffer increasing thermodynamic losses as they shrink, with higher specific fuel consumption and noise at smaller scales. This makes distributed multi-rotor gas propulsion impractical, validating Adcock's broader point. However, small gas engines do not drop to 'near zero' efficiency in normal operation (small ICEs typically achieve 15-30%), so the phrasing is a significant exaggeration of an otherwise sound engineering principle. 
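The point can be made concrete with textbook momentum theory (ideal induced hover power P = T^1.5 / sqrt(2ρA)). The aircraft mass, per-rotor disk area, and efficiency values below are illustrative assumptions, not figures from any cited aircraft:

```python
import math

RHO = 1.225  # sea-level air density, kg/m^3

def ideal_hover_power_w(thrust_n: float, disk_area_m2: float) -> float:
    """Ideal induced hover power from momentum (actuator-disk) theory."""
    return thrust_n ** 1.5 / math.sqrt(2 * RHO * disk_area_m2)

# Hypothetical 12-rotor craft: 2,000 kg all-up mass, 5 m^2 of disk per rotor.
weight_n = 2000 * 9.81
shaft_w = 12 * ideal_hover_power_w(weight_n / 12, disk_area_m2=5.0)

# Input power at the per-unit efficiencies discussed in this entry:
# ~93% for small electric motors, ~20% for a small piston engine.
for unit, eff in [("12 small electric motors", 0.93),
                  ("12 small piston engines", 0.20)]:
    print(f"{unit}: {shaft_w / eff / 1e3:.0f} kW input for hover")
```

A 20% efficient engine needs roughly five times the input power of a 93% efficient motor for the same hover thrust, a severe penalty that makes 12-rotor gas propulsion impractical, but still far from the 'near zero' of the transcript.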
- Sources: - [Maximizing eVTOL Motor Performance: From Materials to Manufacturing](https://www.carpenterelectrification.com/blog/benefits-evtol-motors) - [The promise of energy-efficient battery-powered urban aircraft - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC8609345/) - [Hybrid Propulsion: Halfway to Paradise?](https://evtol.news/news/halfway-to-paradise) - [Comparison of the Overall Energy Efficiency for Internal Combustion Engine Vehicles and Electric Vehicles](https://www.researchgate.net/publication/344860096_Comparison_of_the_Overall_Energy_Efficiency_for_Internal_combustion_Engine_Vehicles_and_Electric_Vehicles) ### ch6-15: FALSE - Speaker: Brett Adcock - Claim: Small electric motors maintain approximately 90% efficiency regardless of their physical size. - TLDR: Electric motor efficiency is NOT size-independent. Larger motors are generally more efficient than smaller ones, and small motors typically achieve around 80%, not 90%. - Explanation: Engineering data consistently shows that efficiency increases with motor size. Small motors (1-4 hp range) achieve roughly 80% efficiency, while medium-to-large motors reach 90% or above. The claim that a small motor and a large motor deliver the 'same efficiency' contradicts established motor engineering principles. While electric motors scale better than combustion engines (which is Adcock's broader point about distributed propulsion), the specific assertion that small motors are equally as efficient as large ones at ~90% is inaccurate. - Sources: - [Determining Electric Motor Load and Efficiency](https://www.energy.gov/sites/prod/files/2014/04/f15/10097517.pdf) - [Why does a big electric motor have better efficiency when compared to a small one? 
- Quora](https://www.quora.com/Why-does-a-big-electric-motor-have-better-efficiency-when-compared-to-a-small-one-What-parameters-influence-that) - [Electric Motors: Energy Efficiency Reference Guide - Natural Resources Canada](https://natural-resources.canada.ca/energy-efficiency/energy-star/electric-motors-energy-efficiency-reference-guide) ### ch6-16: TRUE - Speaker: Brett Adcock - Claim: With electrification, you can build an aircraft with 12 electric motors, creating significant redundancy. - TLDR: eVTOL aircraft with 12 electric motors for redundancy is a real, documented design approach. Archer Aviation's own Midnight aircraft uses exactly 12 motors. - Explanation: Distributed Electric Propulsion (DEP) is a core eVTOL engineering concept where multiple motors provide fault tolerance: if one fails, the others maintain flight. Archer Aviation's Midnight, the aircraft Brett Adcock helped develop, is specifically a 12-motor design, making his example directly grounded in his own work. - Sources: - [A Complete Guide to eVTOL | Dewesoft](https://dewesoft.com/blog/evtol-guide) - [What Is an eVTOL Motor? A Complete Overview](https://www.ligpower.com/blog/what-is-an-evtol-motor.html) ### ch6-17: UNVERIFIABLE - Speaker: Brett Adcock - Claim: In 2018, Adcock attended a week-long electric propulsion and aerodynamics design course at a Hyatt Regency Hotel in Atlanta. - TLDR: This is a personal anecdote about a private event. No public records confirm or deny the specific course at a Hyatt Regency in Atlanta in 2018. - Explanation: Adcock recounts attending a week-long electric propulsion and aerodynamics design course at a Hyatt Regency in Atlanta in 2018, where he met a University of Florida aerospace PhD student. AIAA and the Vertical Flight Society have run similar short courses, but no indexed public record confirms this specific event at that venue and date. As a first-person account of a private experience, it cannot be independently verified. 
- Sources: - [Electric VTOL Aircraft Design: Theory and Practice – AIAA](https://aiaa.org/courses/electric-vtol-aircraft-design-theory-and-practice/) ### ch6-18: TRUE - Speaker: Brett Adcock - Claim: The person Adcock met at the Atlanta course was pursuing a PhD in aerospace at the University of Florida. - TLDR: The person Adcock met was Moses Divaker, a PhD student in Mechanical & Aerospace Engineering at the University of Florida, confirmed by UF's own reporting. - Explanation: UF Herbert Wertheim College of Engineering published an article identifying the individual as Moses Divaker, a doctoral student under Professor Peter Ifju in the Department of Mechanical & Aerospace Engineering. They met at an AIAA conference while attending a seminar on eVTOL aircraft design, matching the substance of Adcock's account. - Sources: - [Take Off in a Flying Car With UF Engineers and Alumni - News from Herbert Wertheim College of Engineering](https://www.eng.ufl.edu/news/alumni-spotlight/take-off-in-a-flying-car-with-uf-engineers-and-alumni/) ### ch6-19: TRUE - Speaker: Brett Adcock - Claim: eVTOL stands for electric vertical takeoff and landing. - TLDR: eVTOL correctly stands for electric vertical takeoff and landing. - Explanation: All major aviation and industry sources confirm that eVTOL is the standard acronym for electric vertical takeoff and landing. It refers to aircraft that use electric power to take off and land vertically, as opposed to conventional VTOL aircraft which use combustion engines. - Sources: - [eVTOL - Wikipedia](https://en.wikipedia.org/wiki/EVTOL) - [Electric Vertical Takeoff and Landing (eVTOL) Aircraft: What Are They? | The Motley Fool](https://www.fool.com/terms/e/evtol-aircraft/) ### ch6-20: TRUE - Speaker: Brett Adcock - Claim: Adcock and a team built aircraft in 2018 and 2019 at the University of Florida. - TLDR: Confirmed. 
Archer Aviation was founded in October 2018 and conducted early eVTOL prototyping at a University of Florida lab through 2018-2019. - Explanation: Multiple sources confirm that Adcock and co-founder Adam Goldstein set up a research lab at UF (located off Archer Road in Gainesville) and engaged about 20 people in scale model design and flight testing. A half-scale eVTOL model was built in 90 days over the summer of 2019 and flight-tested in September 2019 at UF's IFAS site in Citra, Florida. Adcock himself stated publicly: 'Under reported, but I started Archer at the University of Florida.' - Sources: - [Archer Aviation - Wikipedia](https://en.wikipedia.org/wiki/Archer_Aviation) - [Take Off in a Flying Car With UF Engineers and Alumni - Herbert Wertheim College of Engineering](https://www.eng.ufl.edu/news/alumni-spotlight/take-off-in-a-flying-car-with-uf-engineers-and-alumni/) - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock) ### ch6-21: UNVERIFIABLE - Speaker: Brett Adcock - Claim: Adcock temporarily moved to Gainesville, Florida with his wife and daughter to work on early Archer Aviation development. - TLDR: Adcock's personal relocation to Gainesville with his family is a private anecdote that cannot be verified. The surrounding public facts about building an early Archer Aviation lab on Archer Road in Gainesville are well documented. - Explanation: Multiple sources confirm that Adcock co-founded Archer Aviation's first lab near Archer Road at the University of Florida in Gainesville, and that the company name itself derives from that road. However, whether he personally relocated there with his wife and daughter is a private family matter with no public record to confirm or deny. 
- Sources: - [Take Off in a Flying Car With UF Engineers and Alumni - News from Herbert Wertheim College of Engineering](https://www.eng.ufl.edu/news/alumni-spotlight/take-off-in-a-flying-car-with-uf-engineers-and-alumni/) - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock) ### ch6-22: UNVERIFIABLE - Speaker: Brett Adcock - Claim: Adcock funded a new lab right off Archer Road near the University of Florida. - TLDR: Archer Road is real and runs along the southwest side of UF's campus in Gainesville. Whether Adcock funded a specific lab there is a private business action with no public record. - Explanation: Archer Road is confirmed as a major corridor adjacent to the University of Florida in Gainesville, FL, consistent with Adcock's description. However, the specific claim that he personally funded a new lab off that road is a first-person anecdote about an early-stage private company decision, and no public records or reporting verify it. - Sources: - [Guide to Archer Road in Gainesville, FL | CMC](https://www.cmcapt.com/gainesville-archer-road-neighbohood/) - [Archer Road Neighborhood Guide — Living in Gainesville — VeryApt](https://www.veryapt.com/guides/neighborhood/1348-gainesville-archer-road/) ### ch6-23: INEXACT - Speaker: Brett Adcock - Claim: Archer Road is the main road at the University of Florida. - TLDR: Archer Road is a major road bordering the south side of UF's campus, but University Avenue (north border) is arguably more central to the university itself. - Explanation: Archer Road forms the southern boundary of the University of Florida campus and is a prominent road in Gainesville, home to UF Health Shands Hospital and major retail. However, University Avenue runs along the north side of campus and is more commonly associated with the core of student life and the university's social hub. Calling Archer Road 'the main road' of UF overstates its primacy. 
- Sources: - [Archer Road Neighborhood Guide — Living in Gainesville — VeryApt](https://www.veryapt.com/guides/neighborhood/1348-gainesville-archer-road/) - [Top Off-Campus Housing Options for UF Students (Updated for 2026) - Sweetwater](https://sweetwatergainesville.com/resources/where-to-live-near-uf-if-you-dont-want-to-live-on-campus/) ### ch6-24: TRUE - Speaker: Brett Adcock - Claim: Archer Aviation was named after Archer Road at the University of Florida. - TLDR: Archer Aviation was indeed named after Archer Road near the University of Florida in Gainesville. The lab Brett Adcock funded was located off that road. - Explanation: A University of Florida engineering profile and business reporting both confirm that Brett Adcock named his company after the Archer Aviation Lab he funded at UF, which was situated off Archer Road in Gainesville. Archer Road is a major thoroughfare on the south side of the UF campus, home to UF Health Shands Hospital and other institutions. A marketing firm later validated the name independently, and it was kept. - Sources: - [Take Off in a Flying Car With UF Engineers and Alumni - News from Herbert Wertheim College of Engineering](https://www.eng.ufl.edu/news/alumni-spotlight/take-off-in-a-flying-car-with-uf-engineers-and-alumni/) - [This founder is building a $3.8 billion urban air taxi network | The Business of Business](https://www.businessofbusiness.com/amp/articles/archer-evtol-brett-adcock-flying-cars-air-taxi/) ### ch6-25: INEXACT - Speaker: Brett Adcock - Claim: When Adcock began building eVTOL aircraft, there was no community of people who understood the intersection of electrification, rotorcraft, and fixed-wing aircraft. - TLDR: The talent pool was genuinely very scarce, but a small pioneer community did exist. Joby Aviation had been working at exactly this intersection since 2009, and Lilium since 2015. 
- Explanation: Industry reports confirm that expertise spanning electrification, rotorcraft, and fixed-wing aircraft was extremely limited around 2018, with a documented 'small pool of talent' at this intersection and a recognized engineering shortage. However, companies like Joby Aviation (founded 2009) and Lilium (2015) had already assembled engineering teams tackling precisely these combined disciplines before Archer was founded. Adcock's core point about extreme scarcity is well-supported, but 'no community' overstates the situation. - Sources: - [How the eVTOL sector is addressing the critical shortage of engineering talent - Vertical Mag](https://verticalmag.com/features/how-the-evtol-sector-is-addressing-the-critical-shortage-of-engineering-talent/) - [Joby Aviation - Wikipedia](https://en.wikipedia.org/wiki/Joby_Aviation) - [Study: eVTOL Industry Needs 10,000 Aerospace Engineers | Aviation International News](https://www.ainonline.com/aviation-news/general-aviation/2022-11-07/study-evtol-industry-needs-10000-engineers) ### ch6-26: FALSE - Speaker: Brett Adcock - Claim: Adcock moved Archer Aviation to California a few years after starting it. - TLDR: Archer Aviation was founded in Palo Alto, California in 2018. It was not moved to California after a few years. - Explanation: Multiple sources confirm Archer Aviation was established on October 16, 2018, in Palo Alto, California, from day one. Adcock and co-founder Adam Goldstein moved from New York to Palo Alto upon founding the company, after selling Vettery in 2018. The only relocation on record is an intra-California move from Palo Alto to San Jose in January 2022. 
- Sources: - [Archer Aviation - Wikipedia](https://en.wikipedia.org/wiki/Archer_Aviation) - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock) - [High-tech entrepreneur Brett Adcock on Figure, Archer, and early success](https://newatlas.com/remarkable-people/brett-adcock-history/) ### ch9-1: FALSE - Speaker: Brett Adcock - Claim: Brett Adcock spent 5 or 6 years working on robotics at Archer Aviation before founding Figure. - TLDR: Adcock was at Archer Aviation for roughly 3.5 years (Oct 2018 to April 2022), not 5 or 6 years. - Explanation: Archer Aviation was founded in October 2018 and Adcock stepped down in April 2022, a span of about 3.5 years. He then founded Figure AI in May 2022. The claim of 5 or 6 years significantly overstates his tenure at Archer. - Sources: - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock) - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI) ### ch9-3: TRUE - Speaker: Brett Adcock - Claim: At the time Figure was being founded, one of the best humanoid robots was the Boston Dynamics Atlas, which had a hydraulic system. - TLDR: The Boston Dynamics Atlas was indeed a hydraulic robot when Figure was founded in 2022. It wasn't retired in favor of an electric version until April 2024. - Explanation: The original Atlas, first unveiled in 2013, used hydraulic actuators, and subsequent generations through 2016 continued with a compact hydraulic system. Boston Dynamics only retired the hydraulic Atlas and announced an electric replacement in April 2024, two years after Figure was founded in 2022. At that time, Atlas was widely considered one of the most capable humanoid robots in existence. 
- Sources: - [Atlas shrugged: Boston Dynamics retires its hydraulic humanoid robot | TechCrunch](https://techcrunch.com/2024/04/16/atlas-shrugged-boston-dynamics-retires-its-humanoid-robot/) - [An Electric New Era for Atlas | Boston Dynamics](https://bostondynamics.com/blog/electric-new-era-for-atlas/) - [Atlas (robot) - Wikipedia](https://en.wikipedia.org/wiki/Atlas_(robot)) ### ch9-4: TRUE - Speaker: Brett Adcock - Claim: Boston Dynamics Atlas was heavy, high torque, and very leaky, with oil everywhere. - TLDR: The hydraulic Atlas was indeed heavy (~80-89 kg), high-torque, and notorious for hydraulic fluid leakage, which was a key reason Boston Dynamics eventually retired it in favor of an all-electric design. - Explanation: Boston Dynamics' hydraulic Atlas weighed 80-89 kg and used 28 hydraulic actuators, making it powerful but maintenance-heavy. Hydraulic fluid leakage and messiness were well-documented drawbacks, explicitly cited by Boston Dynamics when announcing the electric replacement in April 2024. The description of it being 'leaky with oil everywhere' aligns with how the robotics community characterized its hydraulic system. - Sources: - [Hello, Electric Atlas - IEEE Spectrum](https://spectrum.ieee.org/atlas-humanoid-robot) - [Atlas (robot) - Wikipedia](https://en.wikipedia.org/wiki/Atlas_(robot)) - [Atlas Humanoid Robot | Boston Dynamics](https://bostondynamics.com/products/atlas/) ### ch9-5: FALSE - Speaker: Brett Adcock - Claim: Boston Dynamics Atlas ran for about 20 minutes on a single charge. - TLDR: The hydraulic Atlas was officially rated at approximately 1 hour of mixed-mission operation, not 20 minutes. - Explanation: Boston Dynamics' hydraulic Atlas carried a 3.7 kWh lithium-ion battery pack designed for roughly 1 hour of mixed operation (walking, standing, manipulation). Even under heavy-duty, high-intensity use, runtime dropped to 'tens of minutes,' not specifically 20 minutes. 
The claim of 20 minutes significantly understates the robot's actual battery life. - Sources: - [What is the battery life of the Boston Dynamics Atlas robot? - Quora](https://www.quora.com/What-is-the-battery-life-of-the-Boston-Dynamics-Atlas-robot) - [ATLAS DRC Robot Is 75 Percent New, Completely Unplugged - IEEE Spectrum](https://spectrum.ieee.org/atlas-drc-robot-is-75-percent-new-completely-unplugged) - [Boston Dynamics Atlas Review: Humanoid Redefining Robotics](https://roboticsnewsai.com/boston-dynamics-atlas/) ### ch9-6: INEXACT - Speaker: Brett Adcock - Claim: Figure's humanoid robot has about 40 degrees of freedom, and each degree of freedom is a motor that can spin 360 degrees. - TLDR: Figure's current robot (Figure 03) has approximately 30 degrees of freedom per third-party specs, not ~40 as claimed. Earlier models (Figure 01) did have 40+ DOF. - Explanation: Third-party specifications for the Figure 03 (released October 2025) consistently report approximately 30 total degrees of freedom, including 20 DOF in the hands. The Figure 01 had 40+ DOF and Figure 02 had around 35, so Adcock's figure of 'about 40' appears to be a rounded-up approximation, possibly conflating earlier models. The claim that each DOF corresponds to a motor capable of 360-degree rotation is a simplified but broadly reasonable conceptual description of electric actuators used in humanoid joints. - Sources: - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03) - [Figure 02 humanoid robot is ready to get to work - The Robot Report](https://www.therobotreport.com/figure-02-humanoid-robot-is-ready-to-get-to-work/) - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI) ### ch9-7: TRUE - Speaker: Brett Adcock - Claim: The number of possible body positions of the robot (360 degrees to the power of 40 actuators) is greater than the number of atoms in the universe. 
- TLDR: 360^40 is approximately 10^102, far exceeding the ~10^80 atoms in the observable universe. The math holds up. - Explanation: 360^40 = (3.6 x 10^2)^40. Since log10(3.6) ≈ 0.556, 3.6^40 ≈ 10^22.25, giving 360^40 ≈ 10^102. The number of atoms in the observable universe is consistently estimated at roughly 10^80 (range 10^79 to 10^82). Adcock's claim therefore holds with roughly 22 orders of magnitude to spare. - Sources: - [Observable universe - Wikipedia](https://en.wikipedia.org/wiki/Observable_universe) - [How many atoms are in the observable universe? | Live Science](https://www.livescience.com/how-many-atoms-in-universe.html) ### ch9-8: TRUE - Speaker: Brett Adcock - Claim: Controlling a humanoid robot through explicit code is an intractable problem because of the enormous number of possible states. - TLDR: This is a well-established concept in robotics research. The high-dimensional, nonlinear state space of humanoid robots makes hand-coded explicit control infeasible in practice. - Explanation: Academic literature and industry sources consistently confirm that explicitly programming humanoid robot motion control is intractable due to the enormous, nonlinear, high-dimensional state space involved. This is precisely why the field has shifted to learning-based approaches (deep reinforcement learning, neural network controllers) trained in simulation rather than hand-coded solutions. Adcock's characterization accurately reflects mainstream robotics consensus.
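The size of that state space is easy to quantify; as a quick check of the 360^40 figure from the previous entry:

```python
import math

# Compare log10(360^40) with the ~10^80 atoms in the observable universe.
log10_positions = 40 * math.log10(360)
print(f"360^40 ~= 10^{log10_positions:.2f}")  # 10^102.25
print(log10_positions > 80)  # True: exceeds the atom-count estimate
```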
- Sources: - [Humanoid Locomotion and Manipulation: Current Progress and Challenges in Control, Planning, and Learning](https://arxiv.org/html/2501.02116v1) - [Real-world humanoid locomotion with reinforcement learning | Science Robotics](https://www.science.org/doi/10.1126/scirobotics.adi9579) - [Training a Whole-Body Control Foundation Model](https://www.agilityrobotics.com/content/training-a-whole-body-control-foundation-model) ### ch9-9: TRUE - Speaker: Brett Adcock - Claim: Figure's main computer processes what to tell all the joints to do at over 200 times per second in order to maintain balance. - TLDR: Figure's control system does run at 200 Hz, matching the claim. Publicly documented specs confirm this. - Explanation: Figure AI's published technical documentation confirms that System 1 (S1), the fast reactive visuomotor controller commanding all joints, operates at 200 Hz. This matches Brett Adcock's description of the main computer telling all joints what to do over 200 times per second for balance. Notably, the newer Helix 02 also added a System 0 layer running at 1 kHz specifically for balance and contact, but the 200 Hz figure for the main joint controller is accurate. - Sources: - [Helix: A Vision-Language-Action Model for Generalist Humanoid Control](https://www.figure.ai/news/helix) - [Introducing Helix 02: Full-Body Autonomy](https://www.figure.ai/news/helix-02) ### ch9-10: TRUE - Speaker: Brett Adcock - Claim: If the humanoid robot is powered off mid-run, it falls down, unlike a quadruped robot which is generally statically stable at any given point. - TLDR: This is a well-established robotics principle. Bipedal humanoids are dynamically stable (require active control to avoid falling), while quadrupeds are generally statically stable (center of mass stays within their support polygon even at rest). 
- Explanation: A statically stable robot will not fall even when all joints freeze, because its center of mass remains within the polygon formed by its ground contact points. Quadrupeds walking with at least three legs on the ground maintain this property. Bipedal humanoid robots, by contrast, rely on continuous dynamic balancing via active feedback control, so cutting power mid-motion causes them to collapse. This distinction is standard in robotics literature and confirmed by multiple academic sources.
- Sources:
  - [2.2: Static and Dynamic Stability - Engineering LibreTexts](https://eng.libretexts.org/Bookshelves/Mechanical_Engineering/Introduction_to_Autonomous_Robots_(Correll)/02:_Locomotion_and_Manipulation/2.02:__Static_and_Dynamic_Stability)
  - [Stability-Guaranteed and High Terrain Adaptability Static Gait for Quadruped Robots - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC7506578/)
  - [Quadruped Robots: Bridging Mechanical Design, Control, and Applications](https://www.mdpi.com/2218-6581/14/5/57)

### ch9-11: TRUE

- Speaker: Brett Adcock
- Claim: All 40 joints on the robot have position encoders that report the exact position of each motor.
- TLDR: Figure 03 does have 40 joints, and position encoders on robotic joints are standard for motor control.
- Explanation: Multiple sources confirm Figure 03 features 40 high-performance joints. Position encoders reporting motor position are a fundamental and standard component of robotic joint control systems, consistent with Adcock's description of his own product's sensor architecture. No source contradicts either element of the claim.
- Sources:
  - [Figure 03: The Next-Gen Humanoid Robot Built For Homes and Factories](https://parametric-architecture.com/figure-03-a-humanoid-robots/)
  - [Figure AI claims its Figure 03 robot can wash dishes, clean floors, and handle chores](https://the-decoder.com/figure-ai-claims-its-figure-03-robot-can-wash-dishes-clean-floors-and-handle-chores/)

### ch9-12: TRUE

- Speaker: Brett Adcock
- Claim: The robot has force sensing and torque sensing on board to detect the forces each joint is experiencing.
- TLDR: Figure's humanoid robots do incorporate force and torque sensing at their joints, consistent with Adcock's description.
- Explanation: Publicly available specifications for Figure 02 confirm force sensor arrays and custom per-joint motors with torque feedback. Figure 03's announced specs reference a redesigned sensory suite with vertically integrated sensors, and industry documentation on Figure's architecture confirms joint-level torque sensing is part of their design. This is also standard practice for humanoid robots of this class.
- Sources:
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [Figure 02 Review (2026): Specs, Price & Performance](https://blog.robozaps.com/b/figure-02-review)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch9-13: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Force feedback is sent to the motor control at 5,000 to 6,000 times per second.
- TLDR: Figure AI does not publicly disclose the exact inner-loop motor control frequency. The 5-6 kHz claim is technically plausible but cannot be independently confirmed.
- Explanation: Figure AI's publicly documented control hierarchy shows System 1 operating at 200 Hz (full-body) and System 0 at 'kilohertz rates' for balance and motor execution, but no precise figure for the inner force feedback loop is published. Industry norms for inner current/torque control loops in servo-based robotics typically range from 1 kHz to over 10 kHz, making 5-6 kHz plausible. However, the specific 5-6 kHz number for Figure's force feedback loop is an internal technical specification not confirmed by any public source.
- Sources:
  - [Helix: A Vision-Language-Action Model for Generalist Humanoid Control](https://www.figure.ai/news/helix)
  - [Figure Launches Helix 02 - Humanoid Robotics Technology](https://humanoidroboticstechnology.com/industry-news/figure-launches-helix-02/)
  - [Application Brief Motor Control in Humanoid Robots - Texas Instruments](https://www.ti.com/lit/slla659)

### ch9-14: TRUE

- Speaker: Brett Adcock
- Claim: Motor control for each motor is processed locally at the motor level because the feedback needs to happen too fast to route through a central computer.
- TLDR: Local motor control at the joint level is a well-established principle in humanoid robotics, adopted precisely because high-frequency feedback loops cannot be routed through a central computer fast enough.
- Explanation: Industry and academic sources confirm that humanoid robots use distributed architectures where low-level microcontrollers reside near each joint to handle fast motor control loops (typically 500 Hz to 2 kHz or higher), while a central computer handles higher-level whole-body coordination. Texas Instruments explicitly advocates moving real-time control closer to the actuator for this reason. Figure AI's own published Helix architecture separates a fast local control process (S1 at 200 Hz) from a slower high-level reasoning system, consistent with Adcock's description.
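A rough timing budget illustrates the point made in ch9-13 and ch9-14. The 200 Hz and 5 kHz rates come from the entries above; the bus round-trip latency is a generic assumed figure for illustration, not a published Figure AI number:

```python
central_rate_hz = 200    # published whole-body (S1) update rate
motor_loop_hz = 5_000    # claimed per-motor force-feedback rate

central_period_us = 1_000_000 / central_rate_hz  # budget per whole-body tick
motor_period_us = 1_000_000 / motor_loop_hz      # budget per motor-loop tick

# Assumed sensor -> central computer -> actuator round trip on a
# typical real-time fieldbus (hypothetical ballpark figure):
bus_round_trip_us = 500

print(central_period_us)                      # 5000.0
print(motor_period_us)                        # 200.0
print(bus_round_trip_us < central_period_us)  # True: fits the 200 Hz loop
print(bus_round_trip_us < motor_period_us)    # False: 5 kHz must run locally
```

At 5 kHz the entire loop budget is 200 µs, so even a modest network round trip would consume it; this is the quantitative version of "the feedback needs to happen too fast to route through a central computer."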
- Sources:
  - [Humanoid robot design resources | TI.com](https://www.ti.com/applications/industrial/robotics/humanoid-robot/overview.html)
  - [Distributed real-time processing for humanoid robots | IEEE Xplore](https://ieeexplore.ieee.org/document/1541082)
  - [Helix: A Vision-Language-Action Model for Generalist Humanoid Control](https://www.figure.ai/news/helix)
  - [Design and Low-Level Control of a Humanoid Robot Using a Distributed Architecture Approach](https://journals.sagepub.com/doi/10.1177/1077546306070592)

### ch9-15: TRUE

- Speaker: Shawn Ryan
- Claim: Shopify is behind 10% of all e-commerce in the US.
- TLDR: Shopify's share of total US e-commerce revenue is widely cited at approximately 10%, consistent with the claim.
- Explanation: Multiple sources confirm Shopify represents roughly 10% of total US e-commerce revenue or GMV. This figure is distinct from Shopify's ~28-30% share of the e-commerce platform market (i.e., merchants using Shopify vs. competitors). The 10% statistic is a standard figure used in Shopify's own marketing materials and is corroborated by third-party analyses.
- Sources:
  - [Shopify Market Share Stats 2026 - Global & Regional Data](https://redstagfulfillment.com/shopify-market-share/)
  - [Shopify statistics 2026: Latest usage, sales, and trends](https://www.omnisend.com/blog/shopify-statistics/)
  - [Amazon and Shopify U.S. E-commerce Market Share Nears 50%](https://myamazonguy.com/news/amazon-and-shopify-e-commerce-market-share/)

### ch10-1: TRUE

- Speaker: Brett Adcock
- Claim: Figure was founded in 2022.
- TLDR: Figure was indeed founded in 2022 by Brett Adcock.
- Explanation: Multiple sources, including Wikipedia and the video's own description, confirm that Figure AI was founded in 2022. Brett Adcock's statement that the company is roughly 3.5 years old (as of early 2026) is also consistent with a 2022 founding date.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)

### ch10-2: FALSE

- Speaker: Brett Adcock
- Claim: When Figure was founded in 2022, no AI had ever worked on a humanoid robot in history.
- TLDR: AI had been applied to humanoid robots well before Figure's 2022 founding. Deep reinforcement learning for humanoid locomotion dates back to at least 2017, with landmark results published through 2021.
- Explanation: Academic research shows a clear history of AI applied to humanoid robots predating 2022: DeepLoco (2017) used hierarchical deep RL for physically simulated humanoid locomotion, DeepMimic (2018) demonstrated physics-based humanoid character control via deep RL, and AMP (2021) applied adversarial training to humanoid motion. Even earlier, reinforcement learning was applied to humanoid robots in 2004 (Iida et al.). Adcock's statement appears to be rhetorical hyperbole to emphasize the commercial and practical challenge he faced, but the literal claim is contradicted by extensive published research.
- Sources:
  - [Deep Reinforcement Learning for Humanoid Robot Behaviors | Journal of Intelligent & Robotic Systems](https://link.springer.com/article/10.1007/s10846-022-01619-y)
  - [Deep Reinforcement Learning for Robotics: A Survey of Real-World Successes | Annual Reviews](https://www.annualreviews.org/content/journals/10.1146/annurev-control-030323-022510)
  - [Atlas (robot) - Wikipedia](https://en.wikipedia.org/wiki/Atlas_(robot))

### ch10-3: INEXACT

- Speaker: Brett Adcock
- Claim: When Figure was founded, there was no electric humanoid hardware that was functional enough to demonstrate the concept would work.
- TLDR: Functional electric humanoid robots did exist when Figure was founded in 2022, most notably Agility Robotics' Digit, which had secured $150M in funding and was being prepared for warehouse deployment.
- Explanation: Agility Robotics' Digit was an operational electric bipedal humanoid already raising significant venture capital (including from Amazon) in early 2022 and being demonstrated in warehouse contexts. Xiaomi's CyberOne and UBTECH's Walker were also revealed in 2022. However, Adcock's core point has merit in a narrower sense: no existing hardware demonstrated general-purpose AI-driven manipulation at the level he envisioned, and Boston Dynamics' Atlas (the most capable at the time) was still hydraulic. The claim overstates the absence of functional electric humanoid hardware, but is accurate about the lack of a general-purpose AI humanoid.
- Sources:
  - [Agility Robotics - Wikipedia](https://en.wikipedia.org/wiki/Agility_Robotics)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Humanoid robot - Wikipedia](https://en.wikipedia.org/wiki/Humanoid_robot)

### ch10-4: TRUE

- Speaker: Brett Adcock
- Claim: Brett self-funded Figure entirely in its first year.
- TLDR: Brett Adcock self-funded Figure entirely in its first year, investing roughly $100 million of his own money before outside investors came on board.
- Explanation: Multiple sources confirm Adcock bootstrapped Figure after founding it in May 2022, funding it personally due to early difficulty attracting investors. The first outside funding round (a $70M raise) did not close until May 2023, meaning the entire first year of operations was indeed self-funded by Adcock.
- Sources:
  - [Report: Figure Business Breakdown & Founding Story | Contrary Research](https://research.contrary.com/company/figure)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch10-5: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure's monthly burn rate reached $1 million in month 4 of operations.
- TLDR: This is Brett Adcock's personal account of Figure's internal burn rate, which cannot be independently verified by third parties.
- Explanation: The $1M/month burn rate by month four is reported exclusively through Adcock's own public statements in interviews, including this one. No independent financial records or third-party audits corroborate this internal company metric. It is consistent with his confirmed self-funding of roughly $100M in Figure's first year, but the specific figure remains unverifiable.
- Sources:
  - [Report: Figure Business Breakdown & Founding Story | Contrary Research](https://research.contrary.com/company/figure)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)

### ch10-6: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure built a 40-person team in its early months.
- TLDR: This is a first-person account about Figure's internal early headcount with no public record to confirm or deny the specific 40-person figure.
- Explanation: Public sources confirm Figure scaled rapidly after its 2022 founding, with early employee counts listed in the 11-50 range before growing to over 500 by 2026. However, the specific claim that the team reached exactly 40 people in the early months is an internal operational detail not documented in any accessible public source.
- Sources:
  - [Figure AI builds working humanoid within 1 year - The Robot Report](https://www.therobotreport.com/rbr50-company-2024/figure-ai-builds-working-humanoid-within-1-year/)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch10-7: INEXACT

- Speaker: Brett Adcock
- Claim: Electric motors are designed to run at full RPMs and are most efficient at those speeds.
- TLDR: Electric motors are indeed most efficient at high speeds rather than at stall, but peak efficiency occurs at an intermediate RPM (roughly 75-90% of no-load speed), not at absolute full RPMs.
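The intermediate efficiency peak described in ch10-7's TLDR falls out of the standard linear model of a brushed DC motor, in which current falls linearly from the stall current to a small no-load current as speed rises. A toy numerical sketch (all parameter values are hypothetical, and constant factors are dropped since only the curve's shape matters here):

```python
I_STALL = 50.0   # stall current, A (hypothetical)
I_NO_LOAD = 1.0  # no-load friction current, A (hypothetical)

def efficiency(s):
    """Relative efficiency at speed fraction s of no-load speed,
    for an idealized brushed DC motor with a linear current-speed line."""
    current = I_NO_LOAD + (I_STALL - I_NO_LOAD) * (1.0 - s)
    # Output power tracks useful torque times speed; input tracks current.
    return s * (current - I_NO_LOAD) / current

best = max((s / 1000 for s in range(1001)), key=efficiency)
print(best)             # 0.876: peak well below full (no-load) speed
print(efficiency(0.0))  # 0.0: at stall, all input power becomes heat
print(efficiency(1.0))  # 0.0: at no-load speed, no useful torque
```

With these example numbers the peak lands near 88% of no-load speed, inside the 75-90% band cited in the TLDR, while efficiency is exactly zero at stall, the "holding" condition discussed in ch10-8 and ch10-9.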
- Explanation: The broader point that motors perform poorly at stall (0 RPM, 0% efficiency) and prefer high rotational speeds is well-supported. However, peak efficiency is not at maximum/full RPMs, where useful torque output is near zero. Engineering sources consistently place peak efficiency at around 75-80% of rated load or 75-90% of no-load speed, where copper losses and core losses are balanced. Saying motors are most efficient at 'full RPMs' is an oversimplification that misstates where the optimal operating point actually is.
- Sources:
  - [How To Read DC Motor & Gear Motor Performance Curves](https://islproducts.com/design-note/how-to-read-dc-motor-gear-motor-performance-curves/)
  - [Determining Electric Motor Load and Efficiency](https://www.energy.gov/sites/prod/files/2014/04/f15/10097517.pdf)
  - [Motor Efficiency Question (theory) | Endless Sphere DIY EV Forum](https://endless-sphere.com/sphere/threads/motor-efficiency-question-theory.118218/)
  - [Importance of Torque Speed & Efficiency Motor Curves in Motors?](https://www.telcointercon.com/understanding-motor-curves-torque-speed-and-efficiency/)

### ch10-8: TRUE

- Speaker: Brett Adcock
- Claim: When electric motors are stationary but holding power and forces, they operate at a bad point of the torque speed curve.
- TLDR: Correct. A stationary motor holding torque operates at the stall point, where mechanical power output is zero and efficiency is at its worst.
- Explanation: On the torque-speed curve, power output equals torque multiplied by speed. At zero speed (stall/holding), output power is zero while the motor still draws maximum current, converting all energy to heat. Peak motor efficiency occurs near rated speed and torque, not at standstill. This is a well-established principle in electric motor engineering.
- Sources:
  - [D.C. Motor Torque/Speed Curve Tutorial:::Understanding Motor Characteristics](http://lancet.mit.edu/motors/motors3.html)
  - [Understanding Motor Torque-Speed and Efficiency Curves | TelcoMotion Knowledge Hub — TelcoMotion](https://www.telcomotion.com/knowledge-hub/understanding-motor-curves-torque-speed-and-efficiency)
  - [Torque Speed Curves Explained For Better Motor Performance](https://ineedmicromotors.com/torque-speed-curves-motor-performance/)

### ch10-9: TRUE

- Speaker: Brett Adcock
- Claim: Humanoid robots frequently require motors to be stationary while holding forces, such as when standing or holding an object.
- TLDR: Humanoid robot motors must routinely hold torque at near-zero speed (standing, gripping), which is the least efficient operating point on the torque-speed curve.
- Explanation: Motor engineering sources confirm that holding a static pose requires continuous current with no mechanical output, maximizing I²R (copper) losses and heat buildup. This is widely recognized as a core design challenge for humanoid robots, distinct from quadrupeds or wheeled robots that spend less time in quasi-static stances. Solutions such as fail-safe electromagnetic brakes and high-Km motors exist precisely to address this stationary-holding inefficiency.
- Sources:
  - [Build a Better Humanoid With Lightweight, Torque-Dense, Robot-Ready Motion](https://www.kollmorgen.com/en-us/blogs/build-better-humanoid-lightweight-torque-dense-robot-ready-motion)
  - [Considerations when Selecting Motors for Humanoid Joints](https://humanoidroboticstechnology.com/articles/considerations-when-selecting-motors-for-humanoid-joints/)
  - [Torque Motors for Humanoid Robots, a comprehensive Guide | TQ](https://www.tq-group.com/en/products/tq-robodrive/torque-motors-humanoid-robots-whitepaper/)

### ch10-10: INEXACT

- Speaker: Brett Adcock
- Claim: Figure has raised approximately $2 billion in funding.
- TLDR: Figure AI has raised roughly $1.75 billion across all rounds, making '$2 billion' a slight overstatement but in the right ballpark.
- Explanation: Confirmed rounds include a ~$70M Series A (2023), a $675M Series B (2024), and a Series C exceeding $1 billion (September 2025), totaling approximately $1.745B or more. Brett's 'approximately $2 billion or so' is a reasonable rough estimate but modestly overstates the confirmed total.
- Sources:
  - [Figure Exceeds $1B in Series C Funding at $39B Post-Money Valuation](https://www.figure.ai/news/series-c)
  - [Figure AI raises whopping $675M to commercialize humanoids](https://www.therobotreport.com/figure-ai-raises-675m-to-commercialize-humanoids/)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch10-11: TRUE

- Speaker: Brett Adcock
- Claim: Figure 1, the company's first-generation robot, was walking in under 12 months from the company's incorporation in 2022.
- TLDR: Figure AI was incorporated in May 2022 and achieved bipedal walking with Figure 01 within approximately 12 months, a milestone corroborated by industry sources.
- Explanation: Multiple sources confirm Figure AI was founded in May 2022, and Figure 01 took its first steps around May 2023. The Robot Report even headlined this achievement as 'Figure AI builds working humanoid within 1 year,' directly corroborating Adcock's claim of under 12 months from incorporation.
- Sources:
  - [Figure AI builds working humanoid within 1 year - The Robot Report](https://www.therobotreport.com/rbr50-company-2024/figure-ai-builds-working-humanoid-within-1-year/)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch10-12: UNSUBSTANTIATED

- Speaker: Brett Adcock
- Claim: Getting Figure 1 walking in under 12 months from founding was, at the time, the fastest such achievement in history.
- TLDR: Adcock himself hedged this as a personal belief, not a confirmed record. No independent source has verified it against all prior humanoid robot projects.
- Explanation: Adcock's own quoted words elsewhere are 'I happen to think it's probably one of the fastest in history', a qualified personal belief rather than a verified historical record. Secondary sources that describe it as a record all trace back to Adcock's own claim rather than any independent comparative analysis. No formal benchmark or authoritative robotics body has confirmed Figure AI's timeline as definitively the fastest ever for a humanoid walking milestone.
- Sources:
  - [Figure AI builds working humanoid within 1 year - The Robot Report](https://www.therobotreport.com/rbr50-company-2024/figure-ai-builds-working-humanoid-within-1-year/)
  - [Figure's humanoid is already walking and performing autonomous tasks](https://newatlas.com/robotics/figure-humanoid-walking/)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch10-13: TRUE

- Speaker: Brett Adcock
- Claim: Figure 2 is the company's second-generation robot.
- TLDR: Figure 02 is confirmed to be Figure AI's second-generation humanoid robot.
- Explanation: Multiple sources, including Figure AI's own press release and robotics publications, describe Figure 02 as the company's second-generation humanoid robot, following Figure 01.
- Sources:
  - [Figure unveils Figure 02, its second-generation humanoid, setting new standards in AI and robotics](https://www.prnewswire.com/news-releases/figure-unveils-figure-02-its-second-generation-humanoid-setting-new-standards-in-ai-and-robotics-302214889.html)
  - [Figure AI unveils second-generation humanoid robot Figure 02 - Robotics 24/7](https://www.robotics247.com/article/figure_ai_unveils_second_generation_humanoid_robot_figure_02/)

### ch10-14: TRUE

- Speaker: Brett Adcock
- Claim: Figure's K-Cup/Keurig demonstration on Figure 1 used neural networks to take camera pixel inputs and output motor trajectories, with no manual code.
- TLDR: Figure's coffee-making demo on Figure 1 did use end-to-end neural networks mapping camera pixels directly to motor actions, with no hand-coded logic.
- Explanation: Multiple sources confirm that Figure 01's Keurig/K-Cup demonstration used 'neural network visuomotor transformer policies, mapping pixels directly to actions,' which is exactly what Adcock describes. The demo was announced publicly in January 2024, though Adcock states the internal breakthrough occurred in 2023, which is consistent with development preceding the announcement.
- Sources:
  - [Figure's humanoid can now watch, learn and perform tasks autonomously](https://newatlas.com/robotics/figure-humanoid-learning-tasks-autonomously/)
  - [Figure 01 Humanoid Bot Has Learned to Make Coffee | NextBigFuture.com](https://www.nextbigfuture.com/2024/01/figure-01-humanoid-bot-has-learned-to-make-coffee.html)

### ch10-15: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure completed the neural network demonstration on Figure 1 in 2023, and Brett considers it the most significant demonstration the company has done in nearly 4 years.
- TLDR: No public evidence confirms a neural network demo on Figure 1 in 2023. The first publicly documented such demos came in early 2024.
- Explanation: Figure AI's publicly documented neural network demonstrations on Figure 01 (pixels to motor actions, no handwritten code) date to January/February 2024, with the viral OpenAI-powered demo on March 13, 2024. Brett appears to be referencing an internal technical milestone in 2023 that has no public record. The 'nearly 4 years' framing is accurate, as Figure was founded in May 2022 and the podcast aired March 2026.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Report: Figure Business Breakdown & Founding Story | Contrary Research](https://research.contrary.com/company/figure)
  - [AI Start-Up Figure Shows Off Conversational Robot Infused With OpenAI Tech - Decrypt](https://decrypt.co/221634/ai-start-up-figure-shows-off-conversational-robot-infused-with-openai-tech)

### ch10-16: FALSE

- Speaker: Brett Adcock
- Claim: Figure's 2023 neural network demonstration running on a humanoid robot was probably one of the first examples of this in the world.
- TLDR: Neural networks running on humanoid robots predate Figure's 2023 demo by many years, making the "first in the world" claim clearly overstated.
- Explanation: Research shows neural networks on humanoid robots date back to at least 2012 (iSpike with the iCub humanoid), with deep reinforcement learning for humanoid locomotion well established by 2019. By 2023, companies like Boston Dynamics and Agility Robotics already had neural-network-powered humanoid demonstrations. Google DeepMind also introduced vision-language-action (VLA) models for robots in 2023. Figure's work is notable but is far from the first example of neural nets running on a humanoid.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Real-world humanoid locomotion with reinforcement learning | Science Robotics](https://www.science.org/doi/10.1126/scirobotics.adi9579)
  - [Humanoid Robots and Humanoid AI: Review, Perspectives and Directions](https://arxiv.org/html/2405.15775v2)
  - [Why Do Humanoid Robots Still Struggle With the Small Stuff? | Quanta Magazine](https://www.quantamagazine.org/why-do-humanoid-robots-still-struggle-with-the-small-stuff-20260313/)

### ch10-17: INEXACT

- Speaker: Brett Adcock
- Claim: Helix is Figure's neural network stack.
- TLDR: Helix is indeed Figure's AI neural network system, but it is more precisely a Vision-Language-Action (VLA) model rather than just a "neural network stack."
- Explanation: Figure's own documentation describes Helix as a Vision-Language-Action (VLA) model that controls humanoid robots end-to-end from pixels and language commands. The term "neural network stack" is an informal simplification, but the core claim that Helix is Figure's neural network-based AI system is accurate.
- Sources:
  - [Helix: A Vision-Language-Action Model for Generalist Humanoid Control](https://www.figure.ai/news/helix)
  - [Introducing Helix 02: Full-Body Autonomy](https://www.figure.ai/news/helix-02)

### ch10-18: TRUE

- Speaker: Brett Adcock
- Claim: Figure 3 is the company's third-generation robot.
- TLDR: Figure 3 (officially "Figure 03") is confirmed as Figure AI's third-generation humanoid robot, following Figure 01 and Figure 02.
- Explanation: Multiple sources, including Figure AI's own announcement and robotics publications, confirm that Figure 03 is the company's third-generation humanoid robot. The progression from Figure 01 to Figure 02 to Figure 03 is well documented.
- Sources:
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [Figure AI releases Figure 03, its third-generation humanoid robot - Robotics 24/7](https://www.robotics247.com/article/figure_ai_releases_figure_03_its_third_generation_humanoid_robot)
  - [Figure unveils third-generation humanoid robot for home and commercial use](https://roboticsandautomationnews.com/2025/10/09/figure-unveils-third-generation-humanoid-robot/95351/)

### ch11-1: INEXACT

- Speaker: Brett Adcock
- Claim: Figure 2 robots were deployed at BMW and worked a 10-hour shift every single day for 6 months.
- TLDR: Figure 2 robots did work 10-hour shifts at BMW, but the deployment lasted 11 months, not 6, and shifts ran Monday-Friday, not every single day.
- Explanation: According to Figure AI's own published account, the BMW deployment totaled 11 months (not 6), generating 1,250+ hours of runtime. The robots ran 10-hour shifts Monday through Friday, not every single day. The core facts (Figure 2 at BMW, long daily shifts, real production work) are correct, but the claim understates the duration and overstates the shift frequency.
- Sources:
  - [F.02 Contributed to the Production of 30,000 Cars at BMW](https://www.figure.ai/news/production-at-bmw)
  - [Figure humanoid robots retire bruised after 11 months of work at BMW](https://interestingengineering.com/ai-robotics/figure-humanoid-robots-retires-bmw)

### ch11-2: TRUE

- Speaker: Brett Adcock
- Claim: At commercial client deployments, Figure robots are evaluated against human KPIs for speed and performance.
- TLDR: Figure AI publicly benchmarks its robots against human KPIs at commercial client sites.
- Explanation: At BMW's Spartanburg plant, Figure and BMW defined KPIs directly tied to human performance, including cycle time (84 seconds), placement accuracy (99%+ per shift), and zero operator interventions. In logistics deployments, Figure's Helix system is explicitly measured against human-level dexterity and speed (e.g., seconds per package vs. human operators). These are well-documented in Figure AI's own public announcements.
- Sources:
  - [Helix Accelerating Real-World Logistics](https://www.figure.ai/news/helix-logistics)
  - [Scaling Helix: a New State of the Art in Humanoid Logistics](https://www.figure.ai/news/scaling-helix-logistics)
  - [Humanoid Robots: From Demos to Deployment | Bain & Company](https://www.bain.com/insights/humanoid-robots-from-demos-to-deployment-technology-report-2025/)

### ch11-3: INEXACT

- Speaker: Brett Adcock
- Claim: BMW's body shop builds X3 and X5 models.
- TLDR: BMW's Spartanburg body shop does build X3 and X5 models, but it also produces X4, X6, X7, and XM models, making the claim incomplete.
- Explanation: BMW Plant Spartanburg is the global X model hub and its body shops handle the X3, X4, X5, X6, X7, and XM. Adcock's statement that the body shop builds X3 and X5 is not wrong, but it omits the other four models produced there. The Figure AI robot deployment specifically focused on X3 production, supporting more than 30,000 X3s over ten months.
- Sources:
  - [BMW Group Plant Spartanburg](https://www.bmwgroup-werke.com/spartanburg/en/our-plant.html)
  - [F.02 Contributed to the Production of 30,000 Cars at BMW](https://www.figure.ai/news/production-at-bmw)
  - [+++ BMW Group bringing Physical AI to Europe +++](https://www.press.bmwgroup.com/global/article/detail/T0455864EN/bmw-group-to-deploy-humanoid-robots-in-production-in-germany-for-the-first-time?language=en)

### ch11-4: TRUE

- Speaker: Brett Adcock
- Claim: Figure started building BMW X3s on the line in January 2025.
- TLDR: Figure AI's robots began production work on BMW X3s around January 2025, consistent with the claim. The 11-month deployment concluded in November 2025.
- Explanation: Figure AI's own post-deployment summary (published November 19, 2025) describes an 11-month Figure 02 deployment at BMW's Spartanburg plant building X3s, which places the start date at approximately December 2024 to January 2025. The robots performed sheet-metal loading tasks, contributing to the production of over 30,000 BMW X3 vehicles across roughly 1,250 operating hours.
- Sources:
  - [F.02 Contributed to the Production of 30,000 Cars at BMW](https://www.figure.ai/news/production-at-bmw)
  - [Humanoid robots complete 11-month project at BMW plant | Repairer Driven News](https://www.repairerdrivennews.com/2025/11/25/humanoid-robots-complete-11-month-project-at-bmw-plant/)

### ch11-5: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Brett Adcock bought the first 4 BMW X3s that were built with Figure robot assistance.
- TLDR: Figure robots did work on BMW X3 production, but Adcock's personal purchase of the first 4 cars is a private action with no public record.
- Explanation: Multiple sources confirm Figure AI humanoid robots were deployed on the BMW X3 production line in Spartanburg, SC, contributing to over 30,000 vehicles. However, the specific claim that Brett Adcock personally purchased the first 4 X3s produced with robot assistance is a first-person anecdote about a private transaction, and no public source documents or confirms this purchase.
- Sources:
  - [F.02 Contributed to the Production of 30,000 Cars at BMW](https://www.figure.ai/news/production-at-bmw)
  - [Figure Claims Humanoid Robot Has Worked Five Months on BMW Production Line | Humanoids Daily](https://www.humanoidsdaily.com/news/figure-claims-humanoid-robot-has-worked-five-months-on-bmw-production-line)

### ch11-6: INEXACT

- Speaker: Brett Adcock
- Claim: BMW's manufacturing facilities use 12-foot KUKA robot arms bolted to the ground that carry car chassis.
- TLDR: BMW does use large floor-mounted KUKA robot arms to handle car bodies in manufacturing, but the specific '12-foot' size is an informal estimate not corroborated by documentation.
- Explanation: KUKA is a confirmed major supplier to BMW, providing thousands of robots for body-in-white (chassis) production, including heavy handling and welding tasks across multiple BMW plants globally. Large KUKA industrial robots (e.g., KR series) are indeed floor-mounted and handle heavy car body components. However, the '12-foot' figure is an informal speaker approximation with no specific documentation backing that precise measurement.
- Sources:
  - [KUKA lands order for several hundred industrial robots plus car body assembly systems | KUKA AG](https://www.kuka.com/en-us/company/press/news/2010/04/kuka-lands-order-for-several-hundred-industrial-robots-plus-car-body-assembly-systems)
  - [BMW Group Munich: human and robot work hand in hand | KUKA AG](https://www.kuka.com/en-us/industries/solutions-database/2017/11/solutions-systems-bmw-muenchen)
  - [BMW places major order for 5,000 industrial robots - The Manufacturer](https://www.themanufacturer.com/articles/bmw-places-major-order-for-5000-industrial-robots/)

### ch11-7: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: End effectors at BMW's manufacturing line are switched out in 1 to 2 seconds.
- TLDR: Automated robotic tool changers do swap end effectors in seconds, but the specific 1-2 second figure at BMW cannot be independently confirmed.
- Explanation: Industry sources confirm that robotic quick-change systems swap end effectors "within seconds," and the general claim is plausible. However, no public source provides the specific 1-2 second changeover time for BMW's manufacturing lines. The claim is based on Brett Adcock's firsthand visit to a BMW facility, which is a private, unverifiable observation.
- Sources:
  - [An end-effector quick-change device for robots increases production efficiency - LH-TC](https://www.ltautotools.com/an-end-effector-quick-change-device-for-robots-increases-production-efficiency.html)
  - [Robot Quick Changer End of Arm Tooling | OnRobot](https://onrobot.com/us/products/quick-changer)
  - [F.02 Contributed to the Production of 30,000 Cars at BMW](https://www.figure.ai/news/production-at-bmw)

### ch11-8: TRUE

- Speaker: Brett Adcock
- Claim: Figure robots were assigned to the body shop line at BMW to attach the rear header (the back plate) by placing sheet metal onto a fixture.
- TLDR: Figure robots did work in BMW's body shop placing sheet metal onto a welding fixture. The rear header detail is consistent with confirmed reports.
- Explanation: Figure AI's own published account of the BMW Spartanburg pilot confirms the robots worked in the body shop performing sheet-metal loading, picking sheet-metal parts from racks and placing them on a welding fixture. BMW and independent sources also confirm this body-shop, pick-and-place assignment. Adcock's description of the specific part as the 'rear header' (back plate) is an additional detail not explicitly named in public documentation, but it is fully consistent with all confirmed details of the deployment.
- Sources:
  - [F.02 Contributed to the Production of 30,000 Cars at BMW](https://www.figure.ai/news/production-at-bmw)
  - [+++ BMW Group bringing Physical AI to Europe +++](https://www.press.bmwgroup.com/global/article/detail/T0455864EN/bmw-group-to-deploy-humanoid-robots-in-production-in-germany-for-the-first-time?language=en)
  - [Humanoid Robots Complete Trial Project at BMW Assembly Plant | ASSEMBLY](https://www.assemblymag.com/articles/99678-humanoid-robots-complete-trial-project-at-bmw-assembly-plant)

### ch11-9: INEXACT

- Speaker: Brett Adcock
- Claim: The Figure robot task at BMW involved placing 3 different parts onto a fixture over a 10-hour shift, after which a KUKA arm spot welded the assembly and moved it down the line.
- TLDR: The 3 parts, fixture placement, and 10-hour shift are all confirmed by Figure AI's own account. The welding step is confirmed but official sources say 'six-axis industrial robots,' not specifically KUKA.
- Explanation: Figure AI's official BMW deployment page confirms the robot placed three sheet-metal parts per cycle onto a welding fixture during a 10-hour Monday-Friday shift. After placement, industrial robots weld the parts and feed them down the main line.
BMW's Spartanburg body shop is known to use over 2,600 KUKA robots, making Adcock's KUKA attribution plausible, but official descriptions of this specific task use the generic term 'six-axis industrial robots' rather than naming KUKA.
- Sources:
  - [F.02 Contributed to the Production of 30,000 Cars at BMW](https://www.figure.ai/news/production-at-bmw)
  - [Is Figure AI's BMW robot partnership what its founder Brett Adcock says? | Fortune](https://fortune.com/2025/04/06/figure-ai-bmw-humanoid-robot-partnership-details-reality-exaggeration/)

### ch11-10: INEXACT

- Speaker: Brett Adcock
- Claim: Multiple Figure robots were in operation every day during the 6-month BMW deployment.
- TLDR: Two Figure 02 robots were confirmed deployed at BMW, running daily 10-hour shifts, but the total deployment lasted 11 months, not 6.
- Explanation: Public reports from Figure AI and BMW confirm two Figure 02 robots completed the BMW Spartanburg project, running 10-hour shifts Monday through Friday and logging 1,250+ hours of runtime. The claim that 'multiple robots' operated daily is supported. However, the deployment is publicly documented as 11 months in total, with 6 months representing the initial delivery and testing phase before full production-line deployment at month 10. Adcock's reference to '6 months' does not match the publicly reported total duration.
- Sources:
  - [F.02 Contributed to the Production of 30,000 Cars at BMW](https://www.figure.ai/news/production-at-bmw)
  - [Humanoid robots complete 11-month project at BMW plant | Repairer Driven News](https://www.repairerdrivennews.com/2025/11/25/humanoid-robots-complete-11-month-project-at-bmw-plant/)
  - [Figure's Humanoid Robots Contribute to BMW Production](https://humanoidroboticstechnology.com/industry-news/figure-humanoid-robots-contributed-to-the-production-of-30000-cars-at-bmw/)

### ch11-11: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: The same Figure 2 robot that started the first day of the 6-month BMW deployment was still operational and completed the final shift 6 months later.
- TLDR: Figure AI confirmed the BMW deployment lasted many months with minimal hardware failures, but no public source specifies that the exact same robot unit ran both the opening and closing shifts.
- Explanation: Figure AI's published account of the BMW Spartanburg deployment confirms strong hardware durability across 1,250+ operating hours and over 90,000 parts loaded, consistent with the broader claim. However, the specific operational detail that the very same individual robot unit completed both the first and last shifts is not documented in any public source and is effectively an internal, first-person operational anecdote.
- Sources:
  - [F.02 Contributed to the Production of 30,000 Cars at BMW](https://www.figure.ai/news/production-at-bmw)
  - [+++ BMW Group bringing Physical AI to Europe +++](https://www.press.bmwgroup.com/global/article/detail/T0455864EN/bmw-group-to-deploy-humanoid-robots-in-production-in-germany-for-the-first-time?language=en)

### ch11-12: TRUE

- Speaker: Brett Adcock
- Claim: Figure's humanoid robots run approximately 40 degrees of freedom motors.
- TLDR: Figure 01, the robot used in the BMW deployment, has 40+ degrees of freedom, matching Adcock's claim of approximately 40.
- Explanation: Publicly available specs confirm Figure 01 features 40+ degrees of freedom. The BMW pilot ran in 2024 using Figure 01, so the ~40 DOF figure Adcock cited is accurate. Figure 02 was later redesigned with 35 body DOF plus 16 hand DOF, but that does not affect the claim as stated.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Figure 02 humanoid robot is ready to get to work - The Robot Report](https://www.therobotreport.com/figure-02-humanoid-robot-is-ready-to-get-to-work/)
  - [QVIRO | Figure Figure 02 Specifications](https://qviro.com/product/figure/figure-02/specifications)

### ch7-1: TRUE

- Speaker: Brett Adcock
- Claim: Brett Adcock took Archer Aviation public within 3 years of founding the company.
- TLDR: Archer Aviation was founded in 2018 and went public in September 2021, roughly 3 years later.
- Explanation: Archer Aviation was founded in 2018 and completed a SPAC merger with Atlas Crest Investment Corp. on September 16-20, 2021, listing on the NYSE under ticker ACHR. This places the public listing at approximately 3 years after the company's founding, consistent with Adcock's claim.
- Sources:
  - [Archer Aviation - Wikipedia](https://en.wikipedia.org/wiki/Archer_Aviation)
  - [Archer Aviation Is Latest eVTOL Firm To Start Public Trading](https://www.flyingmag.com/archer-goes-public/)

### ch7-2: INEXACT

- Speaker: Brett Adcock
- Claim: Archer Aviation is a publicly traded company valued at approximately $6 billion.
- TLDR: Archer Aviation is indeed publicly traded on NYSE (ACHR), but its market cap at the video's publication date (March 30, 2026) was approximately $3.7B, not $6B.
- Explanation: Archer went public via a SPAC merger with Atlas Crest Investment Corp. in September 2021, within about 3 years of its 2018 founding. The $6B figure appears to reference an earlier peak: ACHR reached roughly $6.3B in early 2026 before declining to around $3.7B by March 30, 2026.
The company does report an order backlog exceeding $6B, which may have been conflated with market cap.
- Sources:
  - [Archer Aviation (ACHR) Market Cap & Net Worth](https://stockanalysis.com/stocks/achr/market-cap/)
  - [Archer Aviation - Market capitalization](https://companiesmarketcap.com/archer-aviation/marketcap/)
  - [Archer Aviation Is Latest eVTOL Firm To Start Public Trading](https://www.flyingmag.com/archer-goes-public/)

### ch7-3: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Archer Aviation designed 4 to 5 generations of aircraft.
- TLDR: Adcock's own official bio consistently states 5 generations, but public records only document 2 named aircraft (Maker and Midnight).
- Explanation: Brett Adcock's personal biography on brettadcock.com states he 'architected, engineered, and flight-tested 5 generations of aircraft' at Archer, which is consistent with the podcast claim of '4 or 5 generations.' However, no independent third-party source documents more than two publicly-named aircraft generations (the Maker demonstrator and the Midnight production vehicle). The additional internal prototypes or sub-scale vehicles implied by the '5 generations' figure cannot be independently confirmed.
- Sources:
  - [Bio | Brett Adcock Official](https://www.brettadcock.com/bio)
  - [Archer Aviation - Wikipedia](https://en.wikipedia.org/wiki/Archer_Aviation)

### ch7-4: TRUE

- Speaker: Brett Adcock
- Claim: Archer Aviation went public through a SPAC process.
- TLDR: Archer Aviation did go public via a SPAC merger with Atlas Crest Investment Corp., completing the deal on September 16, 2021.
- Explanation: Archer Aviation merged with Atlas Crest Investment Corp. (ACIC), a SPAC, and began trading on the NYSE under the ticker ACHR in September 2021 at a $3.8 billion valuation. This is well-documented across multiple sources including TechCrunch, Flying Magazine, and SEC filings.
- Sources:
  - [Archer lands $1B order from United Airlines and a SPAC deal | TechCrunch](https://techcrunch.com/2021/02/10/archer-lands-1-1b-order-from-united-airlines-and-a-spac-deal/)
  - [Archer to go public at $3.8 billion valuation, United Airlines orders $1 billion worth of eVTOL aircraft - Vertical Mag](https://verticalmag.com/features/archer-public-billion-valuation-united-airlines-orders-evtol-aircraft/)
  - [Archer Aviation Is Latest eVTOL Firm To Start Public Trading](https://www.flyingmag.com/archer-goes-public/)

### ch7-5: TRUE

- Speaker: Brett Adcock
- Claim: A SPAC (special purpose acquisition company) takes a company public through a reverse merger.
- TLDR: A SPAC is indeed a special purpose acquisition company that takes a private company public via a reverse merger, exactly as described.
- Explanation: Multiple financial sources, including Fidelity and the Cornell Legal Information Institute, confirm that a SPAC is a publicly traded shell company that merges with a private company to bring it public, a process widely described as a reverse merger (or de-SPAC transaction). The core description given by Adcock is accurate and standard financial terminology.
- Sources:
  - [SPACs explained | Fidelity](https://www.fidelity.com/learning-center/trading-investing/SPACs)
  - [Special Purpose Acquisition Company (SPAC) | Legal Information Institute](https://www.law.cornell.edu/wex/special_purpose_acquisition_company_(spac))
  - [Special-purpose acquisition company - Wikipedia](https://en.wikipedia.org/wiki/Special-purpose_acquisition_company)

### ch7-6: TRUE

- Speaker: Brett Adcock
- Claim: Brett Adcock had no prior hardware experience before starting Archer in 2018-2019, having previously worked in software.
- TLDR: Adcock's entire pre-Archer career was in software, from teen-built web companies to the Vettery talent marketplace sold in 2018.
- Explanation: Multiple sources confirm Adcock began building software companies at age 15-16, co-founded AI recruiting platform Vettery, and had zero hardware background before pivoting to eVTOL aviation in late 2018. He himself described spending six months cold-calling 300+ industry experts to learn about aviation from scratch after the Vettery acquisition.
- Sources:
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [High-tech entrepreneur Brett Adcock on Figure, Archer, and early success](https://newatlas.com/remarkable-people/brett-adcock-history/)

### ch7-7: INEXACT

- Speaker: Brett Adcock
- Claim: Major venture capital groups were not funding SpaceX, Tesla, or Rivian in their early stages.
- TLDR: The claim is broadly directionally true but overstated. DFJ (Draper Fisher Jurvetson) was an early VC backer of both SpaceX and Tesla, and Founders Fund invested in SpaceX in 2008.
- Explanation: Marquee firms like Sequoia and Andreessen Horowitz did not fund SpaceX or Tesla in their early stages, supporting Adcock's general point. However, DFJ invested in SpaceX's Series A (2002) and in Tesla's Series C (2006), and Founders Fund invested in SpaceX in 2008. Rivian's early funding did come from non-traditional VC sources (Sumitomo, Standard Chartered, Amazon, Ford), making that portion of the claim accurate. The blanket assertion that no major VC groups were involved in any of these companies early on is an oversimplification.
- Sources:
  - [Who Were the Early Investors in SpaceX? – Big Frame](https://bigframe.net/who-were-the-early-investors-in-spacex/)
  - [SpaceX Funding Rounds: Key Investors by Stage](https://spacexstock.com/spacex-funding-rounds-key-investors-by-stage/)
  - [Tesla shareholders from Start to IPO | Eqvista](https://eqvista.com/tesla-shareholders-from-start-to-ipo/)
  - [Rivian's mega-IPO was built with a truckload of private capital - PitchBook](https://pitchbook.com/news/articles/rivian-ipo-timeline-electric-vehicles)

### ch7-8: INEXACT

- Speaker: Brett Adcock
- Claim: The investment mandate for most venture capital firms in the Bay Area and Silicon Valley excludes hardware.
- TLDR: Most traditional Bay Area VCs do heavily favor software over hardware, but framing it as a formal 'mandate' overstates the case, and several top-tier firms (a16z, Sequoia) do invest in hardware and deep tech.
- Explanation: It is widely recognized in the VC industry that the majority of mainstream Silicon Valley VCs prefer software/SaaS due to better capital efficiency and scalability, and historically steered away from capital-intensive hardware. Several prominent Bay Area firms (Sapphire Ventures, Accel, Greylock) are explicitly software-focused. However, the word 'mandate' implies a formal, explicit policy, when in reality it is more of a de facto preference. Importantly, top-tier Bay Area VCs like Andreessen Horowitz and Sequoia Capital (which back Figure AI itself, per the video description) do invest in hardware and deep tech, qualifying the 'most VCs exclude hardware' framing.
- Sources:
  - [Raising Capital for Hardware: Top VCs, Trends & Global Resources - Visible.vc](https://visible.vc/blog/top-hardware-vcs/)
  - [Top 9 Venture Capital Firms Investing in Hardware Startups](https://www.rho.co/blog/vcs-in-hardware)
  - [Top 20 venture capital firms in Bay Area — 2025 update | Waveup](https://waveup.com/blog/top-vc-firms-san-francisco-bay-area/)

### ch7-9: FALSE

- Speaker: Brett Adcock
- Claim: As of approximately 6 months before the interview, no top venture capital firm in the US had invested in a humanoid robotics company.
- TLDR: By ~6 months before the March 2026 interview (i.e., September 2025), top VCs had already invested in humanoid/physical robotics. Sequoia Capital joined Physical Intelligence's $400M Series A in November 2024.
- Explanation: Sequoia Capital invested in Physical Intelligence (an AI robotics foundation model company) in November 2024, and again in its $600M Series B in November 2025. Figure AI also raised a $675M Series B in February 2024 from Microsoft, OpenAI Startup Fund, and NVIDIA. In context, Adcock appears to be describing the VC landscape circa 2022 when he was founding Figure, where the claim had more validity. As literally stated (6 months before the March 2026 interview), it is contradicted by evidence.
- Sources:
  - [Physical Intelligence raises $400M for foundation models for robotics - The Robot Report](https://www.therobotreport.com/physical-intelligence-raises-400m-for-foundation-models-for-robotics/)
  - [Physical Intelligence raises $600M to advance robot foundation models - The Robot Report](https://www.therobotreport.com/physical-intelligence-raises-600m-advance-robot-foundation-models/)
  - [Figure Raises $675M at $2.6B Valuation and Signs Collaboration Agreement with OpenAI](https://www.prnewswire.com/news-releases/figure-raises-675m-at-2-6b-valuation-and-signs-collaboration-agreement-with-openai-302074897.html)

### ch7-10: INEXACT

- Speaker: Brett Adcock
- Claim: Brett Adcock sold his previous company (Vettery) for $110 million.
- TLDR: Vettery was acquired by Adecco, but public reports cite the price as 'a little over $100 million,' not specifically $110 million.
- Explanation: TechCrunch reported in February 2018 that a source with knowledge of the deal said the price was 'a little over $100 million,' while Adecco never officially disclosed the exact terms. Adcock's figure of $110M could technically be consistent with 'a little over $100M,' but no verified source confirms the $110M figure specifically.
- Sources:
  - [Adecco Group acquires recruiting startup Vettery for $100M | TechCrunch](https://techcrunch.com/2018/02/20/adecco-acquires-vettery/)
  - [THE ADECCO GROUP ANNOUNCES ACQUISITION OF VETTERY](https://www.adeccogroup.com/our-group/media/press-releases/2018-the-adecco-group-announces-acquisition-of-vettery)

### ch7-11: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Archer's financing options were to raise $100 million privately at a $300-500 million valuation, or go public and raise $1 billion.
- TLDR: Archer's SPAC did target roughly $1 billion in proceeds (confirmed), but the private alternative of raising $100M at a $300-500M valuation is an internal deliberation with no public record.
- Explanation: Public filings confirm Archer merged with Atlas Crest SPAC targeting $1.1 billion in gross proceeds, ultimately receiving approximately $857.6 million after redemptions. This aligns with the 'go public and raise ~$1 billion' side of the claim. However, the specific terms of the rejected private financing option ($100M at a $300-500M valuation) are an internal business consideration that cannot be independently verified from any public source.
- Sources:
  - [Archer lands $1B order from United Airlines and a SPAC deal | TechCrunch](https://techcrunch.com/2021/02/10/archer-lands-1-1b-order-from-united-airlines-and-a-spac-deal/)
  - [Archer to go public at $3.8 billion valuation, United Airlines orders $1 billion worth of eVTOL aircraft - Vertical Mag](https://verticalmag.com/features/archer-public-billion-valuation-united-airlines-orders-evtol-aircraft/)

### ch7-12: INEXACT

- Speaker: Brett Adcock
- Claim: Archer went public and raised $1 billion.
- TLDR: Archer did go public via SPAC, but raised approximately $857.6 million, not a full $1 billion.
- Explanation: Archer Aviation merged with SPAC Atlas Crest Investment Corp. in September 2021, with gross proceeds initially projected at ~$1.1 billion. However, due to significant shareholder redemptions, the company ultimately received approximately $857.6 million. Rounding that to "$1 billion" overstates the actual amount raised.
- Sources:
  - [Archer to go public at $3.8 billion valuation, United Airlines orders $1 billion worth of eVTOL aircraft - Vertical Mag](https://verticalmag.com/features/archer-public-billion-valuation-united-airlines-orders-evtol-aircraft/)
  - [Archer lands $1B order from United Airlines and a SPAC deal | TechCrunch](https://techcrunch.com/2021/02/10/archer-lands-1-1b-order-from-united-airlines-and-a-spac-deal/)
  - [Electric Aircraft Startup Archer to Go Public Via SPAC Atlas Crest Investment; Shares Lift Off – IPO Edge](https://ipo-edge.com/electric-aircraft-startup-archer-to-go-public-via-spac-atlas-crest-investment/)

### ch7-13: INEXACT

- Speaker: Brett Adcock
- Claim: Archer Aviation was sued by Boeing and a startup founded by Larry Page, the co-founder of Google.
- TLDR: Archer was sued by Wisk Aero, a joint venture backed by Boeing and Larry Page's Kitty Hawk, not directly by Boeing itself. The core claim holds.
- Explanation: In April 2021, Wisk Aero sued Archer Aviation for trade secret theft and patent infringement. Wisk was created as a joint venture between Boeing and Kitty Hawk, the eVTOL startup founded by Google co-founder Larry Page. The lawsuit was filed by Wisk, not Boeing directly, though Boeing later filed separate patent suits. The settlement in 2023 included Boeing investing in Archer. Adcock's framing as 'basically like Boeing' captures the indirect relationship accurately enough, but it misrepresents Wisk's backers, Boeing and Page's startup, as two separate plaintiffs.
- Sources:
  - [Wisk Aero, Boeing and Archer settle litigation battle - Airport Technology](https://www.airport-technology.com/news/wisk-aero-boeing-and-archer-settle-litigation-battle/)
  - [Futuristic aircraft maker Archer seeks to dismiss competitor's lawsuit claiming theft of trade secrets](https://www.cnbc.com/2021/06/01/evtol-start-up-archer-seeks-to-dismiss-trade-secret-lawsuit.html)
  - [Wisk Aero, Archer, and Boeing Reach Agreement To Settle Litigation and Enter into Autonomous Flight Collaboration; Boeing Invests in Archer's Latest Funding Round](https://investors.archer.com/news/news-details/2023/Wisk-Aero-Archer-and-Boeing-Reach-Agreement-To-Settle-Litigation-and-Enter-into-Autonomous-Flight-Collaboration-Boeing-Invests-in-Archers-Latest-Funding-Round/default.aspx)

### ch7-14: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: The lawsuit against Archer Aviation received front-page coverage in The New York Times.
- TLDR: The New York Times did cover the Wisk vs. Archer lawsuit, but front-page placement cannot be confirmed.
- Explanation: Multiple sources confirm the NYT reported on the April 2021 Wisk lawsuit against Archer Aviation. However, no source confirms the article appeared on the front page, and a Flying Magazine reference to the NYT piece suggests it was a standard business news report rather than a front-page story. The front-page claim is Adcock's personal recollection and cannot be verified from available sources.
- Sources:
  - [Electric Aircraft Builders Wisk and Archer Battle Over Trade Secrets - FLYING Magazine](https://www.flyingmag.com/wisk-archer-patent-court/)
  - [Wisk Aero sues Archer Aviation for alleged patent infringement, trade secret theft | TechCrunch](https://techcrunch.com/2021/04/06/wisk-aero-sues-archer-aviation-for-alleged-patent-infringement-trade-secret-theft/)

### ch7-15: INEXACT

- Speaker: Brett Adcock
- Claim: Larry Page started a company called Kitty Hawk in the Bay Area approximately 10 years before the interview.
- TLDR: Larry Page backed Kitty Hawk financially but did not found it. Sebastian Thrun founded the company (originally as Zee.Aero) in 2010, which is roughly 16 years before the 2026 interview, not 10.
- Explanation: Kitty Hawk Corporation was founded in 2010 by Sebastian Thrun (originally under the name Zee.Aero), and was based in Palo Alto in the Bay Area. Larry Page was a major financial backer, not the founder. The claim that Page 'started' the company is inaccurate. The '10 years ago' figure is also off, as 2010 is approximately 16 years before this 2026 interview, though Adcock may have been speaking relative to Archer's founding (~2018).
- Sources:
  - [Kitty Hawk Corporation - Wikipedia](https://en.wikipedia.org/wiki/Kitty_Hawk_Corporation)
  - [Kittyhawk, the electric aircraft moonshot backed by Larry Page, is shutting down | TechCrunch](https://techcrunch.com/2022/09/21/kitty-hawk-the-electric-aircraft-moonshot-backed-by-larry-page-is-shutting-down/)

### ch7-16: INEXACT

- Speaker: Brett Adcock
- Claim: Kitty Hawk worked on electric VTOL aircraft for approximately 10 years.
- TLDR: Kitty Hawk actually operated for about 12 years (founded ~2010, shut down September 2022), not approximately 10.
- Explanation: Kitty Hawk was originally founded as Zee.Aero in 2010 and shut down in September 2022, making its operational lifespan roughly 12 years. Multiple sources describe it as lasting "more than 10 years." The core claim that Kitty Hawk spent a significant period working on eVTOL aircraft is correct, but "approximately 10 years" understates the actual duration.
- Sources:
  - [Kittyhawk, the electric aircraft moonshot backed by Larry Page, is shutting down | TechCrunch](https://techcrunch.com/2022/09/21/kitty-hawk-the-electric-aircraft-moonshot-backed-by-larry-page-is-shutting-down/)
  - [Flying taxi pioneer Kitty Hawk closes down after more than 10 years - AeroTime](https://www.aerotime.aero/articles/32221-flying-taxi-developer-kitty-hawk-closes-down)

### ch7-17: INEXACT

- Speaker: Brett Adcock
- Claim: Approximately 10 to 15 core Kitty Hawk employees joined Archer Aviation within its first 2 years.
- TLDR: The lawsuit confirms Archer hired roughly 10 former Wisk (Kitty Hawk spinoff) engineers, not 10-15 as Adcock claimed. The core story checks out.
- Explanation: Wisk Aero, the company formed from Larry Page's Kitty Hawk and Boeing, sued Archer in 2021 specifically citing the recruitment of 10 former Wisk engineers. Adcock's figure of '10 to 15' slightly overstates the documented number. Additionally, the employees technically came from Wisk Aero (Kitty Hawk's spinoff) rather than Kitty Hawk directly, though Adcock appears to use the names interchangeably.
- Sources:
  - [Wisk v. Archer: Inside a Bitter eVTOL Trade Secret Lawsuit](https://www.flyingmag.com/wisk-v-archer-inside-a-bitter-evtol-trade-secret-lawsuit/)
  - [Wisk Aero sues Archer Aviation for alleged patent infringement, trade secret theft | TechCrunch](https://techcrunch.com/2021/04/06/wisk-aero-sues-archer-aviation-for-alleged-patent-infringement-trade-secret-theft/)

### ch7-18: INEXACT

- Speaker: Brett Adcock
- Claim: Archer's aircraft is a 6,000-pound, 4-passenger piloted aircraft.
- TLDR: Archer Midnight carries 4 passengers plus a pilot, but its max gross takeoff weight is 6,500 lbs, not 6,000 lbs.
- Explanation: The passenger configuration (4 passengers, 1 pilot) is correct. However, the FAA airworthiness criteria and multiple aviation sources list the Archer Model M001 (Midnight) maximum gross takeoff weight at 6,500 lbs, not 6,000 lbs as stated. The core description is accurate but the weight figure is off by 500 lbs.
- Sources:
  - [Archer Aviation - Wikipedia](https://en.wikipedia.org/wiki/Archer_Aviation)
  - [Archer Aviation Midnight (production aircraft)](https://evtol.news/archer/)
  - [Federal Register: Airworthiness Criteria for Archer Aviation Model M001](https://www.federalregister.gov/documents/2024/05/24/2024-11192/airworthiness-criteria-special-class-airworthiness-criteria-for-the-archer-aviation-inc-model-m001)

### ch7-19: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Archer's aircraft has 24 degrees of freedom.
- TLDR: No public source confirms or denies the specific "24 degrees of freedom" figure for Archer's Midnight aircraft.
- Explanation: Public technical documentation confirms the Midnight has 6 tilting motors, 6 fixed lift motors, 4 flaperons, and 6 ruddervators (V-tail), plus variable pitch propellers. These actuated components could plausibly sum to around 24 DOF, but no official specification or technical publication states this exact number. Adcock co-founded Archer and would have direct knowledge, but the figure cannot be independently verified from open sources.
- Sources:
  - [Archer Aviation Midnight (production aircraft)](https://evtol.news/archer/)
  - [Archer Aviation - Wikipedia](https://en.wikipedia.org/wiki/Archer_Aviation)
  - [Archer | Midnight Aircraft](https://archer.com/aircraft)

### ch7-20: TRUE

- Speaker: Brett Adcock
- Claim: Archer's aircraft has 6 motors that tilt 90 degrees for vertical takeoff before transitioning to forward flight.
- TLDR: Archer's Midnight aircraft does have 6 leading-edge motors that tilt for vertical takeoff and transition to forward flight. This is part of its well-documented '12-tilt-6' configuration.
- Explanation: Archer's Midnight uses a '12-tilt-6' design: 6 front tilt-rotors on the wing's leading edge that tilt 90 degrees for VTOL and then rotate to horizontal for forward flight, plus 6 fixed rear rotors used only during vertical flight.
Adcock's description of 6 tilting leading-edge motors is accurate and consistent with Archer's official documentation.
- Sources:
  - [Archer Aviation - Archer Unveils its Production Aircraft, Midnight™](https://investors.archer.com/news/news-details/2022/Archer-Unveils-its-Production-Aircraft-Midnight/default.aspx)
  - [Archer Details Motor and Battery Design for the Midnight eVTOL Air Taxi](https://backend.ainonline.com/news-article/2022-11-18/archer-details-motor-and-battery-design-midnight-evtol-air-taxi)
  - [Archer Aviation Midnight (production aircraft)](https://evtol.news/archer/)

### ch7-21: INEXACT

- Speaker: Brett Adcock
- Claim: Archer's aircraft uses variable pitch propellers on its fan blades.
- TLDR: Archer's aircraft uses variable pitch on only 6 of its 12 propellers, not all of them. The other 6 rear propellers are fixed-pitch.
- Explanation: Archer's Midnight uses a proprietary '12-tilt-6' configuration: 6 tilting, five-bladed variable-pitch propellers on the front wing edge (used for lift and cruise) and 6 non-tilting, two-bladed fixed-pitch propellers on the rear wing edge (used only for hover). Adcock's statement that 'all' the fan blades have variable pitch overstates the design. However, variable pitch propellers are indeed a key feature of the aircraft's propulsion system.
- Sources:
  - [Archer Aviation Midnight (production aircraft)](https://evtol.news/archer/)
  - [Archer Aviation - Wikipedia](https://en.wikipedia.org/wiki/Archer_Aviation)
  - [Archer settles on four blades for its eVTOL lifting props - The Air Current](https://theaircurrent.com/aircraft-development/archer-settles-four-blades-evtol-lifting-props/)

### ch7-22: INEXACT

- Speaker: Brett Adcock
- Claim: Archer's aircraft operates at approximately 2,000 to 3,000 feet above ground level.
- TLDR: Archer's Midnight is designed for urban operations at 1,500 to 4,000 feet AGL, so 2,000-3,000 feet falls within that range but is a narrower slice of it.
- Explanation: Industry and Archer sources describe the Midnight eVTOL's intended urban operating altitude as 1,500 to 4,000 feet AGL. Adcock's figure of 2,000-3,000 feet sits inside that envelope and is a reasonable approximation, but the actual documented range is broader. In testing, the aircraft has reached up to 7,000 feet AGL.
- Sources:
  - [Archer's Midnight eVTOL aircraft reaches 7,000ft, in highest altitude flight so far - eVTOL Insights](https://evtolinsights.com/archers-midnight-evtol-aircraft-reaches-7000ft-in-highest-altitude-flight-so-far/)
  - [Archer's Midnight Flight Test Program Reaches Record Heights](https://news.archer.com/archers-midnight-flight-test-program-reaches-record-heights)

### ch7-23: INEXACT

- Speaker: Brett Adcock
- Claim: Traditional helicopters fly at altitudes of approximately 2,000 to 3,000 feet above ground level.
- TLDR: Helicopters typically cruise between 1,000 and 5,000 feet AGL, so 2,000-3,000 feet is plausible but only a subset of the actual range.
- Explanation: Multiple aviation sources confirm normal helicopter cruise altitude is roughly 1,000 to 5,000 feet AGL. Adcock's figure of 2,000 to 3,000 feet falls within that range but understates how wide the typical window is, making it an imprecise approximation rather than a precise rule.
- Sources:
  - [How High Can Helicopters Fly? (Altitude Guide & Records) | Helicopter Express](https://www.helicopterexpress.com/blog/how-high-can-helicopters-fly)
  - [How High Can Helicopters Fly? A Guide to Altitudes | Executive Flyers](https://executiveflyers.com/how-high-can-a-helicopter-fly/)

### ch7-24: TRUE

- Speaker: Brett Adcock
- Claim: There are approximately 1.5 billion installed cars in the world.
- TLDR: Estimates consistently place the global vehicle fleet at roughly 1.5 billion, aligning with Adcock's figure.
- Explanation: Multiple sources (OICA, IEA, Hedges & Company) estimate the global installed vehicle fleet at between 1.475 billion and 1.645 billion. Adcock's "billion and a half or so" is a reasonable, well-supported approximation.
- Sources:
  - [How Many Cars Are There In The World?](https://hedgescompany.com/blog/2021/06/how-many-cars-are-there-in-the-world/)
  - [This Is How Many Cars There Are In The World In 2024](https://www.hotcars.com/how-many-cars-are-there-in-the-world/)

### ch7-25: INEXACT

- Speaker: Brett Adcock
- Claim: Approximately 80 million cars are manufactured per year worldwide.
- TLDR: Global passenger car production is roughly 68 million per year, not 80 million. Total motor vehicles (including commercial trucks and buses) reach about 92-93 million.
- Explanation: According to OICA data, worldwide passenger car production in 2024 was approximately 67.7 million units, while total motor vehicle production (including commercial vehicles) was about 92.5 million. The figure of 80 million sits between these two measures and does not accurately reflect either category. Adcock's estimate overstates passenger car production by around 18%, though the broader point about the scale of annual production remains in the right order of magnitude.
- Sources:
  - [Production Statistics - International Organization of Motor Vehicle Manufacturers](https://oica.net/production-statistics/)
  - [Global Passenger Car Production in 2025: A Comprehensive Analysis - World ranking sites](https://statranker.org/economy/global-passenger-car-production-in-2025-a-comprehensive-analysis/)

### ch7-26: TRUE

- Speaker: Brett Adcock
- Claim: At the current rate of car production, it would take approximately 20 years to replace all cars in the world.
- TLDR: The math checks out. With roughly 1.5 billion cars and ~80-94 million produced annually, turnover takes approximately 16-19 years, which rounds to 'about 20 years.'
- Explanation: Current estimates put the global vehicle fleet at roughly 1.6 billion units. Annual global motor vehicle production reached about 94 million in 2023 (passenger cars alone closer to 70 million).
Using Adcock's stated figures of 1.5 billion cars divided by 80 million per year yields 18.75 years, which is a reasonable approximation of 'about 20 years.' The core claim is mathematically sound and consistent with publicly available data. - Sources: - [Number of Cars in the World 2025: Key Stats & Figures](https://autokunbo.com/number-of-cars-in-the-world-2025-key-stats-figures/) - [Production Statistics - International Organization of Motor Vehicle Manufacturers](https://oica.net/production-statistics/) - [Worldwide automobile production | Statista](https://www.statista.com/statistics/262747/worldwide-automobile-production-since-2000/) ### ch7-27: TRUE - Speaker: Brett Adcock - Claim: Waymo operates autonomous vehicles in Palo Alto, Menlo Park, and San Jose. - TLDR: Waymo does operate in Palo Alto, Menlo Park, and San Jose. Expansions to Palo Alto and Menlo Park began in June 2025, and San Jose was added in November 2025. - Explanation: Multiple sources confirm Waymo expanded to parts of Palo Alto and Menlo Park in June 2025, then further expanded across the entire Peninsula including San Jose in November 2025. The podcast was recorded and published in March 2026, so all three locations were active service areas by then. - Sources: - [Waymo expands self-driving taxi service to parts of Palo Alto, Menlo Park - Palo Alto Online](https://www.paloaltoonline.com/transportation/2025/06/17/waymo-expands-self-driving-taxi-service-to-parts-of-palo-alto-menlo-park/) - [Waymo will now drive on Bay Area freeways and in San José](https://sfstandard.com/2025/11/12/waymo-will-now-drive-bay-area-freeways-san-jos/) - [Waymo announces expansion across the Peninsula and onto freeways - Palo Alto Online](https://www.paloaltoonline.com/transportation/2025/11/13/waymo-announces-expansion-across-the-peninsula-and-onto-freeways/) ### ch7-28: INEXACT - Speaker: Brett Adcock - Claim: Tesla has approximately 10 million cars on the road. 
- TLDR: Tesla's cumulative deliveries through end of 2025 were approximately 8.9 million, not 10 million. The actual number still on the road is somewhat lower.
- Explanation: Multiple sources put Tesla's total cumulative vehicle deliveries from 2008 through 2025 at roughly 7.95 to 8.9 million units. With some vehicles no longer in service, the number actually on the road is below that figure. The claim of 10 million is an overestimate by roughly 10-25%, though it is the correct order of magnitude.
- Sources:
  - [Tesla Sales, Revenue & Production Statistics (Mar 2026)](https://tridenstechnology.com/tesla-sales-statistics/)
  - [2025 (Full Year) Global: Tesla Worldwide Car Sales by Model and Outlook 2026 to 2029 - Car Sales Statistics](https://www.best-selling-cars.com/brands/2025-full-year-global-tesla-worldwide-car-sales-by-model-and-outlook-2026-to-2029/)

### ch7-29: TRUE

- Speaker: Brett Adcock
- Claim: There are only approximately thousands of Waymo vehicles in operation globally.
- TLDR: Waymo's fleet sits at roughly 3,000 vehicles as of early 2026, which is indeed in the 'thousands' range as claimed.
- Explanation: Available reports consistently place Waymo's operational fleet at approximately 2,500-3,000 robotaxis around the time of the video's publication (March 2026). The speaker's characterization of 'thousands or so of Waymos' accurately reflects this scale, contrasting it with the over 1 billion total cars globally.
- Sources:
  - [Waymo Stats 2025: Funding, Growth, Coverage, Fleet Size & More](https://www.thedriverlessdigest.com/p/waymo-stats-2025-funding-growth-coverage)
  - [Waymo Statistics In 2026: Funding, Revenue & Rides Per Cities](https://awisee.com/blog/waymo-statistics/)
  - [Waymo - Wikipedia](https://en.wikipedia.org/wiki/Waymo)

### ch7-30: TRUE

- Speaker: Brett Adcock
- Claim: Archer Aviation currently flies its aircraft on a weekly basis in California.
- TLDR: Archer Aviation actively flies its Midnight eVTOL aircraft at its Salinas, California test facility, consistent with a regular weekly cadence.
- Explanation: Multiple sources confirm Archer has conducted extensive flight testing in Salinas, CA, including piloted flights exceeding 55 miles, high-altitude tests up to 10,000 feet, and public demonstrations at the 2025 California International Air Show. The company also performed over 400 autonomous flights in 2024. This all supports ongoing, frequent (weekly-level) flight operations in California.
- Sources:
  - [Archer's Midnight to Fly at 2025 California International Air Show Following Record Flight Test Achievements](https://investors.archer.com/news/news-details/2025/Archers-Midnight-to-Fly-at-2025-California-International-Air-Show-Following-Record-Flight-Test-Achievements/default.aspx)
  - [Archer Aviation begins piloted air taxi test flights | Smart Cities Dive](https://www.smartcitiesdive.com/news/archer-aviation-air-taxi-piloted-test-flight/750475/)
  - [Archer Aviation Q4, fiscal year 2025 results confirm on-track U.S., UAE Midnight pilot programs for 2026 | CompositesWorld](https://www.compositesworld.com/archer-aviation-q4-fiscal-year-2025-results-confirm-on-track-us-uae-midnight-pilot-programs-for-2026)

### ch7-31: TRUE

- Speaker: Brett Adcock
- Claim: To fly passengers commercially and charge money, Archer requires FAA type certification.
- TLDR: FAA type certification is indeed required before Archer can fly paying passengers commercially. This is a well-established regulatory requirement for eVTOL aircraft.
- Explanation: Under FAA regulations, any aircraft conducting commercial passenger operations for compensation must hold a type certificate. Archer has already secured Part 135, 145, and 141 operational certificates, but its Midnight aircraft still needs FAA type certification before revenue-generating passenger flights can begin. Multiple sources confirm this is the final and critical regulatory hurdle.
- Sources:
  - [Archer Receives FAA Certification to Begin Operating Commercial Airline](https://investors.archer.com/news/news-details/2024/Archer-Receives-FAA-Certification-to-Begin-Operating-Commercial-Airline/default.aspx)
  - [FAA Issues Final Airworthiness Criteria for Archer Midnight Air Taxi](https://www.flyingmag.com/faa-issues-final-airworthiness-criteria-for-archer-midnight-air-taxi/)
  - [The FAA Certification Race Enters the Endgame For Joby And Archer](https://lowaltitudeeconomy.aero/evtol-news-and-electric-aircraft-news/low-altitude-economy/faa-evtol-certification-joby-archer-endgame)

### ch7-32: TRUE

- Speaker: Brett Adcock
- Claim: Archer's target safety certification standard is 1 in 1 billion hours before a catastrophic event.
- TLDR: The 1-in-1-billion flight hours (10^-9) standard for catastrophic failures is the established FAA benchmark for commercial passenger aircraft and is what eVTOL manufacturers targeting passenger operations must meet.
- Explanation: FAA Advisory Circular AC 25.1309 defines catastrophic failure conditions as 'extremely improbable,' quantified as less than 1x10^-9 per flight hour (1 in 1 billion hours). This is the standard applied to transport-category commercial aircraft and is what eVTOL operators like Archer, which aim to carry passengers over cities, are required to certify to. Some debate exists about whether certain smaller eVTOLs could certify to a slightly lower bar (10^-8), but the 10^-9 figure Adcock cites is the correct target for the passenger-carrying use case he describes.
- Sources:
  - [Understanding 'one in a billion' in aircraft system safety assessments - Vertical Mag](https://verticalmag.com/opinions/understanding-aircraft-system-safety-assessments/)
  - [Special Report: The number at the center of an eVTOL safety debate - The Air Current](https://theaircurrent.com/industry-strategy/special-report-evtol-safety-continuum-10-9/)
  - [Federal Register: Airworthiness Criteria for Archer Aviation, Inc. Model M001 Powered-Lift](https://www.federalregister.gov/documents/2024/05/24/2024-11192/airworthiness-criteria-special-class-airworthiness-criteria-for-the-archer-aviation-inc-model-m001)

### ch7-33: INEXACT

- Speaker: Brett Adcock
- Claim: Aircraft certification in Europe is governed by EASA and in China by the CAA.
- TLDR: EASA correctly governs European aircraft certification, but China's regulator is the CAAC (Civil Aviation Administration of China), not the CAA.
- Explanation: The European Union Aviation Safety Agency (EASA) is indeed the correct regulatory body for aircraft certification in Europe. However, China's civil aviation authority is the CAAC (Civil Aviation Administration of China), not the CAA. The acronym CAA typically refers to the UK's Civil Aviation Authority. Adcock appears to have shortened CAAC to CAA, which is a minor but factually imprecise error.
- Sources:
  - [Civil Aviation Administration of China - Wikipedia](https://en.wikipedia.org/wiki/Civil_Aviation_Administration_of_China)
  - [Civil Aviation Administration of China (CAAC) | SKYbrary Aviation Safety](https://skybrary.aero/articles/civil-aviation-administration-china-caac)

### ch14-5: TRUE

- Speaker: Brett Adcock
- Claim: Figure robots can currently unload a full dishwasher, move laundry from a basket to a washer and run it, and fold laundry placed on a bed.
- TLDR: All three home tasks described by Adcock have been publicly demonstrated by Figure robots using the Helix AI system.
- Explanation: Figure 02 has been shown on video loading laundry from a hamper into a washing machine, and Figure's Helix system was demonstrated loading and unloading a dishwasher. In August 2025, Figure announced and released video of its robot folding laundry autonomously, calling it the first humanoid to do so with an end-to-end neural network. These public demos align with the capabilities Adcock described.
- Sources:
  - [New Helix Video Shows Robot Loading and Unloading Dishwasher Pretty Damn Well](https://futurism.com/future-society/figure-robot-loading-dishwasher)
  - [Helix Learns to Fold Laundry](https://www.figure.ai/news/helix-learns-to-fold-laundry)
  - [Another Robot Wants to Do Your Laundry—This Time It's Figure 02](https://www.newequipment.com/technology-innovations/fun-innovations-friday/blog/55307179/figure-ai-inc-another-robot-wants-to-do-your-laundrythis-time-its-figure-02)

### ch14-7: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: At Archer Aviation, Brett Adcock's personal safety standard was that he would never feel comfortable recommending people fly Archer until he would fly on it with his own kids.
- TLDR: This is a first-person statement about Adcock's personal internal safety standard, not a publicly documented fact that can be confirmed or denied.
- Explanation: The claim describes Brett Adcock's private personal benchmark for safety at Archer Aviation. No public interviews, articles, or social media posts were found corroborating or contradicting this specific standard. As a first-person account of his own feelings and personal bar, it is inherently unverifiable by third parties.
- Sources:
  - [Brett Adcock Official](https://www.brettadcock.com/)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)

### ch14-9: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Brett Adcock has children aged 1, 4, and 7 years old.
- TLDR: Adcock is publicly known to have three children, with his youngest born in October 2024, but the specific ages (1, 4, and 7) cannot be independently verified.
- Explanation: Public sources confirm Brett Adcock has three children, and an October 2024 X post announced the birth of a newborn boy. By March 2026 (podcast date), that child would be approximately 17 months old, roughly consistent with 'a 1-year-old.' The exact ages of the older two children (4 and 7) are private family details not documented in any accessible public source.
- Sources:
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [Brett Adcock on X](https://x.com/adcock_brett/status/1847123393994523090)

### ch12-1: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: At the time of Figure's first factory deployment, approximately half the software stack used traditional code and heuristics rather than neural networks.
- TLDR: This is an internal architectural detail about Figure's own codebase that no public source confirms or denies.
- Explanation: Brett Adcock is describing the internal composition of Figure's software stack at the time of their first BMW factory deployment, a proprietary technical detail not documented publicly. Public sources confirm Figure did evolve from a mixed traditional/neural-net architecture toward a fully end-to-end neural network system (Helix), which is consistent with the claim, but the specific 'approximately half' proportion is an internal data point that cannot be independently verified.
- Sources:
  - [Helix: A Vision-Language-Action Model for Generalist Humanoid Control](https://www.figure.ai/news/helix)
  - [Report: Figure Business Breakdown & Founding Story | Contrary Research](https://research.contrary.com/company/figure)

### ch12-2: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: The robot's walking controller at the time of their first factory deployment was implemented in C++.
- TLDR: This is an internal technical detail about Figure AI's proprietary software stack that has not been publicly documented.
- Explanation: No public source confirms or denies whether Figure AI's walking controller during its first BMW factory deployment used a C++ implementation. Brett Adcock's statement is a first-person account of private engineering decisions inside a private company. Figure AI's public communications describe their evolution toward full neural-network control (Helix), but do not detail the low-level programming language used in earlier locomotion controllers.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Introducing Helix 02: Full-Body Autonomy](https://www.figure.ai/news/helix-02)

### ch12-3: TRUE

- Speaker: Brett Adcock
- Claim: During their first factory deployment, perception stacks and object manipulation were handled by neural networks.
- TLDR: Figure AI publicly confirmed that their BMW factory deployment used neural networks for perception and object manipulation, consistent with Adcock's description.
- Explanation: According to Figure AI's own public statements about their BMW Spartanburg deployment, all manipulations were driven by neural networks mapping pixels directly to actions, and perception relied on an OpenAI vision-language model processing camera data at 10 Hz. This matches Adcock's claim that perception stacks and object manipulation were handled by neural nets while walking was still done via a traditional C++ controller.
- Sources:
  - [Watch: Figure's 01 humanoids now working at BMW's car plant in US](https://interestingengineering.com/innovation/us-figure-humanoid-start-operations-at-bmw-plant)
  - [The Future of Robotics in Manufacturing: BMW and Figure's Humanoid Collaboration | Reimagining the Future](https://frankdiana.net/2024/07/30/the-future-of-robotics-in-manufacturing-bmw-and-figures-humanoid-collaboration/)
  - [F.02 Contributed to the Production of 30,000 Cars at BMW](https://www.figure.ai/news/production-at-bmw)

### ch12-4: INEXACT

- Speaker: Brett Adcock
- Claim: Figure was first launching their factory deployment approximately one and a half years before this interview.
- TLDR: Figure's first factory work at BMW (initial trial) dates to early-to-mid 2024, roughly 1.5-2 years before the interview, while the full active deployment started closer to 1 year before. 'A year and a half' is a rough but reasonable approximation.
- Explanation: Figure AI signed a commercial agreement with BMW in January 2024 and conducted an initial trial at BMW's Spartanburg plant in early-to-mid 2024 (results published August 6, 2024), placing that first factory deployment about 1.5 to 2 years before the March 2026 interview. Full production deployment on an active assembly line launched around early 2025, which is closer to 1 year before the interview. Adcock's 'year and a half or so' sits between these two milestones, making it an approximation that is plausible but imprecise.
- Sources:
  - [F.02 Contributed to the Production of 30,000 Cars at BMW](https://www.figure.ai/news/production-at-bmw)
  - [Successful test of humanoid robots at BMW Group Plant Spartanburg](https://www.press.bmwgroup.com/global/article/detail/T0444265EN/successful-test-of-humanoid-robots-at-bmw-group-plant-spartanburg?language=en)
  - [Figure announces commercial agreement with BMW Manufacturing to bring general purpose robots into automotive production](https://www.prnewswire.com/news-releases/figure-announces-commercial-agreement-with-bmw-manufacturing-to-bring-general-purpose-robots-into-automotive-production-302036263.html)

### ch12-5: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: The old C++ walking controller would not perform well on challenging surfaces such as very shaggy carpet.
- TLDR: Brett is describing an internal limitation of Figure's own proprietary software during a live demo. This cannot be independently verified by third parties.
- Explanation: Public sources confirm Figure replaced over 109,000 lines of hand-engineered C++ with a neural network (System 0, part of Helix 02), validating the broader context. However, the specific claim that the old C++ controller would struggle on shaggy carpet is a first-person technical anecdote about internal, unreleased software behavior that no external source can confirm or deny.
- Sources:
  - [Introducing Helix 02: Full-Body Autonomy](https://www.figure.ai/news/helix-02)
  - [From Pixels to Torque: Figure Unveils Helix 02 and the Era of Whole-Body Autonomy | Humanoids Daily](https://www.humanoidsdaily.com/news/from-pixels-to-torque-figure-unveils-helix-02-and-the-era-of-whole-body-autonomy)

### ch12-6: TRUE

- Speaker: Brett Adcock
- Claim: About a year and a half ago, Figure decided to refactor their entire software stack into a neural network architecture.
- TLDR: Figure AI's public Helix 02 announcement confirms a full refactor to neural network architecture, replacing over 109,000 lines of hand-engineered code.
- Explanation: Figure announced Helix 02 in January 2026 (consistent with Adcock's 'end of last year, 2-3 months ago' remark). The announcement explicitly states that System 0 'replaces 109,504 lines of hand-engineered C++ with a single neural prior,' matching Adcock's claim of removing 'over 100,000 lines of code.' The decision to move entirely to neural networks aligns with the development timeline between the original Helix (February 2025) and Helix 02.
- Sources:
  - [Introducing Helix 02: Full-Body Autonomy](https://www.figure.ai/news/helix-02)
  - [Helix: A Vision-Language-Action Model for Generalist Humanoid Control](https://www.figure.ai/news/helix)

### ch12-7: INEXACT

- Speaker: Brett Adcock
- Claim: Figure announced Helix 2 approximately two to three months before this interview, at the end of the previous year.
- TLDR: Helix 2 was announced January 27, 2026, roughly 2 months before the interview. Brett's '2 or 3 months' estimate is accurate, but his 'end of last year' recollection is wrong.
- Explanation: Figure AI officially announced Helix 02 on January 27, 2026. The interview was published March 30, 2026, making the announcement approximately 2 months prior, consistent with Brett's '2 or 3 months ago' estimate. However, January 2026 is not 'end of last year' (i.e., end of 2025), making that secondary detail inaccurate.
- Sources:
  - [Introducing Helix 02: Full-Body Autonomy](https://www.figure.ai/news/helix-02)
  - [Figure AI Announces Helix 02 - a General-Purpose Humanoid System - Luigi Freda](https://www.luigifreda.com/2026/02/02/figure-ai-announces-helix-02-a-general-purpose-humanoid-system/)

### ch12-8: TRUE

- Speaker: Brett Adcock
- Claim: With Helix 2, the robot runs almost entirely on a neural network, with almost no traditional code remaining on the robot.
- TLDR: Confirmed. Figure's Helix 02 replaces over 100,000 lines of hand-engineered C++ code with a single neural network architecture running the full robot body.
- Explanation: Figure AI's official Helix 02 announcement states that System 0 replaces exactly 109,504 lines of hand-engineered C++ with a neural network, aligning with Adcock's claim of "almost 100,000 lines of code" removed. The architecture runs three stacked neural systems (S0, S1, S2) controlling everything from kilohertz-rate balance to high-level reasoning, with no task-specific code or fine-tuning required.
- Sources:
  - [Introducing Helix 02: Full-Body Autonomy](https://www.figure.ai/news/helix-02)
  - [Figure robot gets AI brain that enables human-like full-body control](https://interestingengineering.com/ai-robotics/figure-helix02-upgrades-humanoid-robot-control)

### ch12-9: TRUE

- Speaker: Brett Adcock
- Claim: When Figure launched Helix 2, they removed over 100,000 lines of code from the robot's software.
- TLDR: Confirmed. Figure's Helix 02 launch replaced 109,504 lines of hand-engineered C++ code with a single neural network, which is indeed "over 100,000 lines."
- Explanation: Multiple sources, including Figure AI's own official announcement, state that System 0 in Helix 02 eliminates 109,504 lines of hand-engineered C++ and replaces them with a neural network controlling full-body motion. Adcock's claim of "over 100,000 lines" is an accurate and conservative description of this figure.
- Sources:
  - [Introducing Helix 02: Full-Body Autonomy](https://www.figure.ai/news/helix-02)
  - [The End of C++: Brett Adcock on Helix 02 and Figure's Path to "Room-Scale" Autonomy | Humanoids Daily](https://www.humanoidsdaily.com/news/the-end-of-c-brett-adcock-on-helix-02-and-figure-s-path-to-room-scale-autonomy)

### ch12-10: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure's robots are currently running 24/7 shifts without stopping and without any faults for multiple consecutive days.
- TLDR: Figure AI's 24/7 robot operation is publicly confirmed, but the specific claim of running 'without any faults for days and days' is an internal performance metric that cannot be independently verified.
- Explanation: Public sources from early 2026 confirm Figure has deployed its Figure 03 fleet for 24/7 autonomous operation, including autonomous charging swaps. However, the fault-free duration claim (days of zero faults) is stated solely by CEO Brett Adcock about his own company's internal test data, with no third-party measurement available. Notably, Figure's own published system description includes a triage mechanism for when robots encounter faults, which implies faults do occur but are managed rather than absent entirely.
- Sources:
  - [Figure's Humanoid Robots Now Work 24/7, Never Calling in Sick | RoboHorizon Robot Magazine](https://robohorizon.com/en-us/news/2026/02/figures-humanoid-robots-now-work-247-never-calling-in-sick/)
  - [Humanoid Robotics Breakthroughs: Figure 03 Goes 24/7, Toyota Deploys 7 Digit Bots, MIT Adds Soft Robot 'Brain' – 2026 Analysis | AI News Detail](https://blockchain.news/ainews/humanoid-robotics-breakthroughs-figure-03-goes-24-7-toyota-deploys-7-digit-bots-mit-adds-soft-robot-brain-2026-analysis)

### ch12-11: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure recently set a record run time of almost a full week before a robot experienced a fault.
- TLDR: No public source confirms or denies a near-week fault-free run record at Figure AI. The closest documented milestone is 67 consecutive hours of autonomous operation.
- Explanation: Brett Adcock made this claim during a podcast recorded around March 30, 2026, describing a very recent internal test result. Public reporting covers earlier milestones (20-hour shifts, 67 consecutive autonomous hours, 5 months of daily BMW production runs), but none confirm a fault-free stretch approaching seven days. As an internal operational metric disclosed in real time, it cannot be independently verified or refuted with available sources.
- Sources:
  - [Figure Robot Reportedly Completes 20-Hour Continuous Shift at BMW Plant | Humanoids Daily](https://www.humanoidsdaily.com/news/figure-robot-reportedly-completes-20-hour-continuous-shift-at-bmw-plant)
  - [Figure's Humanoid Robots Now Work 24/7, Never Calling in Sick | RoboHorizon](https://robohorizon.com/en-us/news/2026/02/figures-humanoid-robots-now-work-247-never-calling-in-sick/)
  - [Figure's General Purpose Robot: Why 2026 Changes Everything](https://metatrends.substack.com/p/figures-general-purpose-robot-why)

### ch12-12: TRUE

- Speaker: Brett Adcock
- Claim: Figure's robots need to charge approximately every four to five hours of operation.
- TLDR: Figure's robots do run approximately 4-5 hours before needing to charge, consistent with official specs.
- Explanation: Both Figure 02 and Figure 03 have a battery runtime of roughly 5 hours per charge according to Figure AI's own published specifications. Adcock's stated range of '4 hours or so, 5 hours' matches this data.
- Sources:
  - [F.03 Battery Development](https://www.figure.ai/news/f-03-battery-development)
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [Figure 03 - Humanoid.guide](https://humanoid.guide/product/figure-03/)

### ch12-13: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: When a robot needs to charge, another robot swaps into its position and resumes work within 10 seconds.
- TLDR: Figure AI's autonomous robot swap/handoff for charging is confirmed, but the specific 10-second timeframe cannot be independently verified.
- Explanation: Multiple sources confirm that Figure AI robots autonomously hand off tasks to a second robot when one needs to charge, enabling 24/7 operation with no human intervention. However, the precise '10 seconds' figure is an internal operational metric stated only by Adcock himself in this interview, with no independent third-party source corroborating that specific timing.
- Sources:
  - [Brett Adcock: 24/7 autonomous robot operation challenges persist](https://tradersunion.com/news/billionaires/show/1527685-autonomous-robot-operations/)
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - ["I Fired Them": Brett Adcock on the OpenAI Split, Robot Self-Repair, and the Kid-Safety Test | Humanoids Daily](https://www.humanoidsdaily.com/news/i-fired-them-brett-adcock-on-the-openai-split-robot-self-repair-and-the-kid-safety-test)

### ch12-14: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure runs a logistics use case at their office in 24/7 shifts where robots move packages around.
- TLDR: Figure AI's logistics use case involving package movement is publicly confirmed, but the specific claim of 24/7 shifts running internally at their office cannot be independently verified.
- Explanation: Figure AI has publicly documented a logistics package-handling use case using their Helix AI system, confirming robots move and sort packages. However, the specific operational detail that this runs in 24/7 shifts at Figure's own office as a stress-test is an internal claim made by Adcock in first-person that no external source confirms or denies.
- Sources:
  - [Helix Accelerating Real-World Logistics](https://www.figure.ai/news/helix-logistics)
  - [Scaling Helix: a New State of the Art in Humanoid Logistics](https://www.figure.ai/news/scaling-helix-logistics)

### ch12-15: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Most current robot failures are software-related, occurring when the robot reaches a position where it feels unsafe and stops.
- TLDR: This is an internal operational observation about Figure AI's own robots that cannot be independently verified through public sources.
- Explanation: Brett Adcock is describing proprietary failure-rate data from Figure AI's internal 24/7 tests. While public sources confirm Figure AI runs robots continuously and that software edge cases are a known challenge in the industry, no external source breaks down Figure AI's specific failure modes as predominantly software-related. This is a first-person account of private operational data.
- Sources:
  - [Figure's Humanoid Robots Now Work 24/7, Never Calling in Sick | RoboHorizon Robot Magazine](https://robohorizon.com/en-us/news/2026/02/figures-humanoid-robots-now-work-247-never-calling-in-sick/)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch12-16: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure counts it as a failure if a robot is not back on the line within a couple of minutes.
- TLDR: This is an internal operational standard described by Figure's CEO in a first-person account. No public source confirms or denies this specific metric.
- Explanation: Brett Adcock is describing Figure's internal definition of a robot failure during continuous operation testing. Such internal performance thresholds are not publicly documented, and no indexed source corroborates or contradicts the specific 'couple of minutes' standard. As a first-person account of a private company process, it cannot be independently verified.

### ch12-17: TRUE

- Speaker: Brett Adcock
- Claim: Figure has greeter and visitor robots running 24/7 shifts at their office, including nights, weekends, and holidays.
- TLDR: Figure AI's 24/7 robot operation at their Sunnyvale HQ is independently confirmed, including nights, weekends, and Christmas Day. The greeter role is described by the CEO himself about his own company.
- Explanation: Multiple sources confirm that Figure AI has deployed a fleet of Figure 03 robots running autonomously 24/7 at their headquarters, with Adcock himself stating they operate 'even at 2 am, on weekends, or on Christmas Day.' The specific greeter and visitor interaction functionality is not independently documented by third parties, but it is a first-person account from the company's founder and CEO describing his own office operations, consistent with the confirmed HQ deployment.
- Sources:
  - [Figure's Humanoid Robots Now Work 24/7, Never Calling in Sick | RoboHorizon Robot Magazine](https://robohorizon.com/en-us/news/2026/02/figures-humanoid-robots-now-work-247-never-calling-in-sick/)
  - [Figure's 24/7 humanoid staff - Rundown AI](https://www.rundown.ai/articles/figures-24-7-humanoid-staff)

### ch15-1: INEXACT

- Speaker: Brett Adcock
- Claim: The Figure 3 humanoid robot weighs 135 pounds and stands 5 feet 6 inches tall.
- TLDR: The weight (135 lbs / 61 kg) is correct, but Figure AI's official page lists the height as 5'8", not 5'6".
- Explanation: According to Figure AI's own specifications page, Figure 03 weighs 61 kg (approximately 134.5 lbs, consistent with the 135 lb claim) and stands 5 feet 8 inches tall, not 5 feet 6 inches as stated. Third-party sources vary between 5'6" and 5'8", but the official source supports 5'8".
- Sources:
  - [Figure 03 | Figure](https://www.figure.ai/figure)
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)

### ch15-2: TRUE

- Speaker: Brett Adcock
- Claim: The Figure 3 is the third generation of robots built by Figure.
- TLDR: Figure 03 is confirmed as the third generation of humanoid robots from Figure AI, following Figure 01 (2023) and Figure 02 (2024).
- Explanation: Figure AI has publicly released three robot generations: Figure 01 (unveiled March 2023), Figure 02 (August 2024), and Figure 03 (October 2025). Multiple sources, including Figure AI's own announcement and industry outlets, explicitly label Figure 03 as the company's third-generation robot.
- Sources: - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03) - [Figure AI releases Figure 03, its third-generation humanoid robot - Robotics 24/7](https://www.robotics247.com/article/figure_ai_releases_figure_03_its_third_generation_humanoid_robot) - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI) ### ch15-3: INEXACT - Speaker: Brett Adcock - Claim: The Figure 3 robot features 5th generation hands equipped with cameras, tactile sensors, and improved grip. - TLDR: Figure 3's hands do feature palm cameras, tactile sensors, and improved grip, but the official announcement does not describe them as '5th generation' hands. - Explanation: Figure AI's official Figure 03 announcement confirms embedded palm cameras in each hand, proprietary fingertip tactile sensors (detecting forces as small as 3 grams), and improved grip via softer adaptive fingertips. However, the official documentation does not use the '5th generation hands' designation. That label appears to be Adcock's internal framing and cannot be verified from public sources. - Sources: - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03) - [Figure AI releases Figure 03, its third-generation humanoid robot - Robotics 24/7](https://www.robotics247.com/article/figure_ai_releases_figure_03_its_third_generation_humanoid_robot) ### ch15-4: TRUE - Speaker: Brett Adcock - Claim: The Figure 3 robot has additional onboard compute for running Figure's Helix neural network. - TLDR: Figure 03 does include additional onboard compute specifically for running the Helix neural network. It features dual embedded low-power GPUs dedicated to Helix inference. - Explanation: Figure AI's official announcement of Figure 03 and Helix confirms that Helix runs entirely on onboard embedded GPUs, with each robot equipped with dual low-power GPUs split between the high-level (S2) and low-level control (S1) systems. 
This is highlighted as a key differentiator, enabling real-time inference without cloud dependency.
- Sources:
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [Helix: A Vision-Language-Action Model for Generalist Humanoid Control](https://www.figure.ai/news/helix)

### ch15-5: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: The Figure 3 robot has 40 joints, all driven by electric motors.
- TLDR: No public source confirms exactly 40 joints for the Figure 3 robot. The predecessor Figure 02 had 35 degrees of freedom.
- Explanation: Figure AI has not released a public spec sheet with an exact joint count for Figure 03. The official announcement page and Wikipedia only detail its camera, tactile, and AI systems, not a total joint number. The prior generation (Figure 02) had 35 DOF, so 40 for Figure 03 is plausible but cannot be independently confirmed.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)

### ch15-6: TRUE

- Speaker: Brett Adcock
- Claim: The robot's torso houses a battery, GPUs, a computer, and power distribution components.
- TLDR: Figure AI's own documentation confirms the F.03 torso houses a battery pack, dual NVIDIA RTX GPU modules, and onboard compute, consistent with Adcock's description.
- Explanation: Figure AI's official announcement for Figure 03 states the torso centralizes both the 2.25 kWh battery pack and dual GPU compute modules. Power distribution and battery management systems are also integral to the torso design. Adcock's description of battery, GPUs, computer, and power distribution components in the torso matches the published specifications.
- Sources:
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [F.03 Battery Development](https://www.figure.ai/news/f-03-battery-development)

### ch15-7: INEXACT

- Speaker: Brett Adcock
- Claim: All of the Figure 3 robot's walking and movements are controlled by a neural network, with no traditional code involved.
- TLDR: Figure AI does use a neural network as the primary controller for the robot's locomotion, but the claim that there is 'no code' is an oversimplification.
- Explanation: Figure AI's robots, including the Figure 03, use an end-to-end neural network (trained via reinforcement learning) as the core locomotion controller, which is consistent with Adcock's claim. However, Figure AI's own technical documentation acknowledges that low-level traditional feedback control (kHz-rate closed-loop torque control) is also present to handle actuator modeling errors and sim-to-real transfer. So while the high-level walking policy is neural network-driven, the system is not entirely free of conventional code.
- Sources:
  - [Natural Humanoid Walk Using Reinforcement Learning](https://www.figure.ai/news/reinforcement-learning-walking)
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [Introducing Helix 02: Full-Body Autonomy](https://www.figure.ai/news/helix-02)

### ch15-8: TRUE

- Speaker: Brett Adcock
- Claim: The Figure 3 robot has cameras in the palms of its hands and a tactile sensor in every single fingertip.
- TLDR: Figure 3 does have cameras embedded in each palm and tactile sensors on the fingertips, as confirmed by Figure AI's official announcement.
- Explanation: The official Figure AI page for Figure 03 states: 'Each hand now integrates an embedded palm camera with a wide field of view and low-latency sensing.' It also describes custom in-house fingertip tactile sensors capable of detecting forces as small as three grams, with the phrasing 'each fingertip sensor' indicating all fingertips are equipped.
Adcock's description matches the official specifications.
- Sources:
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [Figure 03: Humanoid robot debuts with wireless charging and ultra-sensitive tactile palms](https://www.notebookcheck.net/Figure-03-Humanoid-robot-debuts-with-wireless-charging-and-ultra-sensitive-tactile-palms.1134940.0.html)

### ch15-9: TRUE

- Speaker: Brett Adcock
- Claim: The Figure 3 robot can pick up 40-pound boxes off the floor and can also fold a t-shirt.
- TLDR: Both claims are consistent with documented Figure 03 capabilities. Its ~20 kg (~44 lb) payload covers 40-lb boxes, and t-shirt folding has been publicly demonstrated.
- Explanation: Multiple sources confirm the Figure 03 has a payload capacity of approximately 20 kg (~44 lbs), making 40-pound lifts well within spec. T-shirt folding has been demonstrated by the robot, as reported by Time and TechRadar, though sources note it still struggles with the task. The claim reflects real capabilities, even if t-shirt folding remains imperfect.
- Sources:
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [Figure 03 Is The Robot in Your Kitchen](https://time.com/7324233/figure-03-robot-humanoid-reveal/)
  - [Figure 03 robot cleans, folds, serves, and looks just like the future – and I can't decide if I'm excited or terrified](https://www.techradar.com/ai-platforms-assistants/figure-03-might-be-the-home-robot-that-changes-everything-if-it-ever-goes-on-sale)
  - [Figure 03 - Humanoid.guide](https://humanoid.guide/product/figure-03/)

### ch15-10: INEXACT

- Speaker: Brett Adcock
- Claim: The Figure 3 robot's battery lasts anywhere from 4 to 5 hours depending on the task.
- TLDR: Figure AI officially specs the F.03 at ~5 hours of run time, not a range of 4 to 5 hours. Adcock's on-the-fly estimate is close but slightly undersells the spec.
- Explanation: Figure AI's own battery development page and product specifications list the Figure 03 battery run time as approximately 5 hours on a 2.3 kWh pack. Brett Adcock's quoted range of '4 to 5 hours depending on the task' is a reasonable approximation, and task-dependent variation is plausible, but the published spec is a single figure of ~5 hours rather than a 4-to-5-hour range.
- Sources:
  - [F.03 Battery Development](https://www.figure.ai/news/f-03-battery-development)
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [Figure 03 - Humanoid.guide](https://humanoid.guide/product/figure-03/)

### ch15-11: TRUE

- Speaker: Brett Adcock
- Claim: The Figure 3 robot takes about one hour to charge.
- TLDR: Figure 03 charges at 2 kW with a 2.3 kWh battery, putting full charge time at roughly 1 hour. Runtime of ~5 hours is also confirmed.
- Explanation: Official Figure AI specifications show a 2.3 kWh battery pack and a 2 kW inductive wireless charging system embedded in the robot's feet. Dividing capacity by charge rate gives approximately 1.15 hours to full charge, consistent with Adcock's 'about an hour' statement. The ~5-hour runtime figure is likewise confirmed across multiple sources.
- Sources:
  - [F.03 Battery Development](https://www.figure.ai/news/f-03-battery-development)
  - [Figure 03 - Humanoid.guide](https://humanoid.guide/product/figure-03/)
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)

### ch15-12: TRUE

- Speaker: Brett Adcock
- Claim: The Figure 3 robot charges wirelessly through inductive charging pads via its feet, at approximately 2 kilowatts total (about 1 kilowatt per foot).
- TLDR: Confirmed. Figure 3 charges wirelessly via inductive coils in its feet at 2 kW total.
- Explanation: Multiple sources, including Figure AI's official announcement and tech coverage, confirm that Figure 03 features inductive charging coils built into its feet, enabling wireless charging at 2 kW simply by stepping onto a charging pad. The total 2 kW figure (consistent with ~1 kW per foot) and the foot-based inductive mechanism match the claim exactly.
- Sources:
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [Figure 03: Humanoid robot debuts with wireless charging and ultra-sensitive tactile palms](https://www.notebookcheck.net/Figure-03-Humanoid-robot-debuts-with-wireless-charging-and-ultra-sensitive-tactile-palms.1134940.0.html)

### ch15-13: INEXACT

- Speaker: Brett Adcock
- Claim: Figure's commercial customers include BMW, one of the largest logistics companies in the world, and Brookfield, described as one of the largest real estate companies in the world.
- TLDR: BMW and Brookfield are confirmed Figure AI partners, but Brookfield is primarily an alternative asset manager/investor, not simply a real estate company, and is not a straightforward commercial customer.
- Explanation: Figure AI's commercial agreement with BMW Manufacturing is well documented, including active robot deployments at BMW's Spartanburg, SC plant. Brookfield is confirmed as a strategic partner and Series C investor in Figure, with plans for data collection and potential robot deployment across Brookfield properties. However, Brookfield is most accurately described as one of the world's largest alternative asset managers (over $1 trillion AUM), not simply a real estate company, and its relationship with Figure is partly an investment, not purely a commercial customer arrangement.
- Sources:
  - [Figure announces commercial agreement with BMW Manufacturing to bring general purpose robots into automotive production](https://www.prnewswire.com/news-releases/figure-announces-commercial-agreement-with-bmw-manufacturing-to-bring-general-purpose-robots-into-automotive-production-302036263.html)
  - [Figure Announces Strategic Partnership with Brookfield](https://humanoidroboticstechnology.com/industry-news/figure-announces-strategic-partnership-with-brookfield/)
  - [Figure AI partners with Brookfield to develop humanoid pre-training dataset - The Robot Report](https://www.therobotreport.com/brookfield-partners-figure-ai-develop-humanoid-pre-training-dataset/)

### ch15-14: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure has two additional customers to be announced within approximately 60 days.
- TLDR: This is a forward-looking claim made on March 30, 2026. The 60-day window has not yet elapsed, so the announcement cannot be confirmed or denied.
- Explanation: Adcock stated during the podcast that Figure has two additional customers to be announced within roughly 60 days (by late May 2026). As of March 31, 2026, no such announcements have been publicly reported. The existing customers he named (BMW, a major logistics company, Brookfield) are consistent with publicly known information, but the future customer claim cannot yet be verified.
- Sources:
  - [News | Figure](https://www.figure.ai/news)
  - [Meet Figure AI: The company behind the humanoid robot hosted by Melania Trump](https://www.cnbc.com/2026/03/26/figure-ai-the-robotics-company-hosted-by-melania-trump.html)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch15-15: INEXACT

- Speaker: Brett Adcock
- Claim: Figure manufactures its robots on-site at its production facility called Baku, producing approximately one robot every 90 minutes.
- TLDR: The facility is called 'BotQ,' not 'Baku' (a transcription error).
The 90-minute production rate is the CEO's own claim and is not independently verified.
- Explanation: Figure AI's on-site manufacturing facility is publicly named 'BotQ,' located on the company's San Jose campus. The transcript's 'Baku' is almost certainly an auto-transcription mishearing of 'BotQ.' BotQ's stated maximum capacity is up to 12,000 humanoids per year, but no public source independently confirms a current cycle time of one robot every 90 minutes. That figure comes solely from Brett Adcock's own statement in this interview.
- Sources:
  - [BotQ: A High-Volume Manufacturing Facility for Humanoid Robots](https://www.figure.ai/news/botq)
  - [Figure AI unveils BotQ high-volume humanoid manufacturing facility - The Robot Report](https://www.therobotreport.com/figure-ai-unveils-botq-high-volume-humanoid-manufacturing-facility/)

### ch15-16: FALSE

- Speaker: Brett Adcock
- Claim: Figure's current manufacturing facility has a maximum capacity of approximately 40,000 to 50,000 robots per year.
- TLDR: Figure AI's own announcement states BotQ's first-generation line capacity is 12,000 humanoids per year, not 40,000-50,000.
- Explanation: Figure AI officially announced BotQ's first-generation manufacturing line at a capacity of up to 12,000 robots per year, with a goal of reaching 100,000 total over four years. No public source, including Figure AI's own communications, supports a current facility capacity of 40,000 to 50,000 units per year. The figure cited by Adcock is more than three times the publicly stated maximum for the existing line.
- Sources:
  - [BotQ: A High-Volume Manufacturing Facility for Humanoid Robots](https://www.figure.ai/news/botq)
  - [Figure AI unveils BotQ high-volume humanoid manufacturing facility - The Robot Report](https://www.therobotreport.com/figure-ai-unveils-botq-high-volume-humanoid-manufacturing-facility/)

### ch15-17: UNSUBSTANTIATED

- Speaker: Brett Adcock
- Claim: Figure's goal is to reach one million robot units produced per year within the current decade.
- TLDR: Adcock stated this goal in the podcast itself, but no official Figure AI documents or prior interviews corroborate a specific target of one million units per year by decade's end.
- Explanation: Figure AI's published Master Plan contains no specific production volume targets. Other public statements by Adcock reference much lower near-term goals (50,000 units/year at full current facility capacity, 100,000 over several years). He has publicly described reaching 'a million robots in market' as a key competitive milestone (November 2025), but that referred to cumulative deployed units, not an annual production rate. The one-million-units-per-year figure within the decade appears to be an aspirational figure stated for the first time in this podcast, without corroboration from official company documents.
- Sources:
  - [Master Plan | Figure](https://www.figure.ai/master-plan)
  - [Figure AI Plans to Ship 100,000 Humanoid Robots Over Next Few Years](https://www.iotworldtoday.com/robotics/figure-ai-plans-to-ship-100-000-humanoid-robots-over-next-few-years)
  - [Humanoid Robots – Awakening](https://alphatarget.com/blog/humanoid-robots-awakening/)

### ch15-18: TRUE

- Speaker: Brett Adcock
- Claim: Over one billion phones are sold globally per year.
- TLDR: Global phone sales have consistently exceeded 1 billion units per year for well over a decade.
- Explanation: Data from multiple sources confirms that global smartphone shipments alone have ranged from roughly 1.24 to 1.5 billion units annually in recent years, with all mobile phones combined reaching even higher figures. The claim that over a billion phones are sold globally per year is well supported.
- Sources:
  - [Global Smartphone Production Reaches 1.25 Billion Units in 2025, with Apple and Samsung Tied for Top Spot, Says TrendForce](https://www.trendforce.com/presscenter/news/20260309-12956.html)
  - [Smartphone sales worldwide 2007-2023 | Statista](https://www.statista.com/statistics/263437/global-smartphone-sales-to-end-users-since-2007/)
  - [How Many Mobile Phones Are Sold Each Year 2009-2025](https://www.sellcell.com/how-many-mobile-phones-are-sold-each-year/)

### ch15-19: TRUE

- Speaker: Brett Adcock
- Claim: Figure achieved a walking robot within three years of the company's founding.
- TLDR: Figure AI was founded in May 2022 and achieved a walking robot well within three years. Figure 01 first walked in May 2023 (~1 year in), and Figure 3 was demonstrated internally around year 3 (2025).
- Explanation: Figure AI was founded in May 2022. Its first robot, Figure 01, took its first steps in approximately May 2023, roughly 1 year after founding. Figure 03, the robot being toured in this podcast episode, was publicly announced in October 2025 (~3.5 years after founding), with internal walking demonstrations consistent with Adcock's claim of 'the week of year 3' (around May 2025). The core claim that Figure achieved a walking robot within three years is accurate.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [Figure AI raises whopping $675M to commercialize humanoids](https://www.therobotreport.com/figure-ai-raises-675m-to-commercialize-humanoids/)

### ch15-20: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure has been developing its robot hands for close to four years.
- TLDR: Figure was founded in 2022, making 'close to 4 years' of hand development plausible, but the exact start date of hand R&D is an internal detail not publicly documented.
- Explanation: The claim is a first-person assertion about an internal development timeline. Figure AI was founded in 2022, and the video was published in March 2026 (roughly 4 years later), so the timeframe is consistent with the company's existence. However, no public source confirms specifically when Figure began its hand development program, making the precise claim unverifiable.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)

### ch15-21: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure's next-generation hand has as many joints as a human hand, which Brett Adcock described as achieving full human-level dexterity.
- TLDR: No indexed source confirms Figure's next-gen hand has as many joints as a human hand (27). Adcock himself hedges on the 'full human-level dexterity' claim in the same breath.
- Explanation: The human hand has 27 joints, per well-established anatomy. Figure 02's hands have 16 degrees of freedom, and while a next-gen hand has been mentioned (Adcock references a 7th iteration), no publicly indexed source confirms its joint count matches a human hand's 27.
Adcock's own language in the transcript is tentative ('I think, I think it's full human level dexterity...there's still a lot of work to go do'), making even the 'full dexterity' assertion self-qualified rather than a firm factual claim.
- Sources:
  - [Hand - Wikipedia](https://en.wikipedia.org/wiki/Hand)
  - [News | Figure](https://www.figure.ai/news)
  - [Figure 02 Review (2026): Specs, Price & Performance](https://blog.robozaps.com/b/figure-02-review)

### ch15-22: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: The Figure 3 robot can balance on one leg better than a human can.
- TLDR: This is an unverified promotional claim by Figure AI's CEO with no independent testing or demonstration found to confirm or deny it.
- Explanation: Brett Adcock's assertion that Figure 3 balances on one leg better than a human is a first-party claim made during a promotional interview. No independent benchmark, peer-reviewed test, or third-party demonstration of this specific capability was found. The official Figure 03 product announcement does not mention one-leg balance or any human comparison for balance.
- Sources:
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [Introducing Helix 02: Full-Body Autonomy](https://www.figure.ai/news/helix-02)

### ch13-1: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure is working on deep memory systems for robots so they know who they are talking to.
- TLDR: Adcock is describing his own company's internal R&D, which cannot be independently verified beyond his own public statements.
- Explanation: Brett Adcock has publicly stated that memory AI and persistent, personalized voice agents are a key focus for Figure in 2026, which is consistent with this claim.
However, the specific detail about robots identifying individual users (knowing whether they are talking to Shawn or Brett) is only attested by Adcock himself in this and other public appearances, with no independent third-party confirmation of the feature's development status.
- Sources:
  - [Figure CEO predicts big for humanoids, eVTOLs, memory AI in 2026](https://interestingengineering.com/ai-robotics/science-fiction-become-reality-in-2026)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch13-2: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure's robots identify users and apply different permission levels based on who is interacting with them, with visitors only able to request basic things like coffee or water.
- TLDR: Brett Adcock describes a user identification and tiered permission system for Figure's robots, but no independent public documentation confirms this specific feature exists.
- Explanation: The claim is a first-party description by Figure's CEO of how their robots are designed to handle identity and permissions. No public technical documentation, press coverage, or third-party source confirms that Figure's robots currently implement face/user recognition tied to tiered command permissions (e.g., visitors limited to coffee or water requests). The concept is consistent with Figure's home-use roadmap but cannot be independently verified.
- Sources:
  - [Figure will start 'alpha testing' its humanoid robot in the home in 2025 | TechCrunch](https://techcrunch.com/2025/02/27/figure-will-start-alpha-testing-its-humanoid-robot-in-the-home-in-2025/)
  - [Figure's humanoid robot takes voice orders to help around the house | TechCrunch](https://techcrunch.com/2025/02/20/figures-humanoid-robot-takes-voice-orders-to-help-around-the-house/)
  - [Figure](https://www.figure.ai/)

### ch13-4: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Facial recognition is the primary authentication method Figure plans to use for robots, with fingerprint scanning also being a possibility.
- TLDR: The claim accurately reflects Adcock's statements in the podcast, but Figure has no public documentation confirming facial recognition as a planned primary authentication method for its robots.
- Explanation: No public announcements, press releases, or technical documentation from Figure AI confirm that facial recognition is the primary planned authentication method for their humanoid robots. The claim reflects Adcock's own forward-looking statements made in this interview about aspirational plans, which are not corroborated by any independently verifiable external source.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)

### ch13-5: TRUE

- Speaker: Brett Adcock
- Claim: Speech is the primary intended modality for interacting with Figure robots, and Figure is spending significant development time on it.
- TLDR: Figure AI publicly confirms speech/language is a major interaction focus, backed by hardware upgrades in Figure 03.
- Explanation: Figure 03's announcement confirms significant investment in speech interaction, featuring an upgraded audio system with a speaker four times more powerful and a repositioned microphone for 'real-time speech-to-speech.'
Figure's Helix AI is described as a 'vision-language-action' model, with language as a core pillar. Brett Adcock's statement that 'language is a super important UI' and that they're 'spending a lot of time on speech' is consistent with Figure's public product direction.
- Sources:
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch13-6: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Every Figure robot off the production line has 5G enabled by default.
- TLDR: T-Mobile Ventures is confirmed as a Figure AI Series C investor, but the specific claim that every Figure robot ships with 5G enabled by default cannot be independently verified.
- Explanation: Figure AI's Series C announcement confirms T-Mobile Ventures as an investor, supporting the context Brett Adcock provided. However, no public documentation, press release, or third-party source confirms the specific product specification that every Figure robot leaves the production line with 5G enabled by default. This is an internal product detail asserted only by the CEO himself, with no independent corroboration available.
- Sources:
  - [Figure Exceeds $1B in Series C Funding at $39B Post-Money Valuation](https://www.figure.ai/news/series-c)
  - [Figure AI passes $1B with Series C funding toward humanoid robot development - The Robot Report](https://www.therobotreport.com/figure-ai-raises-1b-in-series-c-funding-toward-humanoid-robot-development/)

### ch13-7: UNSUBSTANTIATED

- Speaker: Brett Adcock
- Claim: T-Mobile is an investor in Figure, and every Figure robot has a T-Mobile 5G eSIM card.
- TLDR: T-Mobile Ventures is confirmed as a Figure AI investor, but the claim that every Figure robot contains a T-Mobile 5G eSIM card cannot be verified from public sources.
- Explanation: T-Mobile Ventures participated in Figure AI's Series C funding round (exceeding $1B at a $39B valuation), confirming the investor relationship.
However, no public source independently confirms the specific technical detail that every Figure robot is equipped with a T-Mobile 5G eSIM card. The claim comes from Brett Adcock himself as CEO and is plausible given T-Mobile's stated strategy of powering physical AI through 5G infrastructure, but it remains unverifiable without third-party confirmation.
- Sources:
  - [T-Mobile joins Bezos in backing '$39bn' robots biz Figure AI | TelcoTitans](https://www.telcotitans.com/deutsche-telekomwatch/t-mobile-joins-bezos-in-backing-39bn-robots-biz-figure-ai/9673.article)
  - [Figure AI passes $1B with Series C funding toward humanoid robot development - The Robot Report](https://www.therobotreport.com/figure-ai-raises-1b-in-series-c-funding-toward-humanoid-robot-development/)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch13-8: TRUE

- Speaker: Brett Adcock
- Claim: Figure uses 5G as the primary network for its systems to communicate with and send commands to robots.
- TLDR: T-Mobile Ventures is a confirmed Series C investor in Figure AI, and Figure 03 robots include 5G mmWave connectivity (up to 10 Gbps) for fleet communication and data sync.
- Explanation: Figure AI's Series C funding round (2025) lists T-Mobile Ventures as a named investor. Figure 03's published specs confirm integrated 5G mmWave connectivity, described as enabling high-bandwidth data transfer between robots and the platform. Brett Adcock's account of using 5G as the primary network and each robot having a T-Mobile eSIM is consistent with all publicly available evidence, though the eSIM detail specifically is not independently documented outside his own statements.
- Sources:
  - [Figure Exceeds $1B in Series C Funding at $39B Post-Money Valuation](https://www.figure.ai/news/series-c)
  - [Figure 03: Price, Details, Review 2026 | Full Humanoid Specs](https://www.originofbots.com/robot/figure-03-by-figure-ai-details-specifications-rating)
  - [What is Figure 03: Humanoid robot for home and business](https://informatecdigital.com/en/What-is-Figure-03-Figure-AI-Humanoid-Robot-Guide/)

### ch13-9: TRUE

- Speaker: Brett Adcock
- Claim: Figure's robots can gain new capabilities through software updates without hardware changes, similar to downloading apps on a phone.
- TLDR: Figure AI explicitly designs its robots so new capabilities are delivered via software (AI model) updates to existing hardware, which is publicly documented.
- Explanation: Figure AI's Helix Vision-Language-Action model lets robots acquire new skills through neural network weight updates rather than physical modifications. Helix 02, released in January 2026, expanded full-body control on the same Figure 02 hardware, enabling tasks like dishwasher loading and laundry sorting. This software-first architecture directly matches Adcock's phone-app analogy.
- Sources:
  - [Helix: A Vision-Language-Action Model for Generalist Humanoid Control](https://www.figure.ai/news/helix)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch13-10: TRUE

- Speaker: Brett Adcock
- Claim: Figure updates robot capabilities by training their Helix neural net and uploading new neural net weights to the robot, allowing the same robot to switch between tasks like towel folding and package logistics.
- TLDR: Figure AI's Helix is a real neural network model whose weights are updated and uploaded to robots, enabling the same robot to switch between tasks like household chores and package logistics.
- Explanation: Figure AI's own documentation confirms that Helix uses a single set of neural network weights (no task-specific fine-tuning) that runs fully onboard the robot.
Figure has publicly described updating Helix for logistics work (scaling Helix for logistics) and household tasks, with the same weights enabling multiple task types. The 'uploading new weights' mechanism Adcock describes matches the company's published architecture and update process.
- Sources:
  - [Helix: A Vision-Language-Action Model for Generalist Humanoid Control](https://www.figure.ai/news/helix)
  - [Scaling Helix: a New State of the Art in Humanoid Logistics](https://www.figure.ai/news/scaling-helix-logistics)
  - [Figure humanoid robots use Helix VLA model to demonstrate household chores - The Robot Report](https://www.therobotreport.com/figure-humanoid-robots-demonstrate-helix-model-household-chores/)

### ch13-11: TRUE

- Speaker: Brett Adcock
- Claim: Figure will deploy robots to businesses before consumers, because the engineering complexity required is proportional to the variability encountered on site.
- TLDR: Figure AI does target business deployment before consumers. Adcock's reasoning about complexity scaling with variability is consistent with Figure's publicly stated strategy.
- Explanation: Figure's official master plan confirms a B2B-first approach targeting manufacturing, logistics, warehousing, and retail before the home consumer market. The company frames this as starting in structured, repetitive environments before tackling more unpredictable ones, which aligns with Adcock's explanation that engineering complexity is proportional to on-site variability. Figure's partnership with BMW and stated goal of deploying in consumer homes only after commercial success further support the claim.
- Sources:
  - [Master Plan | Figure](https://www.figure.ai/master-plan)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Report: Figure Business Breakdown & Founding Story | Contrary Research](https://research.contrary.com/company/figure)

### ch13-13: TRUE

- Speaker: Brett Adcock
- Claim: Highway driving for autonomous vehicles happened sooner than city driving because the variability is lower, and the same dynamic applies to industrial robotics preceding home robotics.
- TLDR: Both parts of the analogy are accurate and well-established. Highway AV deployment preceded urban deployment due to lower variability, and industrial robots have long preceded home robots for the same structural reason.
- Explanation: Autonomous vehicle development has consistently tackled highway driving before urban driving because highways offer fewer unpredictable variables (no pedestrians, clearer lanes, more consistent speeds). Industrial robotics similarly advanced decades earlier than home robotics because factory environments are structured and low-variability, while home environments are chaotic and unstructured. Multiple industry and academic sources confirm both dynamics.
- Sources:
  - [Autonomous Vehicles Factsheet | Center for Sustainable Systems](https://css.umich.edu/publications/factsheets/mobility/autonomous-vehicles-factsheet)
  - [How Industrial Robots Improve Production Stability Beyond Efficiency](https://www.iroboticplus.com/blog/How-Industrial-Robots-Improve-Production-Stability-Beyond-Efficiency_b20029)
  - [What are the next key advancements in robotics? Experts explain | World Economic Forum](https://www.weforum.org/stories/2026/03/advances-in-autonomous-robotics-what-comes-next/)

### ch13-17: INEXACT

- Speaker: Brett Adcock
- Claim: Approximately half of GDP, or slightly under half, comes from human labor.
- TLDR: Labor's share of global GDP is actually slightly above half (around 52-54%), not slightly under half as claimed.
The 'forty-something percent' figure Adcock cites is a notable underestimate. - Explanation: ILO data puts the global labour share of GDP at approximately 52-53% as of recent years, while the US figure is around 54% (a record low of 53.8% in Q3 2025). Adcock's framing of 'half, maybe a little under half' and 'forty-something percent' understates this figure. The core idea that human labor accounts for roughly half of GDP is in the right ballpark, but the direction of the imprecision is wrong. - Sources: - [The Global Labour Income Share and Distribution | International Labour Organization](https://www.ilo.org/publications/global-labour-income-share-and-distribution) - [U.S workers just took home their smallest share of capital since 1947, at least | Fortune](https://fortune.com/2026/01/13/us-workers-smallest-labor-share-gdp-on-record/) - [Share of Labour Compensation in GDP at Current National Prices for United States (LABSHPUSA156NRUG) | FRED | St. Louis Fed](https://fred.stlouisfed.org/series/LABSHPUSA156NRUG) ### ch13-18: FALSE - Speaker: Brett Adcock - Claim: About 3 billion humans in the workforce contribute to roughly 40 percent of GDP. - TLDR: The workforce size figure is roughly plausible, but labor's share of global GDP is approximately 52-53%, not 40%. - Explanation: The global workforce stands at roughly 3.5 billion employed people (World Bank, ILO), so "3 billion" is a modest undercount. However, according to the ILO and Our World in Data, labor's share of global GDP is approximately 52-53%, a figure that has been declining gradually since 2004 but remains well above the "40-something percent" stated in the claim. The 40% figure is not supported by any major institutional source. 
- Sources: - [Labor share of gross domestic product (GDP) - Our World in Data](https://ourworldindata.org/grapher/labor-share-of-gdp) - [THE GLOBAL LABOUR INCOME SHARE AND DISTRIBUTION - ILO](https://www.ilo.org/media/408221/download) - [Labor force, total | Data - World Bank](https://data.worldbank.org/indicator/SL.TLF.TOTL.IN) ### ch13-19: TRUE - Speaker: Brett Adcock - Claim: The commercial workforce represents the largest market in the world for humanoid robots. - TLDR: Market research consistently shows the commercial/workforce segment dominates the humanoid robot market, with home/consumer adoption still nascent. - Explanation: Multiple analyst reports (MarketsandMarkets, Fortune Business Insights, Grand View Research) confirm that enterprise and industrial applications drive current humanoid robot adoption, while the consumer home market remains in its infancy due to high costs and technical barriers. Adcock's framing that commercial workforce is the largest addressable market for humanoid robots is consistent with industry analysis. - Sources: - [Humanoid Robot Market Size, Share & Trends, 2025 To 2030](https://www.marketsandmarkets.com/Market-Reports/humanoid-robot-market-99567653.html) - [Humanoid Robot Market Size, Share, & Growth Report [2034]](https://www.fortunebusinessinsights.com/humanoid-robots-market-110188) - [Humanoid Robot Market Size & Share | Industry Report, 2030](https://www.grandviewresearch.com/industry-analysis/humanoid-robot-market-report) ### ch17-1: TRUE - Speaker: Brett Adcock - Claim: Figure AI has decided not to pursue military applications as of the time of the interview. - TLDR: Figure AI's official Master Plan explicitly states the company will not place humanoids in military or defense applications, consistent with Adcock's statement. - Explanation: Figure AI's published Master Plan reads: 'We will not place humanoids in military or defense applications, nor any roles that require inflicting harm on humans.' 
This directly confirms Adcock's on-air statement that Figure has deliberately chosen not to pursue military work, citing the difficulty of serving both commercial and military markets under one umbrella. - Sources: - [Master Plan | Figure](https://www.figure.ai/master-plan) - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI) ### ch17-2: TRUE - Speaker: Brett Adcock - Claim: Unlike a car, a humanoid robot can walk up stairs and open doors, enabling it to follow a person anywhere in a home. - TLDR: Humanoid robots can climb stairs and open doors, unlike cars. This is a well-documented, core capability of the humanoid form factor. - Explanation: Multiple sources confirm that stair climbing and door opening are defining capabilities of humanoid robots precisely because they are designed to operate in human-built environments. Robots such as Boston Dynamics Atlas, Unitree H1/G1, and 1X NEO have publicly demonstrated these abilities. The comparison to a car, which cannot navigate stairs or open doors, is straightforwardly accurate. - Sources: - [Innovative Humanoid Robots in 2025–2026 - Reality or Hype?](https://www.winssolutions.org/humanoid-robots-2025-2026-reality-hype/) - [The humanoid robot landscape of 2025 | GlobalSpec](https://insights.globalspec.com/article/24004/the-humanoid-robot-landscape-of-2025) - [Top 12 Humanoid Robots of 2026 - Humanoid Robotics Technology](https://humanoidroboticstechnology.com/articles/top-12-humanoid-robots-of-2026/) ### ch17-3: TRUE - Speaker: Brett Adcock - Claim: Some of the most dangerous military missions involve going into close quarters and houses. - TLDR: Close quarters battle (CQB) and house clearing are universally recognized as among the most dangerous military operations. - Explanation: Military doctrine and combat history from Iraq, Afghanistan, and other conflicts consistently identify room-clearing and close quarters battle as extremely high-risk tasks. 
Sources note that 'stacking up' to enter a room is 'one of the most dangerous tasks a soldier may face,' with split-second decisions, confined spaces, and proximity to armed enemies all compounding the danger. - Sources: - [Close-quarters battle - Wikipedia](https://en.wikipedia.org/wiki/Close-quarters_battle) - [CQB in Military Operations - Trango Systems](https://www.trango-sys.com/cqb-in-military-operations/) ### ch17-4: UNVERIFIABLE - Speaker: Brett Adcock - Claim: Selling humanoid robots to major commercial customers requires CEO-level approval and a public announcement of the relationship with Figure. - TLDR: This is Brett Adcock's first-person account of his own company's sales process, which cannot be independently verified by third parties. - Explanation: Adcock is describing internal dynamics of Figure's commercial sales pipeline, including the need for CEO buy-in and public partnership announcements. While it is publicly observable that Figure's known deals (e.g., BMW) were indeed announced publicly, the specific requirement for CEO approval at the customer level is an internal process claim no external source can confirm or deny. This qualifies as a first-person anecdote about private business operations. - Sources: - [Is the CEO of the humanoid startup Figure AI exaggerating his startup's work with BMW? | Fortune](https://fortune.com/2025/04/06/figure-ai-bmw-humanoid-robot-partnership-details-reality-exaggeration/) - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI) ### ch17-5: INEXACT - Speaker: Shawn Ryan - Claim: Jack Dorsey let go of approximately 10,000 people, about half his personnel, because of AI. - TLDR: Dorsey cut Block's workforce by over 40% (roughly 4,000+ jobs), but the ~10,000 figure refers to Block's total headcount before cuts, not the number laid off. - Explanation: Block had roughly 10,000 employees before the February 2026 announcement; Dorsey reduced headcount to just under 6,000, a cut of more than 40%. 
The claim conflates the pre-layoff total workforce with the number of people let go. The 'because of AI' rationale and 'about half' proportion are broadly accurate. - Sources: - [Block lays off nearly half its staff because of AI. Its CEO said most companies will do the same | CNN Business](https://www.cnn.com/2026/02/26/business/block-layoffs-ai-jack-dorsey) - [Block CEO Jack Dorsey lays off nearly half of his staff because of AI and predicts most companies will make similar cuts in the next year | Fortune](https://fortune.com/2026/02/27/block-jack-dorsey-ceo-xyz-stock-square-4000-ai-layoffs/) - [Jack Dorsey's 4,000 Job Cuts at Block Arouse Suspicions of AI-Washing](https://www.bloomberg.com/news/articles/2026-03-01/jack-dorsey-s-4-000-job-cuts-at-block-arouse-suspicions-of-ai-washing) ### ch17-6: TRUE - Speaker: Brett Adcock - Claim: Jack Dorsey's company stock went up following the AI-driven layoffs. - TLDR: Block (Jack Dorsey's company) stock surged 20-24% after he announced AI-driven layoffs of nearly half the workforce in February 2026. - Explanation: Block announced the elimination of roughly 4,000 jobs (about 40% of its workforce) citing AI productivity gains. The market responded with immediate approval, with Block's stock (ticker XYZ) jumping as much as 24% on the day of the announcement, adding approximately $6 billion in market cap. - Sources: - [Block shares soar as much as 24% as company slashes workforce by nearly half](https://www.cnbc.com/2026/02/26/block-laying-off-about-4000-employees-nearly-half-of-its-workforce.html) - [Block Stock Surges 20% After Jack Dorsey's AI Layoff Bet — Buy, Hold, or Sell?](https://finance.yahoo.com/news/block-stock-surges-20-jack-192542114.html) - [Block lays off nearly half its staff because of AI. 
Its CEO said most companies will do the same | CNN Business](https://www.cnn.com/2026/02/26/business/block-layoffs-ai-jack-dorsey) ### ch16-1: UNVERIFIABLE - Speaker: Brett Adcock - Claim: OpenAI (led by Sam Altman) and Microsoft co-led Figure's Series B funding round. - TLDR: OpenAI and Microsoft both invested in Figure's ~$675M Series B, but no public source designates either as a co-lead. - Explanation: The official press release and news coverage confirm that the OpenAI Startup Fund and Microsoft were investors in Figure's Series B round of $675M (consistent with Brett's 'a little under $700 million'). However, no public source identifies a lead or co-lead investor for the round, making the 'co-led' designation impossible to independently verify or refute. As the founder, Adcock would have direct knowledge of the deal structure not necessarily disclosed publicly. - Sources: - [Figure Raises $675M at $2.6B Valuation and Signs Collaboration Agreement with OpenAI](https://www.prnewswire.com/news-releases/figure-raises-675m-at-2-6b-valuation-and-signs-collaboration-agreement-with-openai-302074897.html) - [Figure rides the humanoid robot hype wave to $2.6B valuation | TechCrunch](https://techcrunch.com/2024/02/29/figure-rides-the-humanoid-robot-hype-wave-to-2-6b-valuation-and-openai-collab/) ### ch16-2: INEXACT - Speaker: Brett Adcock - Claim: Figure raised just under $700 million in their Series B round. - TLDR: Figure's Series B raised $675 million, not 'just under $700 million.' The round was co-led by OpenAI and Microsoft as stated. - Explanation: Multiple sources confirm Figure AI raised exactly $675 million in its Series B round at a $2.6 billion valuation (announced February/March 2024). While $675M is technically under $700M, describing it as 'just under $700 million' overstates the amount by $25 million. The co-leadership by OpenAI and Microsoft is accurate. 
- Sources:
  - [Figure Raises $675M at $2.6B Valuation and Signs Collaboration Agreement with OpenAI](https://www.prnewswire.com/news-releases/figure-raises-675m-at-2-6b-valuation-and-signs-collaboration-agreement-with-openai-302074897.html)
  - [Figure rides the humanoid robot hype wave to $2.6B valuation | TechCrunch](https://techcrunch.com/2024/02/29/figure-rides-the-humanoid-robot-hype-wave-to-2-6b-valuation-and-openai-collab/)

### ch16-3: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: OpenAI joined Figure's board as part of the Series B investment.
- TLDR: OpenAI did invest in Figure's $675M Series B, but no public source confirms or denies that OpenAI received a board seat as part of the deal.
- Explanation: Press releases, Wikipedia, and news coverage confirm OpenAI's startup fund participated in Figure's February 2024 Series B alongside Microsoft, Nvidia, and others, and that a collaboration agreement was signed. However, no publicly available source mentions OpenAI receiving a board seat at Figure AI. The specific claim about board representation comes solely from Brett Adcock's own account in this interview and cannot be independently verified.
- Sources:
  - [Figure Raises $675M at $2.6B Valuation and Signs Collaboration Agreement with OpenAI](https://www.prnewswire.com/news-releases/figure-raises-675m-at-2-6b-valuation-and-signs-collaboration-agreement-with-openai-302074897.html)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Humanoid robot startup Figure AI valued at $2.6 billion as Bezos, OpenAI, Nvidia join funding](https://www.cnbc.com/2024/02/29/robot-startup-figure-valued-at-2point6-billion-by-bezos-amazon-nvidia.html)

### ch16-4: TRUE
- Speaker: Brett Adcock
- Claim: The goal of the OpenAI-Figure partnership was to advance AI models for humanoid robots together.
- TLDR: The OpenAI-Figure partnership was explicitly announced as a deal to 'develop next generation AI models for humanoid robots,' matching Adcock's description.
- Explanation: Multiple sources confirm the partnership's stated goal was developing next-generation AI models for humanoid robots, integrating OpenAI's language and multimodal capabilities into Figure's hardware. The collaboration involved joint work on AI models for robotics, consistent with Adcock's account. Figure later exited the deal in early 2025, choosing to build in-house models instead.
- Sources:
  - [Humanoid robot-maker Figure partners with OpenAI and gets backing from Jeff Bezos](https://sfstandard.com/2024/02/29/figure-openai-humanoid-partnership/)
  - [Figure drops OpenAI in favor of in-house models | TechCrunch](https://techcrunch.com/2025/02/04/figure-drops-openai-in-favor-of-in-house-models/)
  - [OpenAI wants to implant its ChatGPT-style tech into humanoid robots. Just look at its big deal with robotics startup Figure | Fortune](https://fortune.com/2024/03/01/humanoid-robots-with-ai-figure-funding-openai-bezos-nvidia/)

### ch16-5: TRUE
- Speaker: Brett Adcock
- Claim: Brett Adcock ended the partnership with OpenAI approximately one year after it began.
- TLDR: Figure and OpenAI signed their collaboration agreement in February 2024, and Adcock ended it on February 4, 2025, roughly one year later.
- Explanation: The Figure-OpenAI collaboration agreement was announced alongside a $675M funding round in February 2024. Brett Adcock publicly announced the end of the partnership on February 4, 2025, citing a major in-house AI breakthrough. That is almost exactly one year, consistent with his claim.
- Sources:
  - [Figure drops OpenAI in favor of in-house models | TechCrunch](https://techcrunch.com/2025/02/04/figure-drops-openai-in-favor-of-in-house-models/)
  - [Figure Raises $675M at $2.6B Valuation and Signs Collaboration Agreement with OpenAI](https://www.prnewswire.com/news-releases/figure-raises-675m-at-2-6b-valuation-and-signs-collaboration-agreement-with-openai-302074897.html)

### ch16-6: FALSE
- Speaker: Brett Adcock
- Claim: No one has previously put advanced language models into humanoid robot systems.
- TLDR: Multiple teams and companies integrated advanced language models into robot systems before or alongside Figure AI's work with OpenAI.
- Explanation: Google DeepMind's RT-2 (2023) and SayCan (2022) demonstrated LLMs and vision-language models for robot action control. Sanctuary AI's Phoenix humanoid also integrated generative AI. These are well-documented, published examples of advanced language models put into robotic systems prior to or concurrent with Figure AI's OpenAI partnership, directly contradicting Adcock's claim.
- Sources:
  - [Humanoid Robots and Humanoid AI: Review, Perspectives and Directions](https://arxiv.org/html/2405.15775v2)
  - [Large Language Models for Robotics: A survey](https://arxiv.org/html/2311.07226v2)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch16-8: TRUE
- Speaker: Brett Adcock
- Claim: Figure's AI team members came from Google DeepMind and top AI programs.
- TLDR: Figure AI's team was indeed recruited from Google DeepMind and other top AI/robotics programs.
- Explanation: Multiple sources confirm Adcock built Figure's team from alumni of Google DeepMind, Boston Dynamics, Tesla, and Apple. Reports on the OpenAI split specifically describe Figure's internal AI team as 'primarily composed of veterans from Google DeepMind,' consistent with Adcock's claim in the podcast.
- Sources:
  - [Figure drops OpenAI in favor of in-house models | TechCrunch](https://techcrunch.com/2025/02/04/figure-drops-openai-in-favor-of-in-house-models/)
  - ["I Fired Them": Brett Adcock on the OpenAI Split, Robot Self-Repair, and the Kid-Safety Test | Humanoids Daily](https://www.humanoidsdaily.com/news/i-fired-them-brett-adcock-on-the-openai-split-robot-self-repair-and-the-kid-safety-test)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch16-9: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: Figure's AI (Helix) team had over 50 people.
- TLDR: No public source confirms or denies the specific claim that Figure's Helix AI team had over 50 people.
- Explanation: Figure AI's overall headcount is reported as 100+ employees, but no breakdown by team is publicly available. The OpenAI partnership split has been covered by multiple outlets, with references to Figure's strong internal AI team composed of DeepMind veterans, but none cite a specific headcount for the Helix group. This is an internal organizational detail that cannot be independently verified.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - ["I Fired Them": Brett Adcock on the OpenAI Split, Robot Self-Repair, and the Kid-Safety Test | Humanoids Daily](https://www.humanoidsdaily.com/news/i-fired-them-brett-adcock-on-the-openai-split-robot-self-repair-and-the-kid-safety-test)
  - [Report: Figure Business Breakdown & Founding Story | Contrary Research](https://research.contrary.com/company/figure)

### ch16-10: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: OpenAI told Figure they were considering starting their own internal robotics work.
- TLDR: The private call Adcock describes cannot be verified, but OpenAI publicly did restart internal robotics work in May 2024, consistent with the claim's broader context.
- Explanation: This is a first-person account of a private phone call, which no third party can confirm or deny. What is publicly known: OpenAI revived its robotics research team in May 2024 (after closing it in 2021), began discussions about building its own humanoid robot by late 2024, and Figure ended the OpenAI partnership around early 2025. The sequence of events is consistent with Adcock's account, but the specific private communication itself remains unverifiable.
- Sources:
  - [Figure drops OpenAI in favor of in-house models | TechCrunch](https://techcrunch.com/2025/02/04/figure-drops-openai-in-favor-of-in-house-models/)
  - [OpenAI is restarting its robotics research group - The Robot Report](https://www.therobotreport.com/openai-is-restarting-its-robotics-research-group/)
  - [OpenAI 'considered' building a humanoid robot: Report | TechCrunch](https://techcrunch.com/2024/12/24/openai-considered-building-a-humanoid-robot-report/)

### ch16-11: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: Sam Altman and several OpenAI co-founders visited Figure's offices before the partnership split.
- TLDR: The Figure-OpenAI partnership and its breakdown are well-documented, but the specific claim of an in-person office visit by Sam Altman and co-founders is a private anecdote with no public corroboration.
- Explanation: Public records confirm Figure and OpenAI had a collaboration agreement (announced February 2024) that Adcock terminated in February 2025, citing that Figure was effectively teaching OpenAI robot learning while getting little value. Coverage of the split references a phone call from Altman to Adcock about OpenAI's own robotics ambitions, but no source documents an in-person visit by Altman and co-founders to Figure's offices. As a private internal event recounted firsthand, it cannot be independently verified.
- Sources:
  - ["I Fired Them": Brett Adcock on the OpenAI Split, Robot Self-Repair, and the Kid-Safety Test | Humanoids Daily](https://www.humanoidsdaily.com/news/i-fired-them-brett-adcock-on-the-openai-split-robot-self-repair-and-the-kid-safety-test)
  - [Figure drops OpenAI in favor of in-house models | TechCrunch](https://techcrunch.com/2025/02/04/figure-drops-openai-in-favor-of-in-house-models/)
  - [Brett Adcock on X](https://x.com/adcock_brett/status/1886860098980733197)

### ch16-12: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: The OpenAI partnership made it difficult for Figure to recruit AI talent.
- TLDR: Adcock has publicly cited recruiting perception problems as a reason for ending the OpenAI deal, but this is solely his own account of an internal business challenge.
- Explanation: The end of Figure's OpenAI partnership is well-documented, and Adcock has stated in other public contexts that the association created a false perception of outsourcing AI. However, the specific claim that this made recruiting AI talent harder is a first-person account of an internal experience that cannot be independently verified by third parties.
- Sources:
  - [Figure drops OpenAI in favor of in-house models | TechCrunch](https://techcrunch.com/2025/02/04/figure-drops-openai-in-favor-of-in-house-models/)
  - ["I Fired Them": Brett Adcock on the OpenAI Split, Robot Self-Repair, and the Kid-Safety Test | Humanoids Daily](https://www.humanoidsdaily.com/news/i-fired-them-brett-adcock-on-the-openai-split-robot-self-repair-and-the-kid-safety-test)

### ch16-13: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: Candidates Figure recruited perceived OpenAI as the entity building Figure's AI models, rather than Figure doing its own AI development.
- TLDR: This is a first-person anecdote about private conversations with job candidates that cannot be independently verified. However, Adcock has publicly confirmed the broader context: the OpenAI partnership created an external perception that Figure was 'outsourcing its AI,' which hurt recruiting.
- Explanation: The specific claim that candidates told Adcock 'you guys do the robot, OpenAI does the models' is a recollection of private conversations and is inherently unverifiable. That said, Adcock has publicly stated in multiple interviews that the OpenAI partnership damaged recruiting by fostering a false perception that Figure was outsourcing its AI development, which is consistent with and corroborates the substance of his anecdote. Figure ended the OpenAI partnership in February 2025, partly citing this recruiting and perception problem.
- Sources:
  - [Figure drops OpenAI in favor of in-house models | TechCrunch](https://techcrunch.com/2025/02/04/figure-drops-openai-in-favor-of-in-house-models/)
  - [Figure CEO Escalates Critique of Tele-op Demos, Details Split from OpenAI | Humanoids Daily](https://www.humanoidsdaily.com/feed/figure-ceo-escalates-critique-of-tele-op-demos-details-split-from-openai)

### ch16-14: TRUE
- Speaker: Brett Adcock
- Claim: Information was being passed from Figure to OpenAI in a way that Brett believed was harmful to Figure's long-term competitive position.
- TLDR: Brett Adcock has publicly confirmed this concern, stating Figure was effectively 'teaching' OpenAI robotics while receiving little in return, which he cited as a key reason for the split.
- Explanation: Multiple credible sources report Adcock explicitly said the partnership created an information imbalance where Figure's progress was visible to OpenAI, which was simultaneously building its own robotics program. He described this as a major factor in his decision to end the collaboration agreement in early 2025. The concern about competitive harm from shared proprietary progress is well-documented in public statements.
- Sources: - ["I Fired Them": Brett Adcock on the OpenAI Split, Robot Self-Repair, and the Kid-Safety Test | Humanoids Daily](https://www.humanoidsdaily.com/news/i-fired-them-brett-adcock-on-the-openai-split-robot-self-repair-and-the-kid-safety-test) - [Figure CEO Escalates Critique of Tele-op Demos, Details Split from OpenAI | Humanoids Daily](https://www.humanoidsdaily.com/feed/figure-ceo-escalates-critique-of-tele-op-demos-details-split-from-openai) - [Figure drops OpenAI in favor of in-house models | TechCrunch](https://techcrunch.com/2025/02/04/figure-drops-openai-in-favor-of-in-house-models/) ### ch16-15: TRUE - Speaker: Brett Adcock - Claim: OpenAI is now doing robotics work internally. - TLDR: OpenAI does have an active internal robotics team, confirmed by multiple credible sources from 2025-2026. - Explanation: OpenAI quietly reformed an internal robotics division in 2025 after having shut it down years earlier, aggressively hiring roboticists and engineers. The team focuses on general-purpose robotics tied to AGI goals, and the company announced expansion plans including a second robotics facility in late 2025. - Sources: - [OpenAI's Robotics Push Signals Its Ambitions for Artificial General Intelligence | Built In](https://builtin.com/articles/openai-robotics-research-agi) - [New OpenAI job listings reveal the company's robotics plans | TechCrunch](https://techcrunch.com/2025/01/10/new-openai-job-listings-reveal-its-robotics-plans/) - [OpenAI robotics leader resigns over concerns about Pentagon AI deal : NPR](https://www.npr.org/2026/03/08/nx-s1-5741779/openai-resigns-ai-pentagon-guardrails-military) ### ch16-16: INEXACT - Speaker: Brett Adcock - Claim: OpenAI originally started as a robotics program trying to solve AGI through robotics, with their first 3-4 years focused entirely on robots. - TLDR: OpenAI did significant robotics research from roughly 2016 to 2021, but it was never exclusively a robotics program. 
Language models and game AI were pursued in parallel from the start. - Explanation: OpenAI was founded in December 2015 as a broad AGI research organization, not as a robotics program. Robotics was one of several simultaneous research tracks alongside language models (GPT-1 debuted in 2018) and game AI (Dota). The robotics team was shut down in July 2021, giving it roughly 4-5 years of activity, which is broadly consistent with Brett's '3-4 years' figure. The characterization of OpenAI as having started as and being 'all in on' robotics overstates the case, but the general arc (years of robotics work followed by a pivot toward LLMs) is directionally accurate. - Sources: - [OpenAI disbands its robotics research team | VentureBeat](https://venturebeat.com/business/openai-disbands-its-robotics-research-team) - [Why OpenAI decided to abandon robotics research](https://www.therobotreport.com/openai-abandons-robotics-research/) - [OpenAI - Wikipedia](https://en.wikipedia.org/wiki/OpenAI) ### ch16-17: INEXACT - Speaker: Brett Adcock - Claim: OpenAI was active in robotics from around 2016-2017 for approximately 3-4 years before pivoting. - TLDR: OpenAI's robotics start date of 2016-2017 is correct, but the duration was closer to 4-5 years, not 3-4. - Explanation: OpenAI launched robotics work in 2016 (OpenAI Gym) and 2017 (Roboschool, dexterous robot hand research), confirming the start date. However, the team was not disbanded until summer 2021 (research halted October 2020), making the active period roughly 4-5 years rather than the 3-4 years stated. The core characterization is broadly accurate but slightly underestimates the duration. 
- Sources: - [OpenAI disbands its robotics research team | VentureBeat](https://venturebeat.com/business/openai-disbands-its-robotics-research-team) - [Why OpenAI decided to abandon robotics research](https://www.therobotreport.com/openai-abandons-robotics-research/) - [OpenAI relaunches robotics unit four years after shutting it down - SiliconANGLE](https://siliconangle.com/2024/05/31/openai-relaunches-robotics-unit-four-years-shutting/) ### ch16-18: INEXACT - Speaker: Brett Adcock - Claim: OpenAI pivoted from robotics to large language models around 2019-2021. - TLDR: OpenAI shut down its robotics team in 2020-2021, pivoting to LLMs. Adcock's "2019-2021" window is slightly early but broadly correct. - Explanation: OpenAI co-founder Wojciech Zaremba confirmed the robotics team was disbanded in mid-2021, citing a lack of training data, with the actual wind-down beginning around late 2020. The pivot toward large language models (GPT-3 and beyond) followed. Adcock's claim that robotics ran from roughly 2016-2018 and the pivot occurred around 2019-2021 is mostly right, but the pivot did not begin as early as 2019 -- it happened specifically in 2020-2021. - Sources: - [OpenAI disbands its robotics research team | VentureBeat](https://venturebeat.com/business/openai-disbands-its-robotics-research-team) - [OpenAI shuts down robotics team because it doesn't have enough data yet • The Register](https://www.theregister.com/2021/07/18/in_brief_ai/) - [Why OpenAI decided to abandon robotics research](https://www.therobotreport.com/openai-abandons-robotics-research/) ### ch16-20: TRUE - Speaker: Brett Adcock - Claim: Figure was approaching its 4th anniversary at the time of the recording, with the company founded approximately 3.5 years prior and the anniversary falling around late May. - TLDR: Figure AI was founded in May 2022, making Brett Adcock's timeline accurate. At the March 2026 recording, the company was ~3.5 years old with its 4th anniversary in late May. 
- Explanation: Figure AI's founding is publicly dated to May 2022, with Brett Adcock's own Master Plan document dated May 20, 2022. The podcast was recorded on March 30, 2026, placing the company at roughly 3 years and 10 months old at that time. Both the '3.5 years ago' reference and the 'end of May' anniversary align with the verified founding date. - Sources: - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI) - [Master Plan | Figure](https://www.figure.ai/master-plan) - [Report: Figure Business Breakdown & Founding Story | Contrary Research](https://research.contrary.com/company/figure) ### ch16-21: FALSE - Speaker: Brett Adcock - Claim: Figure's humanoid robot is performing 24/7 commercial work in homes. - TLDR: Figure robots run 24/7 autonomously in Figure's own Sunnyvale facility, not in residential homes. Adcock himself says home deployment is not yet ready. - Explanation: Multiple sources confirm Figure robots operate 24/7 at the company's Sunnyvale facility (lights-out shifts, wireless charging), but this is an industrial/commercial test environment, not residential homes. Adcock stated he still 'babysits' robots around his children and 'we're still not at that stage yet where I feel comfortable enough to let loose.' The Brookfield residential partnership is a future deployment plan, not a current rollout. 
- Sources: - ["I Fired Them": Brett Adcock on the OpenAI Split, Robot Self-Repair, and the Kid-Safety Test | Humanoids Daily](https://www.humanoidsdaily.com/news/i-fired-them-brett-adcock-on-the-openai-split-robot-self-repair-and-the-kid-safety-test) - [The End of C++: Brett Adcock on Helix 02 and Figure's Path to "Room-Scale" Autonomy | Humanoids Daily](https://www.humanoidsdaily.com/news/the-end-of-c-brett-adcock-on-helix-02-and-figure-s-path-to-room-scale-autonomy) - [Figure Announces Strategic Partnership with Brookfield](https://www.figure.ai/news/figure-announces-strategic-partnership-with-brookfield) ### ch16-22: TRUE - Speaker: Brett Adcock - Claim: Figure's robot is driven by a neural network. - TLDR: Figure's robots are indeed neural network-driven. Their Helix system is a Vision-Language-Action (VLA) neural network, and Helix 02 replaced over 100,000 lines of hand-written code with a single neural network. - Explanation: Figure AI's Helix model is a neural network that controls perception and full-body motion, confirmed across multiple sources. Helix 02, released in January 2026, uses a single neural network to control walking, manipulation, and balance directly from raw sensor data, running entirely onboard embedded GPUs for commercial deployment. - Sources: - [Helix: A Vision-Language-Action Model for Generalist Humanoid Control](https://www.figure.ai/news/helix) - [Introducing Helix 02: Full-Body Autonomy](https://www.figure.ai/news/helix-02) - [Figure robot gets AI brain that enables human-like full-body control](https://interestingengineering.com/ai-robotics/figure-helix02-upgrades-humanoid-robot-control) ### ch16-23: UNVERIFIABLE - Speaker: Brett Adcock - Claim: Figure can produce one robot every 90 minutes when their production lines are operating. - TLDR: Adcock's claim comes solely from his own statement in this interview, with no independent third-party verification of the production rate. 
- Explanation: The only sources reporting the "one robot every 90 minutes" figure are Brett Adcock himself in this podcast and a Humanoids Daily recap of the same interview. Figure AI's official BotQ announcement cites a capacity of "up to 12,000 humanoids per year," which at 24/7 operation would yield one robot every ~44 minutes, not 90. A 90-minute cycle would produce roughly 5,800 per year, plausible as a current ramp-up rate, but the actual production throughput is an internal operational metric that has not been independently audited or confirmed.
- Sources:
  - [BotQ: A High-Volume Manufacturing Facility for Humanoid Robots](https://www.figure.ai/news/botq)
  - ["I Fired Them": Brett Adcock on the OpenAI Split, Robot Self-Repair, and the Kid-Safety Test | Humanoids Daily](https://www.humanoidsdaily.com/news/i-fired-them-brett-adcock-on-the-openai-split-robot-self-repair-and-the-kid-safety-test)

### ch19-1: INEXACT

- Speaker: Brett Adcock
- Claim: School shooting events in the US increased from 30-40 events per year to 300 events per year.
- TLDR: The ~300 figure and the dramatic rise are real, but depend entirely on which definition of 'school shooting' is used, and the increase spans decades rather than just 10 years.
- Explanation: The K-12 School Shooting Database, using the broadest definition (any gun brandished, fired, or bullet hitting school property for any reason), recorded 300+ incidents in both 2022 and 2023. However, narrower definitions yield far lower figures (e.g., 18-83 per year). The increase is also a multi-decade trend, not strictly a 10-year one, and the 30-40 starting figure is not precisely documented for a specific prior 10-year baseline.
- Sources:
  - [K-12 School Shooting Database - online](https://k12ssdb.org/all-shootings)
  - [School Shooting Statistics and Youth Gun Violence](https://www.omnilert.com/blog/school-shooting-statistics)
  - [Study Quantifies Dramatic Rise in School Shootings and Related Fatalities Since 1970 | ACS](https://www.facs.org/media-center/press-releases/2024/study-quantifies-dramatic-rise-in-school-shootings-and-related-fatalities-since-1970/)
  - [Number of school shootings by active shooter status in the US - Statista](https://www.statista.com/statistics/971473/number-k-12-school-shootings-us/)

### ch19-2: INEXACT

- Speaker: Brett Adcock
- Claim: The increase in school shooting events happened over a span of 10 years and was mostly a US phenomenon, not seen much internationally.
- TLDR: The US-centric nature of school shootings is strongly supported (57x more than other G7 nations combined 2009-2018). The "10-year" rise is a simplification of a much longer trend.
- Explanation: Data confirms the US is a massive outlier: 288 school shootings from 2009-2018, 57 times more than all other G7 countries combined, with Mexico second at only 8. The dramatic surge to 300+ annual incidents is real but reflects a longer trajectory dating back decades (from 20 incidents in 1970 to 251 in 2021), not strictly a 10-year phenomenon. The claim's international comparison holds firmly, but the 10-year framing oversimplifies a multi-decade trend.
- Sources:
  - [The US has had 57 times as many school shootings as the other major industrialized nations combined | CNN](https://www.cnn.com/2018/05/21/us/school-shooting-us-versus-world-trnd)
  - [Study Quantifies Dramatic Rise in School Shootings and Related Fatalities Since 1970 | ACS](https://www.facs.org/media-center/press-releases/2024/study-quantifies-dramatic-rise-in-school-shootings-and-related-fatalities-since-1970/)
  - [School Shootings by Country 2026](https://worldpopulationreview.com/country-rankings/school-shootings-by-country)
  - [School shooting - Wikipedia](https://en.wikipedia.org/wiki/School_shooting)

### ch19-3: INEXACT

- Speaker: Brett Adcock
- Claim: Terahertz technology, also called millimeter wave technology, operates at radio frequencies in the 200 to 400 gigahertz range.
- TLDR: Terahertz and millimeter wave are adjacent but distinct bands, and the 200-400 GHz range straddles their boundary. Calling them interchangeable oversimplifies the distinction.
- Explanation: Millimeter wave is conventionally defined as 30-300 GHz, while terahertz begins at roughly 100-300 GHz and extends to 10 THz (10,000 GHz). The 200-400 GHz range Adcock cites does cover the upper edge of millimeter wave and the lower edge of terahertz, so the frequencies are roughly correct for security-scanning applications. However, describing terahertz as simply 'also called' millimeter wave conflates two distinct (though overlapping) frequency bands.
- Sources:
  - [Terahertz radiation - Wikipedia](https://en.wikipedia.org/wiki/Terahertz_radiation)
  - [Extremely high frequency - Wikipedia](https://en.wikipedia.org/wiki/Extremely_high_frequency)
  - [Terahertz and Millimeter Wave Sensing and Applications - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC9786721/)

### ch19-4: TRUE

- Speaker: Brett Adcock
- Claim: Airport body scanning systems operate at a distance of a couple feet and can detect concealed items such as knives, guns, and vape pens.
- TLDR: Airport millimeter wave scanners operate at roughly 2 feet from the subject and are designed to detect metallic and nonmetallic threats including knives, guns, and items like vape pens.
- Explanation: The scanner mast in a standard millimeter wave portal sits approximately 63 cm (just over 2 feet) from the center, consistent with 'a couple feet.' TSA explicitly states these systems detect both metallic and nonmetallic concealed items including firearms and knives. Vape pens, which contain metal components, are also flagged as anomalies by such scanners.
- Sources:
  - [Millimeter wave scanner - Wikipedia](https://en.wikipedia.org/wiki/Millimeter_wave_scanner)
  - [Security Screening | Transportation Security Administration](https://www.tsa.gov/travel/security-screening)
  - [3-D Body Holographic (millimeter wave) Scanner | PNNL](https://www.pnnl.gov/available-technologies/3-d-body-holographic-millimeter-wave-scanner)

### ch19-5: FALSE

- Speaker: Brett Adcock
- Claim: The majority of school shootings are unplanned events where students habitually bring guns into school, get into a fight, and shoot.
- TLDR: Research consistently shows school shootings are predominantly premeditated, not unplanned. Multiple authoritative sources directly contradict this claim.
- Explanation: The U.S. Secret Service, FBI, and multiple academic studies unanimously find that targeted school shootings are rarely impulsive acts. In ~80-94% of cases, attackers planned in advance and communicated their intentions to others beforehand. While dispute-related gun incidents do exist (roughly 13-31% of all school gun incidents depending on the study), they do not constitute the majority of school shootings, and the overall research consensus directly contradicts the claim that 'the majority are unplanned.'
- Sources:
  - ['Not sudden, impulsive acts': School shooters showed warning signs, Secret Service finds](https://www.nbcnews.com/politics/politics-news/these-are-not-sudden-impulsive-acts-school-shooters-showed-warning-n1078351)
  - [The School Shooter: A THREAT ASSESSMENT PERSPECTIVE](https://www.fbi.gov/file-repository/stats-services-publications-school-shooter-school-shooter)
  - [The American School Shooting Study (TASSS)](https://www.dcjs.virginia.gov/sites/dcjs.virginia.gov/files/the_american_school_shooting_study_tasss.pdf)
  - [K-12 Education: Characteristics of School Shootings](https://www.gao.gov/products/gao-20-455)

### ch19-6: FALSE

- Speaker: Brett Adcock
- Claim: Planned school shooting events, such as those featuring a shooter with an automatic weapon, occur approximately 1 to 2 times per year.
- TLDR: High-profile planned school attacks average more than 1–2 per year by any credible measure, and the 'automatic weapon' characterization is factually wrong (shooters use semi-automatic weapons, not machine guns).
- Explanation: NCES data records approximately 50 active shooter incidents at K-12 schools from 2000–2022, averaging about 2.3 per year, and multiple high-profile planned attacks occurred in the same year in 2018, 2022, and 2023. The 'machine gun or automatic weapon' description is also inaccurate: handguns are used in roughly 84% of school shootings and rifles (semi-automatic, not automatic) in about 7%, while true fully automatic weapons are virtually never used. Both the frequency estimate and the weapon characterization are contradicted by evidence.
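The per-year average in the explanation above follows from simple division. A minimal check (the 22-school-year divisor is an assumption about how the 2000–2022 NCES window is counted, not a figure from the sources):

```python
# Frequency check for ch19-6: NCES reports roughly 50 active shooter
# incidents at K-12 schools across its 2000-2022 reporting window.
incidents = 50
school_years = 22  # assumed count of school years in the 2000-2022 window

average_per_year = incidents / school_years
print(round(average_per_year, 1))  # ~2.3, above the claimed 1-2 per year
```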
- Sources:
  - [COE - Violent Deaths at School and Away From School, and Active Shooter Incidents](https://nces.ed.gov/programs/coe/indicator/a01/violent-deaths-and-shootings)
  - [K-12 School Shooting Database](https://k12ssdb.org/)
  - [School Shooting Statistics and Youth Gun Violence](https://www.omnilert.com/blog/school-shooting-statistics)
  - [Gunfire on School Grounds in the United States](https://everytownresearch.org/maps/gunfire-on-school-grounds/)

### ch19-7: UNSUBSTANTIATED

- Speaker: Brett Adcock
- Claim: Approximately 90-plus percent of all school gun events are unplanned.
- TLDR: No established source confirms a 90%+ figure for unplanned school gun events. The closest proxy (GAO 2020) puts non-targeted incidents at roughly 86%, and Secret Service research finds most targeted attacks are actually premeditated.
- Explanation: A 2020 GAO study found that only about 14% of gun violence on K-12 school grounds is 'school-targeted,' suggesting roughly 86% are disputes, accidents, or incidental, which partially supports the directional claim but falls short of 90%+ and does not map cleanly onto 'unplanned.' Conversely, U.S. Secret Service research found that targeted school attacks are 'rarely sudden, impulsive acts' and that 85% of shooters engaged in advance planning. No recognized study or database uses the specific 90-plus percent unplanned framing Adcock cites.
- Sources:
  - [The long, shameful list of gunfire on school grounds in America.](https://everytownresearch.org/maps/gunfire-on-school-grounds/)
  - [School Shooting Statistics and Youth Gun Violence](https://www.omnilert.com/blog/school-shooting-statistics)
  - [COE - Violent Deaths at School and Away From School, and Active Shooter Incidents](https://nces.ed.gov/programs/coe/indicator/a01/violent-deaths-and-shootings)

### ch19-8: INEXACT

- Speaker: Brett Adcock
- Claim: Terahertz imaging can be performed passively at standoff distances of 10 to 30 meters at a high frame rate.
- TLDR: Passive THz standoff imaging at 10-30m is documented in research, but 'high frame rate' is an overstatement for purely passive systems, which typically achieve 1-10 Hz.
- Explanation: Multiple peer-reviewed studies confirm passive terahertz imaging at standoff distances in this range: NIST/VTT demonstrated passive real-time THz at 8m (5 fps), the Safe VISITOR project achieved up to 10 Hz passively, and vehicle-mounted systems have been designed for 30m detection. However, purely passive THz systems trade off frame rate against range and sensitivity, with most passive standoff systems operating at 1-10 Hz. Whether that qualifies as 'high frame rate' is debatable, and achieving all three properties (passive, 10-30m, high frame rate) simultaneously remains a significant engineering challenge.
- Sources:
  - [Passive terahertz camera for standoff security screening - PubMed](https://pubmed.ncbi.nlm.nih.gov/20648113/)
  - [Passive stand-off Terahertz imaging with 1 Hertz frame rate](https://www.spiedigitallibrary.org/conference-proceedings-of-spie/6949/1/Passive-stand-off-terahertz-imaging-with-1-hertz-frame-rate/10.1117/12.777952.short)
  - [Real-time terahertz imaging over a standoff distance (>25 meters) | Request PDF](https://www.researchgate.net/publication/252863361_Real-time_terahertz_imaging_over_a_standoff_distance_25_meters)
  - [Progress report on Safe VISITOR: approaching a practical instrument for terahertz security screening](https://www.spiedigitallibrary.org/conference-proceedings-of-spie/7670/1/Progress-report-on-Safe-VISITOR--approaching-a-practical-instrument/10.1117/12.852558.short)

### ch19-9: INEXACT

- Speaker: Brett Adcock
- Claim: Terahertz scanning produces a 3D point cloud image using radio frequency, similar in appearance to an optical camera image.
- TLDR: Terahertz imaging does produce 3D point cloud outputs resembling optical images, but terahertz is not strictly radio frequency: it sits at the boundary between RF/microwave and infrared.
- Explanation: Multiple peer-reviewed sources confirm that terahertz systems can generate 3D point cloud images (via SAR-ICP and FMCW radar techniques) at standoff distances, and the output is visually comparable to a 3D camera image. However, terahertz radiation (300 GHz to 3 THz) occupies a band between microwave/RF and infrared, so labeling it purely 'radio frequency' is a simplification. In practice, THz technology uses principles from both RF and photonics, making the characterization technically imprecise but not fundamentally wrong.
- Sources:
  - [Terahertz 3D point cloud imaging for complex targets](https://opg.optica.org/ao/abstract.cfm?uri=ao-62-22-5976)
  - [3D terahertz incoherent point-cloud imaging for complex objects - ScienceDirect](https://www.sciencedirect.com/science/article/abs/pii/S0030401821005204)
  - [High-throughput terahertz imaging: progress and challenges | Light: Science & Applications](https://www.nature.com/articles/s41377-023-01278-0)
  - [2 Basic Operation of Systems and Phenomenology | Assessment of Millimeter-Wave and Terahertz Technology for Detection and Identification of Concealed Explosives and Weapons | The National Academies Press](https://nap.nationalacademies.org/read/11826/chapter/4)

### ch19-10: UNSUBSTANTIATED

- Speaker: Brett Adcock
- Claim: Most guns brought into schools are carried in a pocket, waistband, or backpack.
- TLDR: Backpacks and waistbands are consistently cited as common concealment methods, but no study provides a precise breakdown confirming these three locations account for 'most' cases.
- Explanation: Multiple sources (Brady United, Giffords, PMC research) identify backpacks and waistbands as among the most frequently documented ways guns are brought into schools, and handguns dominate due to their concealability. However, no specific study quantifies what percentage of all school firearms are concealed in pockets, waistbands, or backpacks specifically, so the 'most' qualifier cannot be directly verified. The claim is plausible but lacks a precise statistical source to confirm it.
- Sources:
  - [Student Firearm Carrying in Schools | Brady United](https://www.bradyunited.org/resources/research/analysis-student-firearm-carrying-schools)
  - [Characteristics and Obtainment Methods of Firearms Used in Adolescent School Shootings - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC10682938/)
  - [Guns are seized in U.S. schools each day. The numbers are soaring.](https://www.washingtonpost.com/education/2023/10/10/guns-schools-us-increased-prevention-violence/)

### ch19-11: INEXACT

- Speaker: Brett Adcock
- Claim: Researchers at NASA's Jet Propulsion Laboratory developed the terahertz standoff detection technology for use in the Iraq and Afghanistan wars.
- TLDR: JPL did develop terahertz standoff detection technology with U.S. military/defense funding for detecting person-borne IEDs, consistent with Iraq and Afghanistan-era needs, but public sources do not explicitly name those wars as the stated purpose.
- Explanation: Public records confirm JPL's Microdevices Laboratory developed terahertz imaging radar for standoff personnel screening, funded by DoD programs targeting person-borne improvised explosive devices (PBIEDs), a threat central to the Iraq and Afghanistan wars. The DoD's own site lists JPL as a key partner in the Explosive Detection Equipment (EDE) Program for PBIED detection. However, no publicly available source explicitly states the technology was developed specifically 'for the Iraq and Afghanistan wars' as framed in Adcock's account, which is based on a private conversation with JPL researchers. The core claim is broadly consistent with the documentary record.
- Sources:
  - [THz Imaging Radar Allows for a Remote "Pat Down" in Hostile Environments | NASA JPL Microdevices Laboratory](https://microdevices.jpl.nasa.gov/capabilities/submillimeter-devices/radar-concealed-weapons.php)
  - [Long Range Terahertz (THz) Imaging Radar | DoD](https://www.acq.osd.mil/ncbdp/nm/pseag/capabilityareas/P/LRTHzIR.html)
  - [A new startup from Figure's founder is licensing NASA tech in a bid to curb school shootings | TechCrunch](https://techcrunch.com/2024/06/21/a-new-startup-from-figures-founder-is-licensing-nasa-tech-in-a-bid-to-curb-school-shootings/)

### ch19-12: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: The JPL terahertz detection technology was funded by the US government, developed over 10 years, and its funding dropped to zero when the wars ended.
- TLDR: JPL's terahertz weapon detection technology and its US government funding are confirmed, but the specific Iraq/Afghanistan war link, exact 10-year timeline, and funding dropping to zero when the wars ended cannot be verified from public records.
- Explanation: Public JPL documentation confirms the Microdevices Laboratory developed terahertz standoff weapon detection technology funded by the US government (specifically Navy and DoD), including for detecting person-borne IEDs, which is consistent with Iraq/Afghanistan-era defense priorities. However, the specific claims that it was explicitly funded for those wars, developed over exactly 10 years, and that funding dropped to zero when the wars ended come from a private conversation Adcock had with JPL researchers and are not documented in any publicly accessible source found.
- Sources:
  - [THz Imaging Radar Allows for a Remote "Pat Down" in Hostile Environments | Microdevices Laboratory | NASA JPL](https://microdevices.jpl.nasa.gov/capabilities/submillimeter-devices/radar-concealed-weapons.php)
  - [Active Submillimeter-Wave Imaging | Microdevices Laboratory | NASA JPL](https://microdevices.jpl.nasa.gov/capabilities/submillimeter-devices/active-submil-wave-imaging.php)
  - [Long Range Terahertz (THz) Imaging Radar | DoD NCBDP](https://www.acq.osd.mil/ncbdp/nm/pseag/capabilityareas/P/LRTHzIR.html)

### ch19-13: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Brett visited JPL and saw a working terahertz weapon detection prototype in 2017 or 2018.
- TLDR: Brett's personal visit to JPL is a private anecdote that cannot be verified, but JPL's terahertz weapon detection prototypes are well-documented and existed well before 2017-2018.
- Explanation: JPL has a published research program on terahertz imaging radar for concealed weapons detection, with prototypes operating at 0.60-0.67 THz capable of detecting concealed objects at 4-25 meter standoffs. By 2014, JPL had demonstrated portable real-time systems, so a working prototype being present in 2017-2018 is entirely plausible. However, Brett's specific private visit and what he personally witnessed cannot be independently confirmed.
- Sources:
  - ["Pat" Down without Putting Down - Microdevices Laboratory](https://microdevices.jpl.nasa.gov/capabilities/submillimeter-devices/pat-down-without-putting-down/)
  - [Terahertz Tools Advance Imaging for Security, Industry | NASA Spinoff](https://spinoff.nasa.gov/Spinoff2010/ps_8.html)
  - [THz Imaging Radar for Standoff Personnel Screening | IEEE Journals & Magazine | IEEE Xplore](https://ieeexplore.ieee.org/document/6005328/)

### ch19-14: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: The JPL terahertz machine Brett saw had been built approximately 10 years before his visit.
- TLDR: JPL's terahertz weapon detection program is real and well-documented, but the age of the specific machine Brett Adcock was shown during a private visit cannot be confirmed.
- Explanation: Public records confirm JPL has developed terahertz imaging radar for concealed weapon detection, with research papers dating back to at least the early 2010s. However, the claim about the machine being built approximately 10 years before his personal visit is an internal detail from a private tour that no public source documents or contradicts.
- Sources:
  - [Fast, high-resolution terahertz radar imaging at 25 meters - JPL Open Repository](https://trs.jpl.nasa.gov/handle/2014/44800)
  - [THz Imaging Radar for Standoff Personnel Screening | IEEE Xplore](https://ieeexplore.ieee.org/document/6005328/)

### ch19-15: INEXACT

- Speaker: Brett Adcock
- Claim: Brett spun the terahertz technology out of Jet Propulsion Lab at Caltech and founded Cover approximately 2 years ago.
- TLDR: Cover was founded in October 2023 using technology licensed from NASA's JPL (managed by Caltech), not formally "spun out." The ~2 years timeline is approximately correct.
- Explanation: Multiple sources confirm Brett Adcock founded Cover in October 2023, placing it roughly 2.5 years before the podcast aired, consistent with his "2 years ago" claim. The core technology is exclusively licensed from NASA's Jet Propulsion Laboratory, which is managed by Caltech. The imprecision lies in the phrase "spun out": the arrangement is a technology license, not a formal spinout, and JPL is a NASA facility managed by Caltech rather than a Caltech lab per se.
- Sources:
  - [A new startup from Figure's founder is licensing NASA tech in a bid to curb school shootings | TechCrunch](https://techcrunch.com/2024/06/21/a-new-startup-from-figures-founder-is-licensing-nasa-tech-in-a-bid-to-curb-school-shootings/)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)

### ch19-16: TRUE

- Speaker: Brett Adcock
- Claim: Cover's main office is in Pasadena, right next to JPL.
- TLDR: Cover is headquartered in Pasadena, CA, near NASA's Jet Propulsion Laboratory, which is also located in Pasadena.
- Explanation: Multiple sources, including TechCrunch and Analytics India Magazine, confirm that Cover set up its office in Pasadena because of its proximity to JPL, and that several JPL employees joined the company. JPL is indeed situated in Pasadena, consistent with Adcock's description.
- Sources:
  - [A new startup from Figure's founder is licensing NASA tech in a bid to curb school shootings | TechCrunch](https://techcrunch.com/2024/06/21/a-new-startup-from-figures-founder-is-licensing-nasa-tech-in-a-bid-to-curb-school-shootings/)
  - [Figure Founder Pumps $10 Mn into AI Hardware Project to Prevent School Shooting | Analytics India Magazine](https://analyticsindiamag.com/ai-news-updates/figure-founder-pumps-10-mn-into-ai-hardware-project-to-prevent-school-shooting/)

### ch19-17: TRUE

- Speaker: Brett Adcock
- Claim: Brett has been self-funding Cover since founding it.
- TLDR: Brett Adcock has publicly confirmed he is personally self-funding Cover with $10 million, with no outside investors.
- Explanation: Adcock posted on X and LinkedIn stating he is funding Cover with $10M of his own money. No external venture funding has been reported for the company, consistent with his claim in the podcast.
- Sources:
  - [Brett Adcock on X](https://x.com/adcock_brett/status/1935738616564912142)
  - [Last year, I started an AI security company called Cover](https://www.linkedin.com/posts/brettadcock_last-year-i-started-an-ai-security-company-activity-7271234223829594113-8QjW)

### ch19-18: INEXACT

- Speaker: Brett Adcock
- Claim: There are 130,000 K-12 schools in the US.
- TLDR: The actual figure is roughly 128,000 to 133,000 K-12 schools in the US. Adcock's figure of 130,000 is a reasonable approximation that falls squarely within this range.
- Explanation: Multiple sources, including NCES data, put total US K-12 schools (public and private) at approximately 128,961 to 133,250. Saying '130,000' is a rounded but accurate ballpark. The core claim is essentially correct, though not a precise figure.
- Sources:
  - [Fast Facts: Educational institutions (84)](https://nces.ed.gov/fastfacts/display.asp?id=84)
  - [How Many Schools Are in the U.S.? | MDR Education](https://mdreducation.com/how-many-schools-are-in-the-u-s/)
  - [How Many Schools are in The U.S (Statistics & Facts) - 2026](https://admissionsly.com/how-many-schools-are-there/)

### ch19-19: FALSE

- Speaker: Brett Adcock
- Claim: There are approximately 60 to 80 million K-12 students in the US.
- TLDR: Total K-12 enrollment in the US is approximately 54-55 million, not 60-80 million as claimed.
- Explanation: According to NCES and IBISWorld data, public K-12 enrollment was about 49.6 million in fall 2022, and combined public and private enrollment totals roughly 54-55 million. Adcock's lower bound of 60 million already exceeds the actual figure by nearly 10%, and his upper bound of 80 million is roughly 45% too high.
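The overshoot percentages above can be reproduced in a few lines. A minimal sketch, taking 55 million as the actual combined enrollment (an assumption: the upper end of the 54-55 million range cited above):

```python
# Overshoot check for ch19-19: how far the claimed 60-80M range
# exceeds actual combined US K-12 enrollment.
ACTUAL = 55e6  # assumed actual enrollment, upper end of the 54-55M range

def overshoot_pct(claimed: float) -> float:
    """Percent by which a claimed enrollment exceeds the actual figure."""
    return (claimed - ACTUAL) / ACTUAL * 100

print(round(overshoot_pct(60e6), 1))  # lower bound: ~9.1% too high
print(round(overshoot_pct(80e6), 1))  # upper bound: ~45.5% too high
```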
- Sources:
  - [COE - Public School Enrollment](https://nces.ed.gov/programs/coe/indicator/cga/public-school-enrollment)
  - [Fast Facts: Back-to-school statistics (372)](https://nces.ed.gov/fastfacts/display.asp?id=372)
  - [Number of K-12 Students - United States](https://www.ibisworld.com/us/bed/number-of-k-12-students/4251/)

### ch19-20: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: The original terahertz system Brett saw at JPL had certain components that cost $50,000 to $60,000 each.
- TLDR: Adcock's claim about $50,000-$60,000 components in the original JPL terahertz system is a private, internal technical detail with no publicly available corroboration.
- Explanation: Web searches confirm Cover AI's use of JPL-licensed terahertz technology and Adcock's goal of miniaturizing it onto a chip, but no public source documents the specific component costs of the original system he evaluated. This is a first-person account about proprietary engineering details that cannot be independently verified by third parties.
- Sources:
  - [A new startup from Figure's founder is licensing NASA tech in a bid to curb school shootings | TechCrunch](https://techcrunch.com/2024/06/21/a-new-startup-from-figures-founder-is-licensing-nasa-tech-in-a-bid-to-curb-school-shootings/)
  - [Figure Founder Pumps $10 Mn into AI Hardware Project to Prevent School Shooting | Analytics India Magazine](https://analyticsindiamag.com/ai-news-updates/figure-founder-pumps-10-mn-into-ai-hardware-project-to-prevent-school-shooting/)

### ch19-21: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Cover moved the expensive $50,000 to $60,000 terahertz components onto custom chips that cost approximately $7 each.
- TLDR: This is an internal company claim about proprietary chip costs that has no publicly available verification.
- Explanation: No public sources confirm or deny the specific $7 chip cost figure or the $50,000-$60,000 original component cost for Cover's terahertz system. The claim describes an internal R&D effort at a private startup, making it inherently unverifiable by third parties. General context confirms terahertz components have historically been very expensive and that chip miniaturization can drastically reduce costs, but no source corroborates Cover's specific numbers.
- Sources:
  - [A new startup from Figure's founder is licensing NASA tech in a bid to curb school shootings | TechCrunch](https://techcrunch.com/2024/06/21/a-new-startup-from-figures-founder-is-licensing-nasa-tech-in-a-bid-to-curb-school-shootings/)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)

### ch19-22: TRUE

- Speaker: Brett Adcock
- Claim: Only a few groups in the world are capable of designing and fabricating these terahertz chips.
- TLDR: Terahertz chip fabrication is a highly specialized capability limited to a small number of academic, government, and industrial groups worldwide.
- Explanation: Research confirms that THz chip design and fabrication is notoriously difficult due to silicon material constraints, non-standard CMOS design rules, and power/temperature challenges. Only a handful of groups have demonstrated this capability, including MIT (with Intel), NTU Singapore, Hebrew University, and national labs like NASA JPL (whose technology Cover licensed). Commercial vendors exist but are rare. The characterization of THz chip fabrication as something only a few groups can do is consistent with the scientific literature.
- Sources:
  - [Chip-based system for terahertz waves could enable more efficient, sensitive electronics | MIT News](https://news.mit.edu/2025/chip-based-system-could-enable-more-efficient-sensitive-electronics-0220)
  - [A new startup from Figure's founder is licensing NASA tech in a bid to curb school shootings | TechCrunch](https://techcrunch.com/2024/06/21/a-new-startup-from-figures-founder-is-licensing-nasa-tech-in-a-bid-to-curb-school-shootings/)
  - [Taming terahertz: MIT's new chip design breaks through silicon barriers](https://interestingengineering.com/science/mit-chip-unleashes-terahertz-waves)

### ch19-23: TRUE

- Speaker: Brett Adcock
- Claim: Schools are being subsidized at the federal and municipal level and are currently installing security measures including CCTVs and ballistic chalkboards.
- TLDR: Both parts of the claim check out. Federal and municipal grants fund school security upgrades, and ballistic whiteboards/chalkboards are real products actively marketed and installed in schools.
- Explanation: Multiple federal programs (STOP School Violence Act, SVPP, NSGP, HSGP) provide tens of millions of dollars annually to schools for security upgrades including CCTV systems, covering up to 75-100% of costs. Ballistic whiteboards and chalkboards are commercially available products (e.g., RTS Tactical Ballistic Whiteboard, Defenshield) specifically designed for classroom installation, confirmed by multiple vendor sites and security industry sources.
- Sources:
  - [K-12 School Safety Grants and Federal Funding](https://raptortech.com/school-safety-grants-and-federal-funding/)
  - [School Safety Grants | SchoolSafety.gov](https://www.schoolsafety.gov/grants-finder-tool)
  - [RTS Tactical Ballistic Armor Whiteboard Panel Level III+ | RTS Tactical](https://www.rtstactical.com/products/rts-ballistic-armor-whiteboard-panel)
  - [Ballistic Defense Equipment for Schools & Universities](https://defenshield.com/solutions/schools-universities/)

### ch19-24: TRUE

- Speaker: Brett Adcock
- Claim: Cover aimed to decrease the bill of materials cost by approximately 90% through the chip development work done over the past year.
- TLDR: Confirmed. Brett Adcock publicly stated Cover cut its projected system (bill of materials) cost by over 90% through its second-generation hardware development.
- Explanation: Multiple public sources, including Adcock's own posts on X and LinkedIn, confirm that Cover achieved a greater than 90% reduction in projected system cost as part of its second-generation hardware work. This cost reduction was explicitly cited as critical to scaling deployment across 130,000 U.S. K-12 schools. The claim accurately reflects Adcock's public statements about the company's cost reduction efforts.
- Sources:
  - [Brett Adcock on X (Cover update)](https://x.com/adcock_brett/status/1935738616564912142)
  - [A new startup from Figure's founder is licensing NASA tech in a bid to curb school shootings | TechCrunch](https://techcrunch.com/2024/06/21/a-new-startup-from-figures-founder-is-licensing-nasa-tech-in-a-bid-to-curb-school-shootings/)

### ch23-1: TRUE

- Speaker: Brett Adcock
- Claim: Brett Adcock is 39 years old.
- TLDR: Brett Adcock was born on April 6, 1986, making him 39 years old at the time of this podcast (March 30, 2026).
- Explanation: Public records confirm Adcock's birth date as April 6, 1986. Since the podcast was published on March 30, 2026, before his 40th birthday, he is indeed 39 years old at the time of the statement.
- Sources:
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [Bio | Brett Adcock Official](https://www.brettadcock.com/bio)

### ch23-6: INEXACT

- Speaker: Brett Adcock
- Claim: In venture, 95% of people fail.
- TLDR: VC-backed startup failure rates are widely cited at 75-90%, not 95%. The core idea is correct but the figure is overstated.
- Explanation: Harvard Business School research found roughly 75% of venture-backed companies never return cash to investors. Broader startup failure rate estimates range from 75% to 90% depending on definition and industry. The 95% figure cited by Adcock is higher than the commonly accepted data, though the underlying point about high failure rates in venture is well supported.
- Sources:
  - [The Venture Capital Secret: 3 Out of 4 Start-Ups Fail](https://www.hbs.edu/news/Pages/item.aspx?num=487)
  - [Startup Failure Rate: How Many Startups Fail and Why in 2026?](https://www.failory.com/blog/startup-failure-rate)

### ch23-7: INEXACT

- Speaker: Brett Adcock
- Claim: Brett Adcock has been working as an entrepreneur for 20 years.
- TLDR: Adcock began entrepreneurial work at age 16 (around 2002), making his tenure closer to 24 years by 2026, not 20.
- Explanation: Brett Adcock was born on April 6, 1986, and is documented to have started building web and software companies from age 16, roughly around 2002. By the podcast's recording in early 2026, that is approximately 24 years of entrepreneurial activity. His claim of '20 years' is an undercount, though it may reflect a looser starting point such as founding Vettery (2012-2013). The long-tenure core of the statement is accurate, but the specific figure is imprecise.
- Sources:
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [High-tech entrepreneur Brett Adcock on Figure, Archer, and early success](https://newatlas.com/remarkable-people/brett-adcock-history/)

### ch22-1: TRUE

- Speaker: Shawn Ryan
- Claim: Robert Williams was killed by a Ford industrial robot in 1979.
- TLDR: Robert Williams was killed by an industrial robot arm at a Ford plant on January 25, 1979, making him the first known human fatality caused by a robot.
- Explanation: Multiple authoritative sources, including Wikipedia, Guinness World Records, and Britannica, confirm that Robert Williams was struck and killed by a one-ton robot arm at Ford's casting plant in Flat Rock, Michigan on January 25, 1979. The robot was built by Litton Industries and was part of a parts retrieval system. His family was ultimately awarded $15 million in the subsequent lawsuit.
- Sources:
  - [Robert Williams (robot fatality) - Wikipedia](https://en.wikipedia.org/wiki/Robert_Williams_(robot_fatality))
  - [First human killed by a robot | Guinness World Records](https://www.guinnessworldrecords.com/world-records/first-human-to-be-killed-by-a-robot)
  - [Today in History January 25 | Man Killed by a Robot for the First Time | Britannica](https://www.britannica.com/today-in-history/January-25-man-killed-by-a-robot)

### ch22-2: TRUE

- Speaker: Shawn Ryan
- Claim: A 2025 Unitree H1 malfunction went viral and demonstrated how violently a humanoid system can lose control.
- TLDR: A Unitree H1 robot did go viral in 2025 after violently malfunctioning during a factory demo in China, thrashing its limbs and forcing handlers to scramble.
- Explanation: Multiple credible outlets (New Atlas, Interesting Engineering, Fox News, Robotics and Automation News) covered the incident, which occurred around May 2025. The H1 was tethered during a test, causing a sensor feedback loop that made the robot flail uncontrollably, knocking over equipment.
No one was seriously injured, but the footage spread widely and triggered a public safety debate.
- Sources:
  - [Humanoid robot malfunctions in factory test: Unitree H1 flails near worker in viral video](https://roboticsandautomationnews.com/2025/05/08/ai-robot-attacks-worker-viral-video-shows-unitree-humanoid-going-berserk/90524/)
  - [Viral video shows humanoid robot malfunctioning at public event](https://newatlas.com/ai-humanoids/humanoid-robot-nearly-injures-handlers-unitree/)
  - [Humanoid robot malfunctions, sparks viral panic | Fox News](https://www.foxnews.com/tech/humanoid-robot-malfunctions-sparks-viral-panic)

### ch22-3: TRUE

- Speaker: Shawn Ryan
- Claim: Longstanding research has warned that robots in homes can create privacy and security vulnerabilities.
- TLDR: Longstanding research, including a University of Washington study from 2009, has explicitly warned that household robots create privacy and security vulnerabilities.
- Explanation: Multiple academic and institutional sources confirm this claim. A 2009 University of Washington study found that commercially available household robots had security weaknesses allowing interception of audio/video streams and unauthorized access. Subsequent research (arXiv, PMC journals) has continued documenting risks including malware infection, data theft, and remote takeover of home robots.
- Sources:
  - [Household Robots Do Not Protect Users' Security And Privacy, Researchers Say | ScienceDaily](https://www.sciencedaily.com/releases/2009/10/091008161900.htm)
  - [Household robots do not protect users' security and privacy, researchers say | UW News](https://www.washington.edu/news/2009/10/08/household-robots-do-not-protect-users-security-and-privacy-researchers-say/)
  - [Securing the Future: Exploring Privacy Risks and Security Questions in Robotic Systems](https://arxiv.org/html/2409.09972v1)
  - [Robotics cyber security: vulnerabilities, attacks, countermeasures, and recommendations - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC7978470/)

### ch22-4: TRUE

- Speaker: Shawn Ryan
- Claim: There is an ongoing global debate over autonomous weapons.
- TLDR: There is a well-documented, active global debate over autonomous weapons taking place in multiple international forums.
- Explanation: Formal multilateral discussions on autonomous weapons systems have been underway since at least 2012, involving the UN Convention on Certain Conventional Weapons, the UN General Assembly (which passed a resolution in December 2024 with 166 votes in favor), and summits like REAIM. Major institutions including SIPRI, the Arms Control Association, and the ICRC have extensively documented this ongoing debate.
- Sources:
  - [Dilemmas in the policy debate on autonomous weapon systems | SIPRI](https://www.sipri.org/commentary/topical-backgrounder/2025/dilemmas-policy-debate-autonomous-weapon-systems)
  - [Geopolitics and the Regulation of Autonomous Weapons Systems | Arms Control Association](https://www.armscontrol.org/act/2025-01/features/geopolitics-and-regulation-autonomous-weapons-systems)
  - [Diplomatic Debate Over Autonomous Weapons Heats Up | Arms Control Association](https://www.armscontrol.org/act/2024-04/news/diplomatic-debate-over-autonomous-weapons-heats)
  - [Understanding the Global Debate on Lethal Autonomous Weapons Systems: An Indian Perspective | Carnegie Endowment for International Peace](https://carnegieendowment.org/research/2024/08/understanding-the-global-debate-on-lethal-autonomous-weapons-systems-an-indian-perspective)

### ch22-5: TRUE

- Speaker: Brett Adcock
- Claim: AI systems that can be embodied and can use computers are currently being developed.
- TLDR: Both embodied AI (humanoid robots) and AI systems that operate computers are actively being developed by numerous companies as of 2025-2026.
- Explanation: Companies like Figure AI, Tesla, Unitree, and OpenAI are deploying or actively building humanoid robots (embodied AI), while Anthropic, OpenAI, and others have released AI agents capable of using computers. This is one of the most widely documented technology trends of 2025-2026, confirmed across industry reports, academic publications, and major news sources.
- Sources:
  - [Physical AI in 2026: How Embodied Intelligence Is Redefining Industrial Operations, Healthcare, and Smart Cities | TechAhead](https://www.techaheadcorp.com/blog/how-embodied-intelligence-redefining-industrial-operation/)
  - [Breakthroughs in Embodied AI Shaping 2025](https://vertu.com/ai-tools/embodied-ai-breakthroughs-2025-robotics-healthcare-logistics/)
  - [Omdia Market Radar: General-purpose Embodied Intelligent Robots, 2026](https://omdia.tech.informa.com/om143809/omdia-market-radar-generalpurpose-embodied-intelligent-robots-2026)

### ch22-6: INEXACT

- Speaker: Brett Adcock
- Claim: Humanoid robots (synthetic humans) are being built at scale.
- TLDR: Humanoid robot production is ramping up significantly but most companies are still in early commercial deployment, not true mass-scale production.
- Explanation: As of early 2026, multiple companies (Tesla, Boston Dynamics, Figure AI, and over 140 Chinese manufacturers) are moving from prototypes to commercial production. Boston Dynamics plans 30,000 units/year at a new facility, Tesla targets 50,000 Optimus units in 2026, and Figure AI has Figure 03 in alpha pilots. However, most deployments remain limited or in testing phases, and 2026 is described as 'Mass Production Year Zero,' meaning true mass scale has not yet been fully achieved.
- Sources:
  - [The Complete Guide to Humanoid Robots in 2026 | Tesla Optimus V3, Unitree H2, Boston Dynamics Atlas — The Full Picture of Mass Production Year Zero | TIMEWELL Inc.](https://timewell.jp/en/columns/humanoid-robot-2026)
  - [Humanoid Robots 2026: Tesla Optimus, Figure AI & Boston Dynamics Atlas](https://vfuturemedia.com/future-tech/humanoid-robots-enter-the-workforce-figure-boston-dynamics-and-tesla-optimus-2026/)
  - [Humanoids on the move: How 2025 became the breakthrough year for AI driven robotics – Ai Summit | Silicon Valley](https://techequity-ai.org/humanoids-on-the-move-how-2025-became-the-breakthrough-year-for-ai-driven-robotics/)

### ch22-7: TRUE

- Speaker: Shawn Ryan
- Claim: People are already seeking advice from AI tools like ChatGPT.
- TLDR: People are widely documented to be using ChatGPT and similar AI tools for personal advice, including relationship and mental health guidance.
- Explanation: Multiple sources confirm this trend. A Sentio University survey found 48.7% of AI users with mental health challenges use LLMs for therapeutic support, and OpenAI data shows non-work personal use reached 73% of conversations by mid-2025. A Stanford study (March 2026) specifically examined the dangers of people seeking interpersonal advice from AI chatbots, confirming the behavior is widespread.
- Sources:
  - [ChatGPT May Be the Largest Mental Health Provider in the U.S. | Sentio University](https://sentio.org/ai-research/ai-survey)
  - [Stanford study outlines dangers of asking AI chatbots for personal advice | TechCrunch](https://techcrunch.com/2026/03/28/stanford-study-outlines-dangers-of-asking-ai-chatbots-for-personal-advice/)
  - [How people are using ChatGPT | OpenAI](https://openai.com/index/how-people-are-using-chatgpt/)

### ch21-2: TRUE

- Speaker: Brett Adcock
- Claim: Adcock's team has been working on humanoid AI for 4 years.
- TLDR: Figure AI was founded by Adcock in 2022, making ~4 years of work on humanoid AI accurate as of the March 2026 recording.
- Explanation: Multiple sources confirm Brett Adcock founded Figure AI in 2022. With the podcast published on March 30, 2026, the claim of working on humanoid AI for 4 years aligns precisely with the company's founding timeline.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)

### ch21-3: FALSE

- Speaker: Brett Adcock
- Claim: Current frontier AI chatbots like Gemini and ChatGPT lack persistent memory and cannot see what the user is doing.
- TLDR: Both claims are incorrect. ChatGPT has had persistent memory since 2024 and screen sharing since December 2024, both well before this video aired in March 2026.
- Explanation: ChatGPT's memory feature rolled out broadly in 2024 and was significantly expanded in April 2025 to reference all past conversations. OpenAI also launched screen sharing and live video in Advanced Voice Mode in December 2024, enabling ChatGPT to see a user's screen in real time. Gemini similarly launched its own persistent memory and screen-awareness features. Both assertions Adcock makes about these limitations were already contradicted by publicly available product features at the time of the video.
- Sources:
  - [Memory and new controls for ChatGPT | OpenAI](https://openai.com/index/memory-and-new-controls-for-chatgpt/)
  - [OpenAI rolls out video and screen sharing for ChatGPT voice mode](https://www.axios.com/2024/12/12/chatgpt-video-screen-sharing-voice-chat)
  - [ChatGPT gets screensharing and real-time video analysis, rivaling Gemini 2 | VentureBeat](https://venturebeat.com/ai/chatgpt-gets-screensharing-and-real-time-video-analysis-rivaling-gemini-2)
  - [Gemini's New Memory Feature Update | by Kai | Medium](https://kaiwritesornot.medium.com/geminis-new-memory-feature-update-58c2872689a6)

### ch21-4: FALSE

- Speaker: Brett Adcock
- Claim: Current AI chatbots cannot use tools effectively, use the internet poorly, and cannot take autonomous actions for users such as placing an order.
- TLDR: By March 2026, AI systems could absolutely use tools, browse the web, and place orders. OpenAI Operator launched in January 2025 specifically to do tasks like ordering groceries and making restaurant reservations.
- Explanation: OpenAI launched Operator on January 23, 2025, enabling autonomous web actions including placing food orders through DoorDash, Instacart, and others. By August 2025, it was fully integrated into ChatGPT as 'agent mode'. Adcock's claim that AI chatbots cannot use tools or place orders was already contradicted by widely available, well-documented products well before the March 2026 recording date.
- Sources:
  - [OpenAI launches Operator—an agent that can use a computer for you | MIT Technology Review](https://www.technologyreview.com/2025/01/23/1110484/openai-launches-operator-an-agent-that-can-use-a-computer-for-you/)
  - [OpenAI introduces Operator to automate tasks such as vacation planning, restaurant reservations](https://www.cnbc.com/2025/01/23/openai-operator-ai-agent-can-automate-tasks-like-vacation-planning.html)
  - [Introducing Operator | OpenAI](https://openai.com/index/introducing-operator/)
  - [OpenAI Operator - Wikipedia](https://en.wikipedia.org/wiki/OpenAI_Operator)

### ch21-5: FALSE

- Speaker: Brett Adcock
- Claim: Current AI chatbots do not have access to users' accounts or personal information.
- TLDR: Multiple major AI assistants already access personal accounts and user data. Adcock's claim does not reflect the current state of the technology.
- Explanation: ChatGPT has persistent memory and third-party integrations, while Google Gemini integrates with Gmail, search history, and YouTube activity, and launched "Personal Intelligence" features in January 2026. Google even introduced tools in March 2026 to import personal context from rival chatbots directly into Gemini. These features directly contradict the assertion that AI chatbots lack access to accounts and personal information.
- Sources:
  - [You can now transfer your chats and personal information from other chatbots directly into Gemini | TechCrunch](https://techcrunch.com/2026/03/26/you-can-now-transfer-your-chats-and-personal-information-from-other-chatbots-directly-into-gemini/)
  - ['Sum up what you know about me' — I tried Gemini's new AI memory import tools and it instantly felt less generic](https://www.techradar.com/ai-platforms-assistants/gemini/i-used-geminis-new-ai-memory-importing-feature-and-now-it-knows-as-much-about-me-as-chatgpt)
  - [The Truth About AI Chatbot Data Privacy: What ChatGPT, Gemini, and Claude Really Do With Your Conversations](https://medium.com/@aftab001x/the-truth-about-ai-chatbot-data-privacy-what-chatgpt-gemini-and-claude-really-do-with-your-a4b46bfb8294)

### ch21-7: TRUE

- Speaker: Brett Adcock
- Claim: Adcock started a new AI lab called HARC last summer.
- TLDR: Brett Adcock did start an AI lab called Hark (transcribed as 'HARC') in stealth around summer 2025, publicly announced on March 24, 2026.
- Explanation: Adcock confirmed on X that Hark had been operating in stealth for 8 months before its March 24, 2026 announcement, placing its founding around July-August 2025 (summer 2025). The lab is focused on building next-generation multimodal AI and new human-AI interfaces, matching Adcock's description in the transcript. 'HARC' is a transcription artifact for the actual name 'Hark'.
- Sources:
  - [Hark Launches AI Lab Building Futuristic Interface to Artificial Intelligence](https://www.businesswire.com/news/home/20260324789327/en/Hark-Launches-AI-Lab-Building-Futuristic-Interface-to-Artificial-Intelligence)
  - [Exclusive: Figure CEO Brett Adcock Launches New AI Lab With $100 Million in Funding — The Information](https://www.theinformation.com/briefings/exclusive-figure-ceo-brett-adcock-launches-new-ai-lab-100-million-funding)
  - [Meet the former Apple designer building a new AI interface at Hark | TechCrunch](https://techcrunch.com/2026/03/24/meet-the-former-apple-designer-building-a-new-ai-interface-at-hark/)
  - [Brett Adcock on X](https://x.com/adcock_brett/status/2036461258443202810)

### ch21-8: TRUE

- Speaker: Brett Adcock
- Claim: HARC's goal is to design what comes after the iPhone for AI.
- TLDR: Brett Adcock did launch an AI lab called Hark, publicly described as building the next-generation interface to AI beyond the smartphone.
- Explanation: Hark was announced in March 2026 after eight months in stealth, funded by $100M of Adcock's personal capital. Its stated mission is to build vertically integrated AI models paired with new hardware, explicitly framed as a successor to the smartphone-era interface. The former Apple designer Abidur Chowdhury mentioned in the transcript is confirmed as Hark's head of design.
- Sources:
  - [Meet the former Apple designer building a new AI interface at Hark | TechCrunch](https://techcrunch.com/2026/03/24/meet-the-former-apple-designer-building-a-new-ai-interface-at-hark/)
  - [Hark Launches AI Lab Building Futuristic Interface to Artificial Intelligence](https://www.businesswire.com/news/home/20260324789327/en/Hark-Launches-AI-Lab-Building-Futuristic-Interface-to-Artificial-Intelligence)
  - [Brett Adcock Launches Hark AI Lab with $100M Personal Investment for Integrated Personal Intelligence](https://mlq.ai/news/brett-adcock-launches-hark-ai-lab-with-100m-personal-investment-for-integrated-personal-intelligence/)

### ch21-9: TRUE

- Speaker: Brett Adcock
- Claim: HARC is developing new extremely multimodal AI models.
- TLDR: HARC (Hark) is indeed an AI lab founded by Brett Adcock that is developing multimodal AI models, publicly announced on March 24, 2026.
- Explanation: Multiple sources confirm Adcock launched Hark as an AI lab focused on building multimodal AI models (combining speech, text, vision, and contextual awareness) alongside new hardware. The company was announced publicly just days before this podcast aired, aligning with Adcock's claim of starting the lab 'last summer.'
- Sources:
  - [Hark Launches AI Lab Building Futuristic Interface to Artificial Intelligence](https://www.businesswire.com/news/home/20260324789327/en/Hark-Launches-AI-Lab-Building-Futuristic-Interface-to-Artificial-Intelligence)
  - [Meet the former Apple designer building a new AI interface at Hark | TechCrunch](https://techcrunch.com/2026/03/24/meet-the-former-apple-designer-building-a-new-ai-interface-at-hark/)
  - [Hark develops multimodal models and hardware, promises seamless personal intelligence | Ukraine news - #Mezha](https://mezha.net/eng/bukvy/hark_develops_multimodal/)

### ch21-10: INEXACT

- Speaker: Brett Adcock
- Claim: The lead designer from the iPhone, Abidur, who designed iPhone 15, 16, and 17, is on the HARC team.
- TLDR: Abidur Chowdhury, an Apple iPhone designer, is indeed on the Hark team, but sources credit him with the iPhone Air specifically, not iPhones 15, 16, and 17. The company name is 'Hark,' not 'HARC' (transcript error).
- Explanation: Multiple credible sources confirm that Abidur Chowdhury, a lead Apple industrial designer, joined Brett Adcock's AI company Hark as Head of Design. However, sources describe him as the lead designer behind the iPhone Air (announced in 2025), not iPhones 15, 16, and 17 as stated in the claim. He joined Apple in 2019, so involvement in iPhone 15 is plausible, but the specific model attribution is not supported by available evidence.
- Sources:
  - [Meet the former Apple designer building a new AI interface at Hark | TechCrunch](https://techcrunch.com/2026/03/24/meet-the-former-apple-designer-building-a-new-ai-interface-at-hark/)
  - [Figure AI Founder and iPhone Air Designer Team Up on Mystery AI Product](https://gizmodo.com/figure-ai-founder-and-iphone-air-designer-team-up-on-ai-mystery-product-2000737198)
  - [Hark Launches AI Lab Building Futuristic Interface to Artificial Intelligence](https://www.businesswire.com/news/home/20260324789327/en/Hark-Launches-AI-Lab-Building-Futuristic-Interface-to-Artificial-Intelligence)
  - [Mystery AI lab that poached iPhone Air designer revealed - 9to5Mac](https://9to5mac.com/2026/01/09/iphone-air-designer-left-for-hark-ai-lab/)

### ch21-11: TRUE

- Speaker: Brett Adcock
- Claim: HARC will produce a family of devices rather than a single product.
- TLDR: Adcock confirmed Hark (transcribed as 'HARC') will produce a family of AI devices, not a single product.
- Explanation: Brett Adcock announced Hark on March 24, 2026. Bloomberg directly quotes him: 'We're working on a family of AI devices both for yourself and for the home.' Multiple outlets confirmed this family-of-devices strategy. The transcript's 'HARC' is simply an auto-transcription error for 'Hark'.
- Sources:
  - [Figure AI Founder's New Startup Hark Is Latest to Plan Family of AI Devices - Bloomberg](https://www.bloomberg.com/news/articles/2026-03-24/figure-ai-founder-s-new-startup-hark-is-latest-to-plan-family-of-ai-devices)
  - [Figure AI Founder Bets on 'Family' of AI Devices with New Venture Hark](https://www.eweek.com/news/brett-adcock-hark-ai-devices/)
  - [Hark Launches AI Lab Building Futuristic Interface to Artificial Intelligence](https://www.businesswire.com/news/home/20260324789327/en/Hark-Launches-AI-Lab-Building-Futuristic-Interface-to-Artificial-Intelligence)

### ch21-12: TRUE

- Speaker: Brett Adcock
- Claim: HARC's devices are designed to replace the phone and computer.
- TLDR: Adcock's AI hardware startup Hark (transcribed as 'HARC') is explicitly designed to replace phones and computers with native AI devices.
- Explanation: Brett Adcock launched Hark in March 2026, self-funding it with $100M. The company is building a 'family of AI devices' described as distinct from existing handsets, wearables, and smart glasses, with the stated goal of replacing pre-AI devices like phones and computers. The transcript's 'HARC' is an auto-generated transcription error for 'Hark'.
- Sources:
  - [Figure AI Founder's New Startup Hark Is Latest to Plan Family of AI Devices - Bloomberg](https://www.bloomberg.com/news/articles/2026-03-24/figure-ai-founder-s-new-startup-hark-is-latest-to-plan-family-of-ai-devices)
  - [Figure AI Founder Bets on 'Family' of AI Devices with New Venture Hark](https://www.eweek.com/news/brett-adcock-hark-ai-devices/)
  - [Meet the former Apple designer building a new AI interface at Hark | TechCrunch](https://techcrunch.com/2026/03/24/meet-the-former-apple-designer-building-a-new-ai-interface-at-hark/)

### ch21-13: TRUE

- Speaker: Brett Adcock
- Claim: HARC is currently self-funded by Adcock.
- TLDR: Hark (transcribed as 'HARC') is confirmed to be self-funded by Brett Adcock with $100 million of his personal capital.
- Explanation: Multiple sources, including Adcock's own X announcement and reporting from TechCrunch and The Information, confirm that Hark AI Lab was entirely self-funded by Adcock. The lab operated in stealth for 8 months before coming out publicly around the time of this podcast recording. The auto-transcription rendered the name as 'HARC' instead of 'Hark'.
- Sources:
  - [Brett Adcock on X: "Today I'm excited to introduce Hark..."](https://x.com/adcock_brett/status/2036461258443202810)
  - [Meet the former Apple designer building a new AI interface at Hark | TechCrunch](https://techcrunch.com/2026/03/24/meet-the-former-apple-designer-building-a-new-ai-interface-at-hark/)
  - [Exclusive: Figure CEO Brett Adcock Launches New AI Lab With $100 Million in Funding — The Information](https://www.theinformation.com/briefings/exclusive-figure-ceo-brett-adcock-launches-new-ai-lab-100-million-funding)

### ch21-14: TRUE

- Speaker: Brett Adcock
- Claim: Adcock was born and raised on a farm.
- TLDR: Adcock was born and raised on a third-generation corn and soybean farm in Moweaqua, Illinois.
- Explanation: Multiple sources, including his official bio and Wikipedia, confirm he grew up on a third-generation agricultural farm outside Moweaqua, Illinois. The claim is well-documented across independent outlets.
- Sources:
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)
  - [Bio | Brett Adcock Official](https://www.brettadcock.com/bio)

### ch21-15: TRUE

- Speaker: Brett Adcock
- Claim: Adcock has billions of dollars in funding behind his companies.
- TLDR: Figure AI alone has raised approximately $1.9 billion in funding, and Archer Aviation secured a $1.5 billion United Airlines partnership, confirming Adcock has billions in backing.
- Explanation: Figure AI raised over $1.9 billion across multiple rounds (including a $675M Series B in 2024 and a $1B+ Series C in 2025). Archer Aviation separately secured a $1.5 billion partnership with United Airlines.
Across his ventures, Adcock's claim of 'billions behind it' is clearly accurate.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Figure Exceeds $1B in Series C Funding at $39B Post-Money Valuation – Intel Capital](https://www.intelcapital.com/figure-exceeds-1b-in-series-c-funding-at-39b-post-money-valuation/)
  - [Brett Adcock - Wikipedia](https://en.wikipedia.org/wiki/Brett_Adcock)

### ch18-1: TRUE

- Speaker: Brett Adcock
- Claim: Figure robots run 24/7 shifts and communicate with each other during operations.
- TLDR: Both claims are verified. Figure robots run 24/7 autonomous shifts and coordinate with each other for battery-swap handoffs.
- Explanation: Brett Adcock has publicly described Figure robots running 24/7 unattended shifts at Figure's own facility, including a logistics use case and office greeter robots operating all day, all night, and on weekends. The inter-robot communication is specifically demonstrated by the battery handoff behavior: when one robot's charge drops low, another robot is alerted, walks over behind it, and is ready to swap in within seconds, exactly as described in the transcript.
- Sources:
  - [Brett Adcock: 24/7 autonomous robot operation challenges persist](https://tradersunion.com/news/billionaires/show/1527685-autonomous-robot-operations/)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [High-tech renaissance man Brett Adcock on Figure's humanoid robots](https://newatlas.com/robotics/brett-adcock-interview-figure-robot/)

### ch18-2: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: When a Figure robot's battery reaches about 10%, another robot prepares to substitute by walking over and waiting behind it, then takes over when the low-battery robot backs away.
- TLDR: Brett Adcock describes an internal robot-to-robot battery handoff process at Figure. No public sources confirm or deny this specific operational detail.
- Explanation: Public information confirms Figure 03 supports autonomous wireless inductive charging and docking, and that Figure robots have been deployed in multi-robot fleet operations. However, the specific behavior described (10% battery threshold triggering a second robot to walk over and wait, then take over when the low-battery robot backs away) is an internal operational claim with no independent third-party documentation found.
- Sources:
  - [Introducing Figure 03](https://www.figure.ai/news/introducing-figure-03)
  - [F.02 Contributed to the Production of 30,000 Cars at BMW](https://www.figure.ai/news/production-at-bmw)

### ch18-3: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure robots with hardware or software problems autonomously navigate to a designated hospital area in the facility, and a healthy robot moves in to take over the work station.
- TLDR: Adcock publicly describes this system in the same interview, and external reporting confirms the 'hospital' routing concept, but the internal operational claim cannot be independently verified.
- Explanation: A Humanoids Daily article covering the same Shawn Ryan Show interview confirms Adcock described robots autonomously 'limping to the hospital' when they lose a joint or communications link, as part of Figure's lights-out Sunnyvale facility. However, the specific detail about a healthy replacement robot proactively moving in to take over the workstation is drawn solely from Adcock's own account and is not corroborated by any independent third-party source or demonstration. As an internal operational process at a private facility, it cannot be verified externally.
- Sources:
  - ["I Fired Them": Brett Adcock on the OpenAI Split, Robot Self-Repair, and the Kid-Safety Test | Humanoids Daily](https://www.humanoidsdaily.com/news/i-fired-them-brett-adcock-on-the-openai-split-robot-self-repair-and-the-kid-safety-test)

### ch18-4: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: All coordination between Figure robots, including shift changes and malfunction responses, happens through robot-to-robot communication.
- TLDR: Brett Adcock is describing Figure AI's internal robot operations technology, which has not been independently documented or verified publicly.
- Explanation: No public sources confirm or deny the specific claim that Figure robots use robot-to-robot communication to manage shift changes and malfunction responses. Figure AI's news page covers product announcements and general capabilities but does not document this multi-robot coordination system. As a first-person claim about a proprietary internal operational system, it cannot be confirmed or denied by third-party sources.
- Sources:
  - [News | Figure](https://www.figure.ai/news)

### ch18-5: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: A year or two ago, losing communications with a motor such as a knee joint would cause a Figure robot to fall.
- TLDR: This is an internal engineering claim about Figure's own robot development history that has no public documentation.
- Explanation: Adcock is describing a private, internal engineering milestone: that early Figure robots would fall when losing knee-joint communications, and that this has since been fixed. No public technical disclosures, papers, or news coverage document this specific historical behavior of Figure's robots. General research on fault-tolerant locomotion confirms that joint communication loss is a known challenge in humanoid/legged robots, but whether Figure's specific past robots fell under that condition cannot be confirmed or denied from any available public source.

### ch18-6: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure robots can now lose full communications with a knee motor, stiffen the joint, and limp autonomously to the hospital area without falling.
- TLDR: This is an internal technical capability claim made solely by Figure's CEO with no independent verification. No public demo or third-party confirmation exists yet.
- Explanation: Brett Adcock describes this fault-tolerance behavior as an internal achievement, and he himself noted during the interview that he would post a public demonstration 'in the next week.' The Humanoids Daily article covering the claim simply recaps his statements from this same interview without any independent corroboration, test footage, or technical documentation. Until Figure releases a public demo or a third party verifies it, the claim cannot be confirmed or denied.
- Sources:
  - ["I Fired Them": Brett Adcock on the OpenAI Split, Robot Self-Repair, and the Kid-Safety Test | Humanoids Daily](https://www.humanoidsdaily.com/news/i-fired-them-brett-adcock-on-the-openai-split-robot-self-repair-and-the-kid-safety-test)

### ch18-7: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Figure robots operate autonomously 24/7 and can execute robot-to-robot coordination, including malfunction responses, without human involvement at any hour including 3am.
- TLDR: Figure's autonomous and robot-to-robot coordination capabilities are publicly documented, but the specific operational claim of fully unattended 24/7 running with zero human involvement is an internal company assertion that cannot be independently verified.
- Explanation: Figure AI's Helix 02 system demonstrates implicit error recovery across 60+ sequential actions and robot-to-robot coordination without human resets, confirming the underlying technical capabilities. However, Adcock is describing what is happening inside his own facilities at night, which is a first-person account of internal operations.
No independent source confirms that Figure's production deployments run completely without any human oversight at all hours, making the full claim unverifiable by third parties.
- Sources:
  - [Introducing Helix 02: Full-Body Autonomy](https://www.figure.ai/news/helix-02)
  - [Helix: A Vision-Language-Action Model for Generalist Humanoid Control](https://www.figure.ai/news/helix)
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)

### ch18-8: UNVERIFIABLE

- Speaker: Brett Adcock
- Claim: Brett Adcock witnessed Figure's autonomous robot-to-robot operation within the last few months, describing it as a current reality rather than a future capability.
- TLDR: This is a first-person account of an internal company observation at Figure AI. It cannot be independently confirmed or denied by third parties.
- Explanation: Adcock's claim that he personally witnessed autonomous robot-to-robot coordination at Figure within the last few months is an internal event with no public documentation. Public sources confirm Figure AI has been advancing autonomous robot capabilities (Helix AI system, Figure 03, BMW deployments), but no external source corroborates the specific robot-to-robot behavior he describes witnessing. This falls squarely in the category of a private, first-person observation that is inherently unverifiable.
- Sources:
  - [Figure AI - Wikipedia](https://en.wikipedia.org/wiki/Figure_AI)
  - [Figure Unveils Next-Gen Conversational Humanoid Robot With 3x AI Computing for Fully Autonomous Tasks | NVIDIA Blog](https://blogs.nvidia.com/blog/figure-humanoid-robot-autonomous/)

### ch18-9: TRUE

- Speaker: Brett Adcock
- Claim: Asian electronics manufacturers have been running complex, multi-station manufacturing lines for several decades.
- TLDR: Asian electronics manufacturers have indeed been running complex manufacturing lines for several decades, with Japan pioneering from the postwar era and others following from the 1960s onward.
- Explanation: Japan established complex electronics assembly lines in the postwar decades, Taiwan entered semiconductor assembly in the 1960s, South Korea scaled up in the 1980s, and China became a dominant assembler in the 1990s-2000s. This gives the region a multi-decade history of sophisticated, high-rate manufacturing lines, consistent with Adcock's claim.
- Sources:
  - [Learning and technological progress in the East Asian electronics industry | CEPR](https://cepr.org/voxeu/columns/learning-and-technological-progress-east-asian-electronics-industry)
  - [Asia's Role in the Four Industrial Revolutions - Association for Asian Studies](https://www.asianstudies.org/publications/eaa/archives/asias-role-in-the-four-industrial-revolutions/)
  - [Understanding Asian Manufacturing | Blog | Kingstec](https://kingstec.com/the-evolution-of-asian-manufacturing-innovation-and-global-dominance/)

### ch18-10: TRUE
- Speaker: Brett Adcock
- Claim: Figure has a campus in the Bay Area where it currently manufactures humanoid robots.
- TLDR: Figure AI is headquartered in Sunnyvale, CA (Bay Area) and manufactures humanoid robots there, with a new expanded campus in San Jose also under development.
- Explanation: Figure AI's main campus is in Sunnyvale, California, which is in the Bay Area, and the company has explicitly announced in-house humanoid robot manufacturing at that location. Brett Adcock has publicly discussed plans to launch new Bay Area facilities every 90 days to scale production, and Figure introduced its BotQ high-volume manufacturing facility for humanoids. All of this corroborates the claim.
- Sources:
  - [BotQ: A High-Volume Manufacturing Facility for Humanoid Robots](https://www.figure.ai/news/botq)
  - [Figure AI unveils BotQ high-volume humanoid manufacturing facility - The Robot Report](https://www.therobotreport.com/figure-ai-unveils-botq-high-volume-humanoid-manufacturing-facility/)
  - ['New Buildings Launching Every 90 Days': Figure's New Campus for Producing Humanoids | Mike Kalil](https://mikekalil.com/blog/figure-ai-campus/)

### ch18-11: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: 7 Figure robots perform end-of-line checkout autonomously for about an hour and a half after coming off the production line.
- TLDR: Figure's California BotQ facility does use autonomous robot testing on the production line, but the specific details (7 robots, 1.5 hours) are internal operational metrics not found in any public source.
- Explanation: Figure AI's BotQ facility page confirms automated quality control and testing processes exist within their California manufacturing operation, including robots participating in their own production line. However, no public source documents the specific number of 7 robots performing end-of-line checkout or the 1.5-hour duration mentioned by Adcock. These are granular internal production details that cannot be independently confirmed or denied.
- Sources:
  - [BotQ: A High-Volume Manufacturing Facility for Humanoid Robots](https://www.figure.ai/news/botq)

### ch18-12: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: Figure robots perform their own burn-ins and end-of-line checks, including self-calibration and physical movements like burpees, to verify they are functional after manufacturing.
- TLDR: Figure's public documentation confirms manufacturing quality checks and reliability testing at BotQ, but the specific details (robots doing 'burpees,' self-inspection of each other) are internal process descriptions with no independent public record.
- Explanation: Figure's official BotQ factory announcement describes accelerated lifecycle testing, a Manufacturing Execution System for quality traceability, and self-calibration via visual proprioception. However, the specific end-of-line sequence Adcock describes, including robots performing physical movements like 'burpees' and inspecting each other, is an internal operational detail not documented in any publicly accessible source. As a first-person account of proprietary manufacturing processes, it cannot be independently confirmed or denied.
- Sources:
  - [BotQ: A High-Volume Manufacturing Facility for Humanoid Robots](https://www.figure.ai/news/botq)
  - [Figure AI unveils BotQ high-volume humanoid manufacturing facility - The Robot Report](https://www.therobotreport.com/figure-ai-unveils-botq-high-volume-humanoid-manufacturing-facility/)

### ch18-13: TRUE
- Speaker: Brett Adcock
- Claim: With traditional code, folding laundry was impossible for robots because compliant materials that move when touched cannot be modeled.
- TLDR: Robotics researchers broadly confirm that traditional code-based programming cannot handle deformable/compliant objects like laundry. Neural network approaches have now broken through this barrier.
- Explanation: Multiple credible sources (NPR, Berkeley News, Knowable Magazine, IEEE) confirm that existing computer-vision and rule-based programming techniques were developed for rigid objects and are inadequate for deformable materials, whose infinite configuration space cannot be reliably modeled. Figure AI's own Helix system is cited as the first humanoid to fold laundry autonomously using an end-to-end neural network, validating the claim that the shift away from traditional code was the key enabler.
- Sources:
  - [Need laundry folded? Don't ask a robot](https://knowablemagazine.org/content/article/technology/2025/why-robots-cant-fold-laundry)
  - [Helix Learns to Fold Laundry](https://www.figure.ai/news/helix-learns-to-fold-laundry)
  - [The fastest ever laundry-folding robot is here. And it's likely still slower than you](https://www.npr.org/2022/10/22/1130552239/robot-folding-laundry)
  - [Researchers develop a robot that folds towels - Berkeley News](https://news.berkeley.edu/2010/04/02/robot/)

### ch18-14: INEXACT
- Speaker: Brett Adcock
- Claim: The reason package logistics has not been widely automated is that soft, compliant bags are too hard to model with traditional robotics code.
- TLDR: Soft, deformable packages are a well-documented robotics challenge, but they are one of several reasons logistics automation has lagged, not the sole cause.
- Explanation: Multiple industry and academic sources confirm that soft, compliant bags (polybags, mailers) are genuinely difficult for traditional robotics to handle because their deformable, non-rigid nature makes modeling and gripping extremely challenging. Even Boston Dynamics' Stretch robot reportedly cannot pick up soft-sided bags. However, the sources also point to other barriers such as high upfront costs, integration complexity, and sensor/vision limitations, making Adcock's framing of it as *the* reason an oversimplification of a multi-factor problem.
- Sources:
  - [Soft Robotics Solves Costly Reverse Logistics Problem in E-commerce | Business Wire](https://www.businesswire.com/news/home/20200304005331/en/Soft-Robotics-Solves-Costly-Reverse-Logistics-Problem-in-E-commerce)
  - [Robots Aim to Tackle the Hardest Job in Warehousing](https://www.eweek.com/news/robotics-automated-loading-unloading-warehouse/)
  - [ZHU et al.: CHALLENGES AND OUTLOOKS IN DEFORMABLE OBJECT MANIPULATION](https://arm.robotics.umich.edu/download.php?p=103)
  - [Robotics in Industry 4.0 – Five Major Challenges for the Packaging Industry | Automation World](https://www.automationworld.com/factory/robotics/article/13319394/robotics-in-industry-40-five-major-challenges-for-the-packaging-industry)

### ch18-15: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: When Figure applied neural nets to compliant material handling tasks, the robots worked almost instantly.
- TLDR: Figure AI's use of neural nets for compliant package handling is publicly documented and successful, but the claim that results came 'basically instantly' is an internal development anecdote that cannot be independently verified.
- Explanation: Public sources confirm Figure AI applied neural networks (its Helix model) to handle deformable, compliant packages and achieved strong results, including adaptive grasping of soft mailers and near-human-level speed. However, the specific assertion that robots worked 'basically instantly' upon switching to neural nets describes an internal engineering timeline and experience, which no third-party source can confirm or deny.
- Sources:
  - [Helix Accelerating Real-World Logistics](https://www.figure.ai/news/helix-logistics)
  - [Scaling Helix: a New State of the Art in Humanoid Logistics](https://www.figure.ai/news/scaling-helix-logistics)

### ch18-16: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: Figure's logistics customer required soft packages, some hard inside and some squishy, to be located, barcoded, and placed in the middle of a conveyor every 3 seconds.
- TLDR: The general task description (variable hard/squishy packages, barcode scanning, conveyor placement) is corroborated by public Figure AI materials, but the specific '3 seconds' customer requirement is a private business detail that cannot be independently verified.
- Explanation: Figure AI's published logistics documentation confirms robots handle packages with varying rigidity (rigid boxes to deformable bags), scan barcodes, and place items on a moving conveyor, matching the task description Adcock gave. However, the specific customer-mandated throughput target of one package every 3 seconds comes from a private business negotiation and is not publicly documented. Publicly reported performance metrics show Figure's robots achieving roughly 4 to 5 seconds per package in demos, which is close but not the same as the claimed 3-second requirement.
- Sources:
  - [Helix Accelerating Real-World Logistics](https://www.figure.ai/news/helix-logistics)
  - [Scaling Helix: a New State of the Art in Humanoid Logistics](https://www.figure.ai/news/scaling-helix-logistics)
  - [Figure 02 Humanoid Robot Sorts Packages for Full Hour: 95% Barcode Accuracy in New Demo - Gear Musk](https://gearmusk.com/2025/06/09/figure-02-robot-sorts-packages/)

### ch18-17: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: After collecting data and training a neural net policy for the soft package task, Figure's robot successfully handled it right away.
- TLDR: This is a first-person account of an internal Figure AI experiment that cannot be independently verified.
Publicly available information confirms Figure uses neural net policies for logistics tasks, but not whether the robot succeeded immediately on a specific soft-package test.
- Explanation: Brett Adcock is recounting a private internal milestone at Figure AI, specifically that a trained neural net policy worked on a soft-package logistics task on the first attempt. While Figure AI has publicly documented its Helix neural network achieving strong logistics performance (including handling soft packages), there is no public record confirming or denying that this particular task succeeded 'right away' after initial training. First-person accounts of internal company experiments are inherently unverifiable.
- Sources:
  - [Helix Accelerating Real-World Logistics](https://www.figure.ai/news/helix-logistics)
  - [Scaling Helix: a New State of the Art in Humanoid Logistics](https://www.figure.ai/news/scaling-helix-logistics)

### ch18-18: TRUE
- Speaker: Brett Adcock
- Claim: Neural nets perform extremely well in high-variability, diverse environments and can learn representations across a wider distribution than traditional code.
- TLDR: Neural networks outperforming hand-coded systems in high-variability environments is a well-established principle in machine learning and robotics research.
- Explanation: The claim reflects a foundational concept in deep learning: neural networks learn abstract, compact representations from data that allow them to generalize across wide input distributions, something that explicit rule-based or traditional code cannot easily replicate. Multiple academic sources and surveys confirm that learning-based approaches excel precisely in complex, unstructured, high-variability environments where hand-engineering all possible cases is infeasible. Adcock's statement is a standard characterization of representation learning's advantage in robotics.
- Sources:
  - [Representations and generalization in artificial and brain neural networks - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC11228472/)
  - [Generalization in neural networks: a broad survey](https://arxiv.org/html/2209.01610v3)

### ch18-19: TRUE
- Speaker: Brett Adcock
- Claim: Deep learning works on humanoid hardware for tasks like folding t-shirts, towels, and handling packages, executing real-time replanning as objects move.
- TLDR: Figure AI's humanoid robots have demonstrated real-time deep learning for folding towels, t-shirts, and sorting packages using their Helix model.
- Explanation: Figure AI's Helix model is a Vision-Language-Action end-to-end deep learning system that enables Figure 02 and Figure 03 robots to fold laundry (including towels and t-shirts) and sort packages autonomously in real time, adapting to deformable objects as they move. Multiple sources confirm the system runs on embedded hardware without explicit object representations, matching Adcock's description. One source notes t-shirt folding is still imperfect on Figure 03, but the core claim holds.
- Sources:
  - [Helix Learns to Fold Laundry](https://www.figure.ai/news/helix-learns-to-fold-laundry)
  - [Figure humanoid robot uses Helix AI brain to fold laundry smoothly](https://interestingengineering.com/innovation/humanoid-robot-uses-helix-ai-to-fold-towels)
  - [Wild Video Shows Humanoid Robot Effortlessly Folding Laundry](https://futurism.com/video-humanoid-robot-laundry)

### ch20-1: TRUE
- Speaker: Shawn Ryan
- Claim: The Covenant School shooter in Nashville had previously attended the school but was not enrolled there at the time of the shooting.
- TLDR: The Covenant School shooter, Audrey/Aiden Hale, had attended the school as a child (roughly 2001-2006) but was 28 years old and not enrolled at the time of the March 2023 shooting.
- Explanation: Multiple sources confirm the shooter was a former student of The Covenant School who attended in elementary school years before the attack. At the time of the shooting, the perpetrator was 28 years old and had no current enrollment at the school. Shawn Ryan's statement accurately reflects this publicly reported fact.
- Sources:
  - [2023 Nashville school shooting - Wikipedia](https://en.wikipedia.org/wiki/2023_Nashville_school_shooting)
  - [Nashville school shooting: What we know about the shooter and six victims at The Covenant School - CBS News](https://www.cbsnews.com/news/nashville-school-shooting-details-victims-investigation-latest/)

### ch20-2: UNSUBSTANTIATED
- Speaker: Brett Adcock
- Claim: Tens of thousands of guns are brought into US schools, across 130,000 schools, every year.
- TLDR: The 130,000 schools figure is accurate, but 'tens of thousands of guns' brought into schools annually is Adcock's own unverified estimate, not an established statistic.
- Explanation: NCES data confirms roughly 130,000 K-12 schools in the US. However, confirmed gun seizures in schools run at around 1,150 per year (Washington Post, 2022-23), with federal reporting going back to ~1,576 in 2015-16. Experts universally agree these figures undercount the real total due to poor reporting, but no credible source has established a figure in the 'tens of thousands.' Adcock himself frames this as his company's hypothesis ('we actually think... perhaps'), and no independent data corroborates that specific scale.
- Sources:
  - [The number of guns found in U.S. schools has spiked, The Post found](https://www.washingtonpost.com/education/2023/10/10/guns-schools-us-increased-prevention-violence/)
  - [Nobody knows how many kids get caught with guns in school. Here's why.](https://www.pbs.org/newshour/nation/nobody-knows-how-many-kids-get-caught-with-guns-in-school-heres-why)
  - [Fast Facts: Educational institutions (84)](https://nces.ed.gov/fastfacts/display.asp?id=84)

### ch20-3: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: A very small percentage of guns brought into US schools are found.
- TLDR: No reliable detection rate exists for guns brought to US schools, but available data is broadly consistent with the claim.
- Explanation: By definition, undetected guns cannot be counted, so no authoritative statistic exists for the percentage of school firearms that go undetected. However, the gap between self-reported carrying (about 3% of high school students in 2021, per NCES/YRBSS) and official school detections (approximately 10 per 100,000 students, or 0.01%) is enormous, broadly consistent with a very low detection rate. Experts and investigations consistently acknowledge that official figures represent only a small fraction of actual incidents, but a precise, validated detection rate cannot be established.
- Sources:
  - [COE - Students Carrying Weapons and Students' Access to Firearms](https://nces.ed.gov/programs/coe/indicator/a13/student-weapons-firearms)
  - [The number of guns found in U.S. schools has spiked, The Post found - The Washington Post](https://www.washingtonpost.com/education/2023/10/10/guns-schools-us-increased-prevention-violence/)
  - [Nobody knows how many kids get caught with guns in school. Here's why. | PBS News](https://www.pbs.org/newshour/nation/nobody-knows-how-many-kids-get-caught-with-guns-in-school-heres-why)

### ch20-4: UNSUBSTANTIATED
- Speaker: Brett Adcock
- Claim: A similarly small percentage of guns found in US schools are actually reported to authorities.
- TLDR: There is documented underreporting of guns found in schools, but no specific data confirms that only a 'small percentage' of found guns are reported to authorities.
- Explanation: Evidence confirms significant underreporting exists: a Washington Post investigation found 58% of gun seizures in the 51 largest school districts were never publicly reported by media, and schools are often not legally required to report incidents to law enforcement. However, this data concerns media coverage rather than reporting to authorities, and no specific statistic supports Adcock's claim that only a 'similarly small percentage' of found guns are reported to law enforcement. The assertion appears to reflect Adcock's perspective from building Cover, not a verified, sourced statistic.
- Sources:
  - [The number of guns found in U.S. schools has spiked, The Post found - The Washington Post](https://www.washingtonpost.com/education/2023/10/10/guns-schools-us-increased-prevention-violence/)
  - [More than 6 guns were seized daily in US schools last year, report says](https://www.ksat.com/news/2023/10/10/more-than-6-guns-were-seized-daily-in-us-schools-last-year-report-says/)
  - [How Can We Prevent Gun Violence in American Schools?](https://everytownresearch.org/report/how-can-we-prevent-gun-violence-in-schools/)

### ch20-5: UNSUBSTANTIATED
- Speaker: Brett Adcock
- Claim: Potentially hundreds of thousands of guns are being brought into US schools every year.
- TLDR: Official seizure data shows roughly 1,150+ guns confiscated from US schools annually, but the total including undetected guns is unknown. Survey data hints at much higher numbers but cannot confirm "hundreds of thousands."
- Explanation: The Washington Post (2022-23) documented over 1,150 gun seizures at K-12 schools, while noting that 58% of seizures in major districts went unreported. CDC/YRBS survey data shows ~3% of high school students reported carrying a weapon to school in any given month, which could theoretically support estimates in the hundreds of thousands, but this figure covers weapons broadly (not just firearms) and is self-reported.
No authoritative study has estimated the annual total of all guns (detected and undetected) entering US schools, making Adcock's upper-bound figure of "hundreds of thousands" speculative and unverifiable.
- Sources:
  - [The number of guns found in U.S. schools has spiked, The Post found](https://www.washingtonpost.com/education/2023/10/10/guns-schools-us-increased-prevention-violence/)
  - [Nobody knows how many kids get caught with guns in school. Here's why.](https://www.pbs.org/newshour/nation/nobody-knows-how-many-kids-get-caught-with-guns-in-school-heres-why)
  - [COE - Students Carrying Weapons and Students' Access to Firearms](https://nces.ed.gov/programs/coe/indicator/a13/student-weapons-firearms)
  - [Percent of high school students who carried a weapon, by demographics | Office of Juvenile Justice and Delinquency Prevention](https://ojjdp.ojp.gov/statistical-briefing-book/offending-by-youth/faqs/qa03504)

### ch20-6: INEXACT
- Speaker: Brett Adcock
- Claim: The Cover system operates at radio frequencies of around 200 to 300 gigahertz and 600 gigahertz, with FCC restrictions or atmospheric attenuation limiting use in the bands between those ranges.
- TLDR: The technology does operate in two terahertz windows near those frequencies, and atmospheric attenuation plus FCC rules do limit in-between bands, but the specific numbers are approximate.
- Explanation: Established terahertz imaging literature identifies operational windows at roughly 220 GHz and 340 GHz (not a single '200-300 GHz' band) and around 650 GHz (not exactly 600 GHz), with atmospheric absorption from water vapor and oxygen creating attenuation gaps between them. ETSI standards specifically recommend 300-400 GHz and 600-700 GHz for terahertz imaging devices, and the FCC does regulate which frequencies can be used. The core description of two usable windows separated by an atmospheric attenuation gap, with FCC constraints also in play, is well-supported by the technical literature.
- Sources:
  - [Assessment of Millimeter-Wave and Terahertz Technology for Detection and Identification of Concealed Explosives and Weapons | The National Academies Press](https://www.nationalacademies.org/read/11826/chapter/4)
  - [ETSI TR 104 096 V1.1.1 (2025-06) System Reference document (SRdoc)](https://www.etsi.org/deliver/etsi_tr/104000_104099/104096/01.01.01_60/tr_104096v010101p.pdf)
  - [Cambridge Terahertz Inc. Spectrum Horizons Experimental License](https://apps.fcc.gov/els/GetAtt.html?id=340810&x=.)

### ch20-8: FALSE
- Speaker: Brett Adcock
- Claim: There is no established industry for passive weapon detection technology aimed at preventing school shootings, as experts in the terahertz space are focused on weather and space applications rather than security.
- TLDR: Multiple companies actively work on passive and terahertz-based weapon detection for schools and security. The industry exists and is growing.
- Explanation: Evolv Technology explicitly markets AI-driven weapon detection to schools. Cambridge Terahertz (founded 2021, NSF-funded) targets schools and transit. ThruVision (UK) has commercialized passive terahertz imaging for security for over a decade. TeraSense, Sequestim, and Lassen Peak also operate in terahertz-based security screening. The terahertz security market is projected to grow at a 15.8% CAGR to $1.71B by 2030, directly contradicting both the claim of no established industry and the claim that terahertz experts focus only on weather and space.
- Sources:
  - [Terahertz Imaging Poised to Shake Up Weapons Detection | Security Info Watch](https://www.securityinfowatch.com/perimeter-security/threat-detection-imaging-inspection/article/55275469/terahertz-imaging-poised-to-shake-up-weapons-detection)
  - [Home | Cambridge Terahertz | Concealed Weapons Detection](https://www.thzcorp.com/)
  - [Evolv Technology | Advanced Weapons Detection Solutions](https://evolv.com/)
  - [Terahertz security body scanner | TeraSense](https://terasense.com/products/body-scanner/)
  - [Lassen Peak Awarded Two Foundational Patents for World's First Handheld Terahertz Concealed Weapon Detection System](https://www.prnewswire.com/news-releases/lassen-peak-awarded-two-foundational-patents-for-worlds-first-handheld-terahertz-concealed-weapon-detection-system-302661717.html)
  - [Terahertz Technology Market Size, Share, Trends and Growth Analysis 2032](https://www.marketsandmarkets.com/Market-Reports/terahertz-technology-market-71182197.html)

### ch20-9: TRUE
- Speaker: Brett Adcock
- Claim: The Cover system works on the same principle as traditional radar, emitting radio frequency electromagnetic waves and analyzing the return signal to detect objects.
- TLDR: Cover's system uses terahertz (THz) imaging radar, which does emit radio frequency electromagnetic waves and analyze the return signal, exactly as traditional radar does.
- Explanation: NASA JPL, Cover's technology partner, developed THz imaging radar for concealed weapons detection that works by emitting high-frequency radio waves that bounce off concealed objects and are then processed by AI. Multiple sources confirm Cover's system is an active radar-based scanner, consistent with Adcock's description of it emitting radio frequency waves and analyzing the reflected signal. The technology is a higher-frequency variant of conventional radar, not a fundamentally different principle.
- Sources:
  - [THz Imaging Radar for Concealed Weapons Detection | NASA Jet Propulsion Laboratory Microdevices Laboratory](https://microdevices.jpl.nasa.gov/capabilities/submillimeter-devices/radar-concealed-weapons.php)
  - [Cover](https://www.cover.ai/)
  - [Figure Founder Pumps $10 Mn into AI Hardware Project to Prevent School Shooting | Analytics India Magazine](https://analyticsindiamag.com/ai-news-updates/figure-founder-pumps-10-mn-into-ai-hardware-project-to-prevent-school-shooting/)

### ch20-10: INEXACT
- Speaker: Brett Adcock
- Claim: The Cover system operates at around 300 gigahertz, approximately 50 to 100 times the frequency of Wi-Fi.
- TLDR: Cover does use ~300 GHz (terahertz) technology, confirmed via NASA JPL licensing. The 50-100x Wi-Fi comparison is roughly right for 5 GHz Wi-Fi but understates the ratio vs. the common 2.4 GHz band (which is ~125x).
- Explanation: Cover's technology is based on terahertz imaging licensed from NASA's Jet Propulsion Laboratory, with 300 GHz sitting at the lower boundary of the terahertz spectrum. Wi-Fi operates at 2.4 GHz, 5 GHz, or 6 GHz depending on the standard. At 5 GHz the ratio is 60x (within the stated 50-100x range), but against the most common 2.4 GHz band it is approximately 125x, exceeding the claimed range. The 300 GHz figure itself is accurate, but the multiplier is only correct if the 5 or 6 GHz Wi-Fi bands are used as the reference.
- Sources:
  - [A new startup from Figure's founder is licensing NASA tech in a bid to curb school shootings | TechCrunch](https://techcrunch.com/2024/06/21/a-new-startup-from-figures-founder-is-licensing-nasa-tech-in-a-bid-to-curb-school-shootings/)
  - [2.4 GHz vs. 5 GHz vs. 6 GHz: What's the Difference? - Intel](https://www.intel.com/content/www/us/en/products/docs/wireless/2-4-vs-5ghz.html)

### ch20-11: INEXACT
- Speaker: Brett Adcock
- Claim: The Cover system uses beamforming techniques and emits non-ionizing radiation, making it safe to be around, similar to Wi-Fi.
- TLDR: Cover's use of non-ionizing radiation and its safety profile are confirmed by its NASA JPL-licensed millimeter wave technology. Beamforming is a plausible technique for such systems but is not explicitly documented for Cover.
- Explanation: Cover licenses millimeter wave RF technology from NASA JPL. Millimeter wave signals are explicitly described as non-ionizing and harmless to human tissues at low power levels, making the safety comparison to Wi-Fi broadly accurate (both are non-ionizing radio waves, though at different frequencies). Beamforming is a standard signal processing technique in millimeter wave and radar imaging systems, but no publicly available source explicitly confirms its use in Cover's specific implementation.
- Sources:
  - [A new startup from Figure's founder is licensing NASA tech in a bid to curb school shootings](https://techcrunch.com/2024/06/21/a-new-startup-from-figures-founder-is-licensing-nasa-tech-in-a-bid-to-curb-school-shootings/)
  - [RF Device for Acquiring Images of the Human Body - Medical Design Briefs](https://www.medicaldesignbriefs.com/component/content/article/7877-npo-42662)
  - [Millimeter wave scanner - Wikipedia](https://en.wikipedia.org/wiki/Millimeter_wave_scanner)

### ch20-12: TRUE
- Speaker: Brett Adcock
- Claim: The Cover system can produce both a 2D image and a 3D point cloud from the radar return signals.
- TLDR: Cover's radar scanner does produce both a 2D image and a 3D point cloud from its radar return signals.
- Explanation: Multiple sources confirm Cover AI's active terahertz radar system generates both a 2D image and a 3D point cloud from radar returns. The 3D point cloud is used by an AI neural network to detect concealed weapons by analyzing the depth differences in return signals, which is consistent with Adcock's description in the podcast.
- Sources:
  - [Brett Adcock on X (Cover update)](https://x.com/adcock_brett/status/1935738616564912142)
  - [A new startup from Figure's founder is licensing NASA tech in a bid to curb school shootings | TechCrunch](https://techcrunch.com/2024/06/21/a-new-startup-from-figures-founder-is-licensing-nasa-tech-in-a-bid-to-curb-school-shootings/)
  - [Master Plan - Cover AI](https://www.cover.ai/master-plan)

### ch20-13: TRUE
- Speaker: Brett Adcock
- Claim: Water in human skin attenuates the Cover system's radio frequency signals, causing a weapon on the body's surface to return its signal faster than the body itself, enabling detection via 3D point cloud reconstruction.
- TLDR: The physics described is accurate. Water in human tissue does attenuate RF/microwave signals, metal weapons on the body surface reflect signals faster (both due to shorter range and lower tissue absorption), and 3D point cloud reconstruction is a standard technique in such systems.
- Explanation: Established radar physics confirms that high water content in human tissue causes strong dielectric losses at microwave frequencies, limiting penetration depth and attenuating return signals. Metal objects on the body surface reflect RF signals strongly and at shorter range, producing faster time-of-flight returns compared to attenuated returns from water-rich tissue. UWB and FMCW radar systems using 3D point cloud reconstruction for concealed weapon detection are well-documented in the scientific literature, and Cover's use of this approach in partnership with NASA JPL is confirmed.
- Sources:
  - [Real-time Concealed Weapon Detection on 3D Radar Images for Walk-through](https://openaccess.thecvf.com/content/WACV2023/papers/Khan_Real-Time_Concealed_Weapon_Detection_on_3D_Radar_Images_for_Walk-Through_WACV_2023_paper.pdf)
  - [Advances in Microwave Near-Field Imaging: Prototypes, Systems, and Applications - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC8221233/)
  - [Three-Dimensional Near-Field Microwave Holography for Tissue Imaging - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC3328955/)
  - [Cover](https://www.cover.ai/)

### ch20-14: TRUE
- Speaker: Brett Adcock
- Claim: The Cover system can detect weapons through materials such as backpacks, clothing, and jackets.
- TLDR: Cover's terahertz imaging system is explicitly designed to detect weapons through backpacks, clothing, and jackets.
- Explanation: Brett Adcock has publicly confirmed that Cover's hardware detects weapons hidden under clothes or inside bags. The system uses active terahertz radar (310-350 GHz) that sees through common concealment materials, and Cover's own website states it targets weapons in backpacks, pockets, and waistbands.
- Sources:
  - [Brett Adcock on X: Cover's gen-2 hardware can now detect weapons hidden under clothes or inside bags](https://x.com/adcock_brett/status/1936100934880538903)
  - [Master Plan - Cover AI](https://www.cover.ai/master-plan)

### ch20-15: UNSUBSTANTIATED
- Speaker: Brett Adcock
- Claim: Most guns entering schools are concealed in waistbands, pockets, or backpacks.
- TLDR: Waistbands, pockets, and backpacks are among documented concealment methods, but no public dataset confirms these account for the majority of guns entering schools.
- Explanation: Available reports (Washington Post, Campus Safety Magazine, Brady United) confirm that guns seized at schools have been found in backpacks, waistbands, pockets, purses, lockers, cars, and other locations.
These sources do not provide a percentage breakdown showing that the three categories named by Adcock account for a majority of cases. The claim is intuitive and directionally consistent with the evidence, but the specific 'most' quantifier has no publicly available statistical backing. Adcock appears to be relying on internal proprietary data from a data scientist working with Cover, which cannot be independently verified.
- Sources:
  - [Guns are seized in U.S. schools each day. The numbers are soaring.](https://www.washingtonpost.com/education/2023/10/10/guns-schools-us-increased-prevention-violence/)
  - [More than 6 Guns Per Day Were Seized at U.S. Schools Last Year - Campus Safety Magazine](https://www.campussafetymagazine.com/news/more-than-6-guns-per-day-were-seized-at-u-s-schools-last-year/128563/)
  - [Student Firearm Carrying in Schools | Brady United](https://www.bradyunited.org/resources/research/analysis-student-firearm-carrying-schools)

### ch20-16: UNVERIFIABLE
- Speaker: Brett Adcock
- Claim: A data scientist who has been doing daily school shooting analytics for 5 years is working with Cover.
- TLDR: This claim describes a private internal collaboration between Cover and an unnamed data scientist, which cannot be confirmed or denied through public sources.
- Explanation: No public record identifies a specific data scientist working with Cover on daily school shooting analytics. The claim is a first-person anecdote about a private business relationship, and nothing in Cover's public communications, press coverage, or Adcock's own posts names or corroborates such an individual.
- Sources:
  - [We're here to prevent school shootings - Cover](https://www.cover.ai/culture)
  - [A new startup from Figure's founder is licensing NASA tech in a bid to curb school shootings | TechCrunch](https://techcrunch.com/2024/06/21/a-new-startup-from-figures-founder-is-licensing-nasa-tech-in-a-bid-to-curb-school-shootings/)

### ch20-17: UNSUBSTANTIATED
- Speaker: Brett Adcock
- Claim: There were approximately 200 knife stabbings in schools last year.
- TLDR: The best available data shows roughly 75 knife stabbings in U.S. K-12 schools in 2023, not ~200. No source supports the 200 figure.
- Explanation: The K-12 School Shooting Database, the most comprehensive independent tracker of such incidents, recorded approximately 75 knife stabbings in U.S. schools during 2023. No official federal database or independent tracker supports a figure of ~200 school knife stabbings in any recent year. Complete 2024 and 2025 data are not yet centrally available, but the 200 figure is nearly three times the documented 2023 count, with no corroborating source found.
- Sources:
  - [Guns vs. Knives: Five times (5x) more shootings at schools in 2023](https://riedmanreport.substack.com/p/guns-vs-knives-five-times-5x-more)
  - [Fast Facts: School crime (49)](https://nces.ed.gov/fastfacts/display.asp?id=49)

### ch20-18: TRUE
- Speaker: Brett Adcock
- Claim: Different metallic materials produce different radar signatures when scanned by the Cover system.
- TLDR: Different metals do produce different radar return signatures, a well-established principle in RF/radar physics.
- Explanation: Radar cross section (RCS) varies based on a material's electrical conductivity, geometry, and resonance frequencies. Research on millimeter-wave concealed weapons detection confirms that metallic objects produce distinct electromagnetic fingerprints based on their size, shape, and physical composition, and that systems can discriminate between threat objects and innocuous metal items.
Adcock's simplified claim accurately reflects this underlying physics.
- Sources:
  - [Radar cross section - Wikipedia](https://en.wikipedia.org/wiki/Radar_cross_section)
  - [Hands-off Frisking: High-Tech Concealed Weapons Detection | National Institute of Justice](https://nij.ojp.gov/library/publications/hands-frisking-high-tech-concealed-weapons-detection)
  - [A Radio-frequency Measurement System for Metallic Objects](https://www.ndt.net/article/wcndt2008/papers/509.pdf)

### ch20-19: INEXACT
- Speaker: Brett Adcock
- Claim: There are over 300 school shootings per year at K-12 schools in the US, roughly one per school day, across 130,000 K-12 schools, not including colleges.
- TLDR: The 300+ figure comes from the broadest possible definition of school shootings and applies to 2024 data. Narrower definitions yield far lower counts (39 incidents with casualties in 2024). The 130,000 K-12 schools figure is approximately correct.
- Explanation: The K-12 School Shooting Database (K-12 SSDB) recorded 314 incidents in 2024 and roughly 330 per school year from 2021-22 through 2023-24, supporting the '300+' claim only when using a definition that includes any time a gun is brandished, fired, or a bullet hits school property (including accidents, gang violence, and after-hours events). Education Week, using a narrower injuries/deaths-only definition, counted just 39 such incidents in 2024. The 130,000 K-12 schools figure is reasonable, with NCES data placing the total between approximately 129,000 and 133,000. Adcock's figure is technically defensible under one methodology but significantly overstated under more commonly used definitions of 'school shooting.'
- Sources:
  - [K-12 School Shooting Database](https://k12ssdb.org/)
  - [School Shootings in 2025: How Many and Where](https://www.edweek.org/leadership/school-shootings-this-year-how-many-and-where/2025/01)
  - [After 3-consecutive-year high, school shootings drop 23% in 2024-25 | K-12 Dive](https://www.k12dive.com/news/school-shootings-2024-25-school-year-decrease-gun-violence/752853/)
  - [Fast Facts: Educational institutions (84) - NCES](https://nces.ed.gov/fastfacts/display.asp?id=84)
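One number in the radar-related claims above is worth a quick sanity check: the 310-350 GHz terahertz band cited for Cover's hardware in ch20-14. The wavelength follows directly from λ = c/f, and the result (under a millimetre) is why such systems can resolve small objects through fabric:

```python
# Sanity check on the 310-350 GHz terahertz band cited for Cover's
# imaging hardware: convert frequency to free-space wavelength.

C = 299_792_458.0  # speed of light in vacuum, m/s


def wavelength_mm(freq_ghz: float) -> float:
    """Free-space wavelength in millimetres for a frequency in GHz."""
    return C / (freq_ghz * 1e9) * 1e3


for f in (310.0, 350.0):
    print(f"{f:.0f} GHz -> {wavelength_mm(f):.3f} mm")
```

Both ends of the band come out just under 1 mm, i.e. sub-millimetre wavelengths: short enough for millimetre-scale imaging resolution, yet long enough to pass through dry fabrics such as clothing and backpack material while reflecting strongly off metal.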