
The Gradient: Perspectives on AI

Daniel Bashir
Latest episode

149 episodes

  • The Gradient: Perspectives on AI

    2025 in AI, with Nathan Benaich

    22.01.2026 | 1 hr 1 min.
    Episode 144
    Happy New Year! This is one of my favorite episodes of the year — for the fourth time, Nathan Benaich and I did our yearly roundup of AI news and advancements, including selections from this year’s State of AI Report.
    If you’ve stuck around and continue to listen, I’m really thankful you’re here. I love hearing from you.
    You can find Nathan and Air Street Press here on Substack and on Twitter, LinkedIn, and his personal site. Check out his writing at press.airstreet.com.
    Find me on Twitter (or LinkedIn if you want…) for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
    Outline
    * (00:00) Intro
    * (00:44) Air Street Capital and Nathan's world
    * Nathan’s path from cancer research and bioinformatics to AI investing
    * The “evergreen thesis” of AI from niche to ubiquitous
    * Portfolio highlights: Eleven Labs, Synthesia, Crusoe
    * (03:44) Geographic flexibility: Europe vs. the US
    * Why SF isn’t always the best place for original decisions
    * Industry diversity in New York vs. San Francisco
    * The Munich Security Conference and Europe’s defense pivot
    * Playing macro games from a European vantage point
    * (07:55) VC investment styles and the “solo GP” approach
    * Taste as the determinant of investments
    * SF as a momentum game with small information asymmetry
    * Portfolio diversity: defense (Delian), embodied AI (Syriact), protein engineering
    * Finding entrepreneurs who “can’t do anything else”
    * (10:44) State of AI progress in 2025
    * Momentous progress in writing, research, computer use, image, and video
    * We’re in the “instruction manual” phase
    * The scale of investment: private markets, public markets, and nation states
    * (13:21) Range of outcomes and what “going bad” looks like
    * Today’s systems are genuinely useful—worst case is a valuation problem
    * Financialization of AI buildouts and GPUs
    * (14:55) DeepSeek and China closing the capability gap
    * Seven-month lag analysis (Epoch AI)
    * Benchmark skepticism and consumer preferences (“Coca-Cola vs. Pepsi”)
    * Hedonic adaptation: humans reset expectations extremely quickly
    * Bifurcation of model companies toward specific product bets
    * (18:29) Export controls and the “evolutionary pressure” argument
    * Selective pressure breeds innovation
    * Chinese companies rushing to public markets (Minimax, ZAI)
    * (21:30) Reasoning models and test-time compute
    * Chain of thought faithfulness questions
    * Monitorability tax: does observability reduce quality?
    * User confusion about when models should “think”
    * AI for science: literature agents, hypothesis generation
    * (23:53) Chain of thought interpretability and safety
    * Anthropomorphization concerns
    * Alignment faking and self-preservation behaviors
    * Cybersecurity as a bigger risk than existential risk
    * Models as payloads injected into critical systems
    * (27:26) Commercial traction and AI adoption data
    * Ramp data: 44% of US businesses paying for AI (up from 5% in early 2023)
    * Average contract values up to $530K from $39K
    * State of AI survey: 92% report productivity gains
    * The “slow takeoff” consensus and human inertia
    * Use cases: meeting notes, content generation, brainstorming, coding, financial analysis
    * (32:53) The industrial era of AI
    * Stargate and xAI data centers
    * Energy infrastructure: gas turbines and grid investment
    * Labs need to own models, data, compute, and power
    * Poolside’s approach to owning infrastructure
    * (35:40) Venture capital in the age of massive GPU capex
    * The GP lives in the present, the entrepreneur in the future, the LP in the past
    * Generality vs. specialism narratives
    * “Two or 20”: management fees vs. carried interest
    * Scaling funds to match entrepreneur ambitions
    * (40:10) NVIDIA challengers and returns analysis
    * Chinese challengers: 6x return vs. 26x on NVIDIA
    * US challengers: 2x return vs. 12x on NVIDIA
    * Groq acquired for $20B; SambaNova marked down to $1.6B
    * “The tide is lifting all boats”—demand exceeds supply
    * (44:06) The hardware lottery and architecture convergence
    * Transformer dominance and custom ASICs making a comeback
    * NVIDIA still used in 90–95% of published AI research
    * (45:49) AI regulation: Trump agenda and the EU AI Act
    * Domain-specific regulators vs. blanket AI policy
    * State-level experimentation creates stochasticity
    * EU AI Act: “born before GPT-4, takes effect in a world shaped by GPT-7”
    * Only three EU member states compliant by late 2025
    * (50:14) Sovereign AI: what it really means
    * True sovereignty requires energy, compute, data, talent, chip design, and manufacturing
    * The US is sovereign; the UK by itself is not
    * Form alliances or become world-class at one level of the stack
    * ASML and the Netherlands as an example
    * (52:33) Open weight safety and containment
    * Three paths: model-based safeguards, scaffolding/ecosystem, procedural/governance
    * “Pandora’s box is open”—containment on distribution, not weights
    * Leak risk: the most vulnerable link is often human
    * Developer–policymaker communication and regulator upskilling
    * (55:43) China’s AI safety approach
    * Matt Sheehan’s work on Chinese AI regulation
    * Safety summits and China’s participation
    * New Chinese policies: minor modes, mental health intervention, data governance
    * UK’s rebrand from “safety” to “security” institutes
    * (58:34) Prior predictions and patterns
    * Hits on regulatory/political areas; misses on semiconductor consolidation, AI video games
    * (59:43) 2026 Predictions
    * A Chinese lab overtaking US on frontier (likely ZAI or DeepSeek, on scientific reasoning)
    * Data center NIMBYism influencing midterm politics
    * (01:01:01) Closing
    Links and Resources
    Nathan / Air Street Capital
    * Air Street Capital
    * State of AI Report 2025
    * Air Street Press — essays, analysis, and the Guide to AI newsletter
    * Nathan on Substack
    * Nathan on Twitter/X
    * Nathan on LinkedIn
    From Air Street Press (mentioned in episode)
    * Is the EU AI Act Actually Useful? — by Max Cutler and Nathan Benaich
    * China Has No Place at the UK AI Safety Summit (2023) — by Alex Chalmers and Nathan Benaich
    Research & Analysis
    * Epoch AI: Chinese AI Models Lag US by 7 Months — the analysis referenced on the US-China capability gap
    * Sara Hooker: The Hardware Lottery — the essay on how hardware determines which research ideas succeed
    * Matt Sheehan: China’s AI Regulations and How They Get Made — Carnegie Endowment
    Companies Mentioned
    * Eleven Labs — AI voice synthesis (Air Street portfolio)
    * Synthesia — AI video generation (Air Street portfolio)
    * Crusoe — clean compute infrastructure (Air Street portfolio)
    * Poolside — AI for code (Air Street portfolio)
    * DeepSeek — Chinese AI lab
    * Minimax — Chinese AI company
    * ASML — semiconductor equipment
    Other Resources
    * Search Engine Podcast: Data Centers (Part 1 & 2) — PJ Vogt’s two-part series on xAI data centers and the AI financing boom
    * RAAIS Foundation — Nathan’s AI research and education charity


    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • The Gradient: Perspectives on AI

    Iason Gabriel: Value Alignment and the Ethics of Advanced AI Systems

    26.11.2025 | 58 min.
    Episode 143
    I spoke with Iason Gabriel about:
    * Value alignment
    * Technology and worldmaking
    * How AI systems affect individuals and the social world
    Iason is a philosopher and Senior Staff Research Scientist at Google DeepMind. His work focuses on the ethics of artificial intelligence, including questions about AI value alignment, distributive justice, language ethics and human rights.
    You can find him on his website and Twitter/X.
    Find me on Twitter (or LinkedIn if you want…) for updates, and reach me at [email protected] for feedback, ideas, guest suggestions.
    Outline
    * (00:00) Intro
    * (01:18) Iason’s intellectual development
    * (04:28) Aligning language models with human values, democratic civility and agonism
    * (08:20) Overlapping consensus, differing norms, procedures for identifying norms
    * (13:27) Rawls’ theory of justice, the justificatory and stability problems
    * (19:18) Aligning LLMs and cooperation, speech acts, justification and discourse norms, literacy
    * (23:45) Actor Network Theory and alignment
    * (27:25) Value alignment and Iason’s starting points
    * (33:10) The Ethics of Advanced AI Assistants, AI’s impacts on social processes and users, personalization
    * (37:50) AGI systems and social power
    * (39:00) Displays of care and compassion, Machine Love (Joel Lehman)
    * (41:30) Virtue ethics, morality and language, virtue in AI systems vs. MacIntyre’s conception in After Virtue
    * (45:00) The Challenge of Value Alignment
    * (45:25) Technologists as worldmakers
    * (51:30) Technological determinism, collective action problems
    * (55:25) Iason’s goals with his work
    * (58:32) Outro
    Links
    Papers:
    * AI, Values, and Alignment (2020)
    * Aligning LMs with Human Values (2023)
    * Toward a Theory of Justice for AI (2023)
    * The Ethics of Advanced AI Assistants (2024)
    * A matter of principle? AI alignment as the fair treatment of claims (2025)


    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • The Gradient: Perspectives on AI

    2024 in AI, with Nathan Benaich

    26.12.2024 | 1 hr 48 min.
    Episode 142
    Happy holidays! This is one of my favorite episodes of the year — for the third time, Nathan Benaich and I did our yearly roundup of all the AI news and advancements you need to know. This includes selections from this year’s State of AI Report, some early takes on o3, a few minutes LARPing as China Guys………
    If you’ve stuck around and continue to listen, I’m really thankful you’re here. I love hearing from you.
    You can find Nathan and Air Street Press here on Substack and on Twitter, LinkedIn, and his personal site. Check out his writing at press.airstreet.com.
    Find me on Twitter (or LinkedIn if you want…) for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    Outline
    * (00:00) Intro
    * (01:00) o3 and model capabilities + reasoning capabilities
    * (05:30) Economics of frontier models
    * (09:24) Air Street’s year and industry shifts: product-market fit in AI, major developments in science/biology, "vibe shifts" in defense and robotics
    * (16:00) Investment strategies in generative AI, how to evaluate and invest in AI companies
    * (19:00) Future of BioML and scientific progress: on AlphaFold 3, evaluation challenges, and the need for cross-disciplinary collaboration
    * (32:00) The AGI question and technology diffusion: Nathan’s take on AGI and timelines, technology adoption, the gap between capabilities and real-world impact
    * (39:00) Differential economic impacts from AI, tech diffusion
    * (43:00) Market dynamics and competition
    * (50:00) DeepSeek and global AI innovation
    * (59:50) A robotics renaissance? robotics coming back into focus + advances in vision-language models and real-world applications
    * (1:05:00) Compute Infrastructure: NVIDIA’s dominance, GPU availability, the competitive landscape in AI compute
    * (1:12:00) Industry consolidation: partnerships, acquisitions, regulatory concerns in AI
    * (1:27:00) Global AI politics and regulation: international AI governance and varying approaches
    * (1:35:00) The regulatory landscape
    * (1:43:00) 2025 predictions
    * (1:48:00) Closing
    Links and Resources
    From Air Street Press:
    * The State of AI Report
    * The State of Chinese AI
    * Open-endedness is all we’ll need
    * There is no scaling wall: in discussion with Eiso Kant (Poolside)
    * Alchemy doesn’t scale: the economics of general intelligence
    * Chips all the way down
    * The AI energy wars will get worse before they get better
    Other highlights/resources:
    * Deepseek: The Quiet Giant Leading China’s AI Race — an interview with DeepSeek CEO Liang Wenfeng via ChinaTalk, translated by Jordan Schneider, Angela Shen, Irene Zhang and others
    * A great position paper on open-endedness by Minqi Jiang, Tim Rocktäschel, and Ed Grefenstette — Minqi also wrote a blog post on this for us!
    * for China Guys only: China’s AI Regulations and How They Get Made by Matt Sheehan (+ an interview I did with Matt in 2022!)
    * The Simple Macroeconomics of AI by Daron Acemoglu + a critique by Maxwell Tabarrok (more links in the Report)
    * AI Nationalism by Ian Hogarth (from 2018)
    * Some analysis on the EU AI Act + regulation from Lawfare


    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • The Gradient: Perspectives on AI

    Philip Goff: Panpsychism as a Theory of Consciousness

    12.12.2024 | 1 hr
    Episode 141
    I spoke with Professor Philip Goff about:
    * What a “post-Galilean” science of consciousness looks like
    * How panpsychism helps explain consciousness and the hybrid cosmopsychist view
    Enjoy!
    Philip Goff is a British author, idealist philosopher, and professor at Durham University whose research focuses on philosophy of mind and consciousness, specifically on how consciousness can be part of the scientific worldview. He is the author of multiple books, including Consciousness and Fundamental Reality, Galileo's Error: Foundations for a New Science of Consciousness, and Why? The Purpose of the Universe.
    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter.
    Outline:
    * (00:00) Intro
    * (01:05) Goff vs. Carroll on the Knowledge Arguments and explanation
    * (08:00) Preferences for theories
    * (12:55) Curiosity (Grounding, Essence) and the Knowledge Argument
    * (14:40) Phenomenal transparency and physicalism vs. anti-physicalism
    * (29:00) How Exactly Does Panpsychism Help Explain Consciousness?
    * (30:05) The argument for hybrid cosmopsychism
    * (36:35) “Bare” subjects / subjects before inheriting phenomenal properties
    * (40:35) Bundle theories of the self
    * (43:35) Fundamental properties and new subjects as causal powers
    * (50:00) Integrated Information Theory
    * (55:00) Fundamental assumptions in hybrid cosmopsychism
    * (1:00:00) Outro
    Links:
    * Philip’s homepage and Twitter
    * Papers
    * Putting Consciousness First
    * Curiosity (Grounding, Essence) and the Knowledge Argument


    Get full access to The Gradient at thegradientpub.substack.com/subscribe
  • The Gradient: Perspectives on AI

    Some Changes at The Gradient

    21.11.2024 | 34 min.
    Hi everyone!
    If you’re a new subscriber or listener, welcome.
    If you’re not new, you’ve probably noticed that things have slowed down from us a bit recently. Hugh Zhang, Andrey Kurenkov and I sat down to recap some of The Gradient’s history, where we are now, and how things will look going forward.
    To summarize and give some context:
    The Gradient has been around for about 6 years now – we started as an online magazine, and began producing our own newsletter and podcast about 4 years ago. With a team of volunteers — we take in a bit of money through Substack, which we use for subscriptions to tools we need, and try to pay ourselves a bit — we’ve been able to keep this going for quite some time.
    Our team has less bandwidth than we’d like right now (and I’ll admit that at least some of us are running on fumes…) — we’ll be making a few changes:
    * Magazine: We’re going to be scaling down our editing work on the magazine. While we won’t be accepting pitches for unwritten drafts for now, if you have a full piece that you’d like to pitch to us, we’ll consider posting it. If you’ve reached out about writing and haven’t heard from us, we’re really sorry. We’ve tried a few different arrangements to manage the pipeline of articles we have, but it’s been difficult to make it work. We still want this to be a place to promote good work and writing from the ML community, so we intend to continue using this Substack for that purpose. If we have more editing bandwidth on our team in the future, we want to continue doing that work.
    * Newsletter: We’ll aim to continue the newsletter as before, but with a “Best from the Community” section highlighting posts. We’ll have a way for you to send articles you want to be featured, but for now you can reach us at our [email protected].
    * Podcast: I’ll be continuing this (at a slower pace), but will eventually transition it away from The Gradient given the expanded range. If you’re interested in following, it might be worth subscribing on another player like Apple Podcasts, Spotify, or using the RSS feed.
    * Sigmoid Social: We’ll keep this alive as long as there’s financial support for it.
    If you like what we do and/or want to help us out in any way, do reach out to [email protected]. We love hearing from you.
    Timestamps
    * (0:00) Intro
    * (01:55) How The Gradient began
    * (03:23) Changes and announcements
    * (10:10) More Gradient history! On our involvement, favorite articles, and some plugs
    Some of our favorite articles!
    There are so many, so this is very much a non-exhaustive list:
    * NLP’s ImageNet moment has arrived
    * The State of Machine Learning Frameworks in 2019
    * Why transformative artificial intelligence is really, really hard to achieve
    * An Introduction to AI Story Generation
    * The Artificiality of Alignment (I didn’t mention this one in the episode, but it should be here)
    Places you can find us!
    Hugh:
    * Twitter
    * Personal site
    * Papers/things mentioned!
    * A Careful Examination of LLM Performance on Grade School Arithmetic (GSM1k)
    * Planning in Natural Language Improves LLM Search for Code Generation
    * Humanity’s Last Exam
    Andrey:
    * Twitter
    * Personal site
    * Last Week in AI Podcast
    Daniel:
    * Twitter
    * Substack blog
    * Personal site (under construction)


    Get full access to The Gradient at thegradientpub.substack.com/subscribe


About The Gradient: Perspectives on AI

Deeply researched, technical interviews with experts thinking about AI and technology. thegradientpub.substack.com