
Eye On A.I.

Craig S. Smith
Latest episode

Available episodes

5 of 250
  • #252 Sid Sheth: How d-Matrix is Disrupting AI Inference in 2025
    This episode is sponsored by the DFINITY Foundation. DFINITY Foundation's mission is to develop and contribute technology that enables the Internet Computer (ICP) blockchain and its ecosystem, aiming to shift cloud computing into a fully decentralized state. Find out more at https://internetcomputer.org/

    In this episode of Eye on AI, we sit down with Sid Sheth, CEO and Co-Founder of d-Matrix, to explore how his company is revolutionizing AI inference hardware and taking on industry giants like NVIDIA.

    Sid shares his journey from building multi-billion-dollar businesses in semiconductors to founding d-Matrix, a startup focused on generative AI inference, chiplet-based architecture, and ultra-low-latency AI acceleration.

    We break down:
      • Why the future of AI lies in inference, not training
      • How d-Matrix's Corsair PCIe accelerator outperforms NVIDIA's H200
      • The role of in-memory compute and high-bandwidth memory in next-gen AI chips
      • How d-Matrix integrates seamlessly into hyperscaler and enterprise cloud environments
      • Why AI infrastructure is becoming heterogeneous and what that means for developers
      • The global outlook on inference chips, from the US to APAC and beyond
      • How Sid plans to build the next NVIDIA-level company from the ground up

    Whether you're building in AI infrastructure, investing in semiconductors, or just curious about the future of generative AI at scale, this episode is packed with value.

    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Intro
    (02:46) Introducing Sid Sheth
    (05:27) Why He Started d-Matrix
    (07:28) Lessons from Building a $2.5B Chip Business
    (11:52) How d-Matrix Prototypes New Chips
    (15:06) Working with Hyperscalers Like Google & Amazon
    (17:27) What's Inside the Corsair AI Accelerator
    (21:12) How d-Matrix Beats NVIDIA on Chip Efficiency
    (24:10) The Memory Bandwidth Advantage Explained
    (26:27) Running Massive AI Models at High Speed
    (30:20) Why Inference Isn't One-Size-Fits-All
    (32:40) The Future of AI Hardware
    (36:28) Supporting Llama 3 and Other Open Models
    (40:16) Is the Inference Market Big Enough?
    (43:21) Why the US Is Still the Key Market
    (46:39) Can India Compete in the AI Chip Race?
    (49:09) Will China Catch Up on AI Hardware?
    --------  
    54:32
  • #250 Pedro Domingos on the Real Path to AGI
    This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details. To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai

    Can AI Ever Reach AGI? Pedro Domingos Explains the Missing Link

    In this episode of Eye on AI, renowned computer scientist and author of The Master Algorithm, Pedro Domingos, breaks down what's still missing in our race toward Artificial General Intelligence (AGI), and why the path forward requires a radical unification of AI's five foundational paradigms: Symbolists, Connectionists, Bayesians, Evolutionaries, and Analogizers.

    Topics covered:
      • Why deep learning alone won't achieve AGI
      • How reasoning by analogy could unlock true machine creativity
      • The role of evolutionary algorithms in building intelligent systems
      • Why transformers like GPT-4 are impressive, but incomplete
      • The danger of hype from tech leaders vs. the real science behind AGI
      • What the Master Algorithm truly means, and why we haven't found it yet

    Pedro argues that creativity is easy, reliability is hard, and that reasoning by analogy, not just scaling LLMs, may be the key to Einstein-level breakthroughs in AI.

    Whether you're an AI researcher, machine learning engineer, or just curious about the future of artificial intelligence, this is one of the most important conversations on how to actually reach AGI.

    📚 About Pedro Domingos: Pedro is a professor at the University of Washington and author of the bestselling book The Master Algorithm, which explores how the unification of AI's "five tribes" could produce the ultimate learning algorithm.

    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) The Five Tribes of AI Explained
    (02:23) The Origins of The Master Algorithm
    (08:22) Designing with Bit Strings: Radios, Robots & More
    (10:46) Fitness Functions vs Reward Functions in AI
    (15:51) What Is Reasoning by Analogy in AI?
    (18:38) Kernel Machines and Support Vector Machines Explained
    (22:23) Case-Based Reasoning and Real-World Use Cases
    (27:38) Are AI Tribes Still Siloed or Finally Collaborating?
    (32:42) Why AI Needs a Deeply Unified Master Algorithm
    (36:40) Creativity vs Reliability in AI
    (39:14) Can AI Achieve Scientific Breakthroughs?
    (41:26) Why Reasoning by Analogy Is AI's Missing Link
    (45:10) Evolutionaries: The Most Distant Tribe in AI
    (48:41) Will Quantum Computing Help AI Reach AGI?
    (53:15) Are We Close to the Master Algorithm?
    (57:44) Tech Leaders, Hype & the Reality of AGI
    (01:04:06) The AGI Spectrum: Where We Are & What's Missing
    (01:06:18) Pedro's Research Focus
    --------  
    1:08:12
  • #249 Brice Challamel: How Moderna is Using AI to Disrupt Modern Healthcare
    This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload, where you can run any application, including any AI projects, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today's innovative AI tech companies who upgraded to OCI and saved. Offer only for new US customers with a minimum financial commitment. See if you qualify for half off at http://oracle.com/eyeonai

    In this episode of Eye on AI, Craig Smith sits down with Brice Challamel, Head of AI Products and Innovation at Moderna, to explore how one of the world's leading biotech companies is embedding artificial intelligence across every layer of its business, from drug discovery to regulatory approval.

    Brice breaks down how Moderna treats AI not just as a tool but as a utility, much like electricity or the internet, designed to empower every employee and drive innovation at scale. With over 1,800 GPTs in production and thousands of AI solutions running on internal platforms like Compute and MChat, Moderna is redefining what it means to be an AI-native company.

    Key topics covered in this episode:
      • How Moderna operationalizes AI at scale
      • GenAI as the new interface for machine learning
      • AI's role in speeding up drug approvals and clinical trials
      • The future of personalized cancer treatment (INT)
      • Moderna's platform mindset: AI + mRNA = next-gen medicine
      • Collaborating with the FDA using AI-powered systems

    Don't forget to like, comment, and subscribe for more interviews at the intersection of AI and innovation.

    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Preview
    (02:49) Brice Challamel's Background and Role at Moderna
    (05:51) Why AI Is Treated as a Utility at Moderna
    (09:01) Moderna's AI Infrastructure
    (11:53) GenAI vs Traditional ML
    (14:59) Combining mRNA and AI as Dual Platforms
    (18:15) AI's Impact on Regulatory & Clinical Acceleration
    (23:46) The Five Core Applications of AI at Moderna
    (26:33) How Teams Identify AI Use Cases Across the Business
    (29:01) Collaborating with the FDA Using AI Tools
    (33:55) How Moderna Is Personalizing Cancer Treatments
    (36:59) The Role of GenAI in Medical Care
    (40:10) Producing Personalized mRNA Medicines
    (42:33) Why Moderna Doesn't Sell AI Tools
    (45:30) The Future: AI and Democratized Biotech
    --------  
    49:57
  • #248 Pedro Domingos: How Connectionism Is Reshaping the Future of Machine Learning
    This episode is sponsored by Indeed. Stop struggling to get your job post seen on other job sites. Indeed's Sponsored Jobs help you stand out and hire fast. With Sponsored Jobs your post jumps to the top of the page for your relevant candidates, so you can reach the people you want faster. Get a $75 Sponsored Job Credit to boost your job's visibility! Claim your offer now: https://www.indeed.com/EYEONAI

    In this episode, renowned AI researcher Pedro Domingos, author of The Master Algorithm, takes us deep into the world of Connectionism, the AI tribe behind neural networks and the deep learning revolution.

    From the birth of neural networks in the 1940s to the explosive rise of transformers and ChatGPT, Pedro unpacks the history, breakthroughs, and limitations of connectionist AI. Along the way, he explores how supervised learning continues to quietly power today's most impressive AI systems, and why reinforcement learning and unsupervised learning are still lagging behind.

    We also dive into:
      • The tribal war between Connectionists and Symbolists
      • The surprising origins of backpropagation
      • How transformers redefined machine translation
      • Why GANs and generative models exploded (and then faded)
      • The myth of modern reinforcement learning (DeepSeek, RLHF, etc.)
      • The danger of AI research narrowing too soon around one dominant approach

    Whether you're an AI enthusiast, a machine learning practitioner, or just curious about where intelligence is headed, this episode offers a rare deep dive into the ideological foundations of AI, and what's coming next. Don't forget to subscribe for more episodes on AI, data, and the future of tech.

    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) What Are Generative Models?
    (03:02) AI Progress and the Local Optimum Trap
    (06:30) The Five Tribes of AI and Why They Matter
    (09:07) The Rise of Connectionism
    (11:14) Rosenblatt's Perceptron and the First AI Hype Cycle
    (13:35) Backpropagation: The Algorithm That Changed Everything
    (19:39) How Backpropagation Actually Works
    (21:22) AlexNet and the Deep Learning Boom
    (23:22) Why the Vision Community Resisted Neural Nets
    (25:39) The Expansion of Deep Learning
    (28:48) NetTalk and the Baby Steps of Neural Speech
    (31:24) How Transformers (and Attention) Transformed AI
    (34:36) Why Attention Solved the Bottleneck in Translation
    (35:24) The Untold Story of Transformer Invention
    (38:35) LSTMs vs. Attention: Solving the Vanishing Gradient Problem
    (42:29) GANs: The Evolutionary Arms Race in AI
    (48:53) Reinforcement Learning Explained
    (52:46) Why RL Is Mostly Just Supervised Learning in Disguise
    (54:35) Where AI Research Should Go Next
    --------  
    59:56
  • #247 Barr Moses: Why Reliable Data is Key to Building Good AI Systems
    This episode is sponsored by NetSuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to learn more.

    In this episode of Eye on AI, Craig Smith sits down with Barr Moses, Co-Founder & CEO of Monte Carlo, the pioneer of data and AI observability. Together, they explore the hidden force behind every great AI system: reliable, trustworthy data. With AI adoption soaring across industries, companies now face a critical question: Can we trust the data feeding our models? Barr unpacks why data quality is more important than ever, how observability helps detect and resolve data issues, and why clean data, not access to GPT or Claude, is the real competitive moat in AI today.

    What You'll Learn in This Episode:
      • Why access to AI models is no longer a competitive advantage
      • How Monte Carlo helps teams monitor complex data estates in real time
      • The dangers of "data hallucinations" and how to prevent them
      • Real-world examples of data failures and their impact on AI outputs
      • The difference between data observability and explainability
      • Why legacy methods of data review no longer work in an AI-first world

    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Intro
    (01:08) How Monte Carlo Fixed Broken Data
    (03:08) What Is Data & AI Observability?
    (05:00) Structured vs Unstructured Data Monitoring
    (08:48) How Monte Carlo Integrates Across Data Stacks
    (13:35) Why Clean Data Is the New Competitive Advantage
    (16:57) How Monte Carlo Uses AI Internally
    (19:20) 4 Failure Points: Data, Systems, Code, Models
    (23:08) Can Observability Detect Bias in Data?
    (26:15) Why Data Quality Needs a Modern Definition
    (29:22) Explosion of Data Tools & Monte Carlo's 50+ Integrations
    (33:18) Data Observability vs Explainability
    (36:18) Human Evaluation vs Automated Monitoring
    (39:23) What Monte Carlo Looks Like for Users
    (46:03) How Fast Can You Deploy Monte Carlo?
    (51:56) Why Manual Data Checks No Longer Work
    (53:26) The Future of AI Depends on Trustworthy Data
    --------  
    55:36

More Technology podcasts

About Eye On A.I.

Eye on A.I. is a biweekly podcast, hosted by longtime New York Times correspondent Craig S. Smith. In each episode, Craig will talk to people making a difference in artificial intelligence. The podcast aims to put incremental advances into a broader context and consider the global implications of the developing technology. AI is about to change your world, so pay attention.
Podcast website

Listen to Eye On A.I., AI CODZIENNIE - czyli co słychać w sztucznej inteligencji, and many other podcasts from around the world with the radio.pl app

Get the free radio.pl app

  • Bookmark stations and podcasts
  • Stream via Wi-Fi or Bluetooth
  • Supports CarPlay & Android Auto
  • Even more features
Social media