
How AI Is Built

Nicolay Gerold
Latest episode

Available episodes

5 of 59
  • #052 Don't Build Models, Build Systems That Build Models
    Nicolay here,
    Today I have the chance to talk to Charles from Modal, who went from doing a PhD on neural network optimization in the 2010s - when ML engineers could build models with a soldering iron and some sticks - to architecting serverless infrastructure for AI models. Modal is about removing barriers so anyone can spin up a hundred GPUs in seconds (a minimal sketch follows these show notes).
    The critical insight that stuck with me: "Don't build models, build systems that build models." Organizations often make the mistake of celebrating a one-time fine-tuned model that matches GPT-4 performance, only to watch it become obsolete when the next foundation model arrives - typically three to six months down the road.
    Charles's approach to infrastructure is particularly unconventional. He argues that serverless isn't just about convenience - it fundamentally changes how ambitious you can be with scale. "There's so much that gets in the way of trying to spin up a hundred GPUs or a thousand CPU containers that people just don't think to do something big."
    The winning approach involves automated data pipelines with feedback collection, continuous evaluation against new foundation models, A/B testing and canary deployments, and systematic error analysis and retraining.
    In the podcast, we also cover:
    - Why inference, not training, is where the money is made
    - How to rethink compute when moving from traditional cloud to serverless
    - The economics of automated resource management
    - Why task decomposition is the key ML engineering skill
    - When to earn the right to fine-tune versus using foundation models
    📶 Connect with Charles:
    Twitter - https://twitter.com/charlesirl
    Modal Labs - https://modal.com
    Modal Slack Community - https://modal.com/slack
    📶 Connect with Nicolay:
    LinkedIn - https://linkedin.com/in/nicolay-gerold/
    X / Twitter - https://x.com/nicolaygerold
    Bluesky - https://bsky.app/profile/nicolaygerold.com
    Website - https://nicolaygerold.com/
    My Agency Aisbach - https://aisbach.com/ (for AI implementations / strategy)
    ⏱️ Important Moments
    From CUDA to Serverless: [00:01:38] Charles's journey from PhD neural network optimization to building Modal's serverless infrastructure.
    Rethinking Scale Ambition: [00:01:38] "There's so much that gets in the way of trying to spin up a hundred GPUs that people just don't think to do something big."
    The Economics of Serverless: [00:04:09] How automated resource management changes the cattle vs. pets paradigm for GPU workloads.
    Lambda vs Modal Philosophy: [00:04:20] Why Modal was designed for tasks that take bytes and emit megabytes, unlike Lambda's middleware focus.
    Inference Economics Reality: [00:10:16] "Almost nobody gets paid to make models - organizations get paid to make predictions."
    The Open Source Commoditization: [00:14:55] How foundation models are becoming undifferentiated capabilities like databases.
    Task Decomposition as Core Skill: [00:22:00] Why breaking down problems is equivalent to recognizing API boundaries in software engineering.
    Systems That Build Models: [00:33:31] The critical difference between delivering static weights versus repeatable model production systems.
    Earning the Right to Fine-Tune: [00:34:06] The infrastructure prerequisites needed before attempting model customization.
    Multi-Node Training Challenges: [00:52:24] How serverless platforms handle the contradiction of high-performance computing with spiky demand.
    🛠️ Tools & Tech Mentioned
    Modal - https://modal.com (serverless GPU infrastructure)
    AWS Lambda - https://aws.amazon.com/lambda/ (traditional serverless)
    Kubernetes - https://kubernetes.io/ (container orchestration)
    Temporal - https://temporal.io/ (workflow orchestration)
    Weights & Biases - https://wandb.ai/ (experiment tracking)
    Hugging Face - https://huggingface.co/ (model repository)
    PyTorch Distributed - https://pytorch.org/tutorials/intermediate/ddp_tutorial.html (multi-GPU training)
    Redis - https://redis.io/ (caching and queues)
    📚 Recommended Resources
    Full Stack Deep Learning - https://fullstackdeeplearning.com/ (deployment best practices)
    Modal Documentation - https://modal.com/docs (getting started guide)
    DeepSeek Paper - https://arxiv.org/abs/2401.02954 (disaggregated inference patterns)
    AI Engineer Summit - https://ai.engineer/ (community events)
    MLOps Community - https://mlops.community/ (best practices)
    💬 Join The Conversation
    Follow How AI Is Built on YouTube - https://youtube.com/@howaiisbuilt, Bluesky - https://bsky.app/profile/howaiisbuilt.fm, or Spotify - https://open.spotify.com/show/3hhSTyHSgKPVC4sw3H0NUc
    If you have any suggestions for future guests, feel free to leave them in the comments or write me (Nicolay) directly on LinkedIn - https://linkedin.com/in/nicolay-gerold/, X - https://x.com/nicolaygerold, or Bluesky - https://bsky.app/profile/nicolaygerold.com. Or at [email protected]. I will be opening a Discord soon to get you guys more involved in the episodes! Stay tuned for that.
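    To make the serverless-scale point concrete, here is a minimal sketch of fanning a batch job out across many containers on Modal. The names (modal.App, @app.function, .map) follow Modal's public docs, but treat the whole thing as an illustrative sketch - the model call is a stand-in, not a recipe from the episode:
    ```python
    # Hedged sketch of fan-out on Modal's serverless platform.
    import modal

    app = modal.App("batch-inference")
    image = modal.Image.debian_slim().pip_install("torch", "transformers")

    @app.function(gpu="A100", image=image)
    def predict(batch: list[str]) -> list[str]:
        # Stand-in for real model inference; each call runs in its own container.
        return [text.upper() for text in batch]

    @app.local_entrypoint()
    def main():
        batches = [[f"input {i}"] for i in range(100)]
        # .map fans the work out across up to 100 containers, then scales to zero.
        for result in predict.map(batches):
            print(result)
    ```
    The point Charles makes is that because nothing here provisions or tears down machines by hand, "do it on a hundred GPUs" stops being an infrastructure project and becomes a one-line change.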
    --------  
    59:22
  • #051 Build systems that can be debugged at 4am by tired humans with no context
    Nicolay here,
    Today I have the chance to talk to Charity Majors, CEO and co-founder of Honeycomb, who has recently been writing about the cost crisis in observability.
    "Your source of truth is production, not your IDE - and if you can't understand your code there, you're flying blind."
    The key insight is architecturally simple but operationally transformative: replace your 10-20 observability tools with wide structured events that capture everything about a request in one place. Most teams store the same request data across metrics, logs, traces, APM, and error tracking - creating a 20X cost multiplier while making debugging nearly impossible, because you're reconstructing stories from fragments.
    Charity's approach flips this: instrument once with rich context, derive everything else from that single source. This isn't just about cost - it's about giving engineers the connective tissue to understand distributed systems. When you can correlate "all requests failing from Android version X in region Y using language pack Z," you find problems in minutes instead of days.
    The second insight is putting developers on call for their own code. This creates the tight feedback loop that makes engineers write more reliable software - because nobody wants to get paged at 3am for their own bugs.
    In the podcast, we also touch on:
    - Why deploy time is the foundational feedback loop (15 minutes vs 15 hours changes everything)
    - The controversial "developers on call" stance and why ops people rarely found companies
    - How microservices made everything trace-shaped and killed traditional metrics approaches
    - The "normal engineer" philosophy - building for 4am debugging, not peak performance
    - AI making "code of unknown quality" the new normal
    - Progressive deployment strategies (kibble → dogfood → production)
    - and more
    💡 Core Concepts
    Wide Structured Events: Capturing all request context in one instrumentation event instead of scattered log lines - enables correlation analysis that's impossible with fragmented data. (A sketch follows these show notes.)
    Observability 2.0: Moving from metrics-as-workhorse to structured-data-as-workhorse, where you instrument once and derive metrics/alerts/dashboards from the same rich dataset.
    SLO-based Alerting: Replacing symptom alerts (CPU, memory, disk) with customer-impact alerts that measure whether you're meeting promises to users.
    Progressive Deployment: Gradual rollout through staged environments (kibble → dogfood → production) that builds confidence without requiring 2X infrastructure.
    Trace-shaped Systems: An architecture pattern recognizing that distributed-systems problems are fundamentally about correlating events across time and services, not isolated metrics.
    📶 Connect with Charity:
    LinkedIn
    Bluesky
    Personal Blog
    Company
    📶 Connect with Nicolay:
    LinkedIn
    X / Twitter
    Website
    ⏱️ Important Moments
    Gateway Drug to Engineering: [01:04] How IRC and bash tab completion sparked Charity's fascination with Unix command-line possibilities.
    ADHD and Incident Response: [01:54] Why high-pressure outages brought out her best work - getting "dead calm" when everything's broken.
    Code vs. Production Reality: [02:56] Evolution from focusing on code beauty to understanding performance, behavior, and maintenance over time.
    The Alexander's Horse Principle: [04:49] Auto-deployment as daily practice - if you grow up deploying constantly, it feels natural by the time you scale.
    Production as Source of Truth: [06:32] Why your IDE output doesn't matter if you can't understand your code's intersection with infrastructure and users.
    The Logging Evolution: [08:03] Moving from debugger-style spam logs to fewer, wider structured events oriented around units of work.
    Bubble Up Anomaly Detection: [10:27] How correlating dimensions reveals that failures cluster around specific Android versions, regions, and feature combinations.
    Everything is Trace-Shaped: [12:45] Why microservices complexity is about locating problems in distributed systems, not just identifying them.
    AI as Acceleration of Automation: [15:57] Most AI panic could be replaced with "automation" - it's the same pattern, just faster feedback loops.
    Non-determinism as Genuinely New: [16:51] The one aspect of AI that's actually novel in software systems, requiring new architectural patterns.
    The Cost Crisis: [22:30] How 10-20 observability tools create unsustainable cost multipliers as businesses scale.
    The Instrumentation Habit: [23:15] Always looking at your code in production after deployment to build informed instincts about system behavior.
    SLO Revolution: [28:40] Deleting 90% of alerts by focusing on customer impact instead of system symptoms.
    Shrinking Feedback Loops: [34:28] Keeping deploy-to-validation under one hour so engineers can connect actions to outcomes.
    Progressive Deployment Strategy: [36:43] Kibble → dogfood → production pipeline for gradual confidence building.
    Normal Engineer Design: [38:12] Building systems that work for tired humans at 4am, not just heroes during business hours.
    Real Engineering Bar: [49:00] Discussion on what actually makes exceptional vs normal engineers.
    🛠️ Tools & Tech Mentioned
    Honeycomb - Observability platform for structured events
    OpenTelemetry - Vendor-neutral instrumentation framework
    IRC - Early gateway to computing
    Parse - Mobile backend where Honeycomb's origin story began
    📚 Recommended Resources
    "In Praise of Normal Engineers" - Charity's blog post
    "How I Failed" by Tim O'Reilly
    "Looking at the Crux" by Richard Rumelt
    "Fluke" - Book about randomness in history
    "Engineering Management for the Rest of Us" by Sarah Drasner
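    A minimal sketch of the wide-structured-events idea, with hypothetical field names: instead of scattering log lines through a handler, accumulate one event per request and emit it exactly once, carrying every dimension you might later correlate on. In practice you would ship this to Honeycomb or an OpenTelemetry backend rather than stdout:
    ```python
    # One wide event per request, assembled as the request is handled.
    import json, time, uuid

    def handle_request(request: dict) -> dict:
        event = {
            "request_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            # High-cardinality dimensions enable "bubble up" correlation, e.g.
            # "failures cluster on Android version X in region Y with pack Z".
            "user_id": request.get("user_id"),
            "app_version": request.get("app_version"),
            "region": request.get("region"),
            "language_pack": request.get("language_pack"),
        }
        start = time.monotonic()
        try:
            response = {"status": 200, "body": "ok"}  # stand-in for real work
            event["status_code"] = response["status"]
            return response
        except Exception as exc:
            event["status_code"] = 500
            event["error"] = repr(exc)
            raise
        finally:
            event["duration_ms"] = round((time.monotonic() - start) * 1000, 2)
            print(json.dumps(event))  # emit exactly ONE wide event per request

    handle_request({"user_id": "u1", "app_version": "2.3.1", "region": "eu-west"})
    ```
    Because every dimension lives on the same event, the 20X "store it five times" multiplier disappears: metrics, traces, and alerts are all derived from this single record.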
    --------  
    1:05:51
  • #050 Bringing LLMs to Production: Delete Frameworks, Avoid Finetuning, Ship Faster
    Nicolay here,
    Most AI developers are drowning in frameworks and hype. This conversation is about cutting through the noise and actually getting something into production.
    Today I have the chance to talk to Paul Iusztin, who's spent 8 years in AI - from writing CUDA kernels in C++ to building modern LLM applications. He currently writes about production AI systems and is building his own AI writing assistant.
    His philosophy is refreshingly simple: stop overthinking, start building, and let patterns emerge through use.
    The key insight that stuck with me: "If you don't feel the algorithm - like have a strong intuition about how components should work together - you can't innovate, you just copy paste stuff." This hits hard because so much of current AI development is exactly that - copy-pasting from tutorials without understanding the why.
    Paul's approach to frameworks is particularly controversial. He uses LangChain and similar tools for quick prototyping - maybe an hour or two to validate an idea - then throws them away completely. "They're low-code tools," he says. "Not good frameworks to build on top of."
    Instead, he advocates for writing your own database layers and using industrial-grade orchestration tools. Yes, it's more work upfront. But when you need to debug or scale, you'll thank yourself.
    In the podcast, we also cover:
    - Why fine-tuning is almost always the wrong choice
    - The "just-in-time" learning approach for staying sane in AI
    - Building writing assistants that actually preserve your voice
    - Why robots, not chatbots, are the real endgame
    💡 Core Concepts
    Agentic Patterns: These patterns seem complex but are actually straightforward to implement once you understand the core loop. ReAct: agents that Reason, Act, and Observe in a loop (see the sketch after these show notes). Reflection: agents that review and improve their own outputs.
    Fine-tuning vs Base Model + Prompting: Fine-tuning involves taking a pre-trained model and training it further on your specific data. The alternative is using base models with careful prompting and context engineering. Paul's take: "Fine-tuning adds so much complexity... if you add fine-tuning to create a new feature, it's just from one day to one week."
    RAG: A technique where you retrieve relevant documents/information and include them in the LLM's context to generate better responses. Paul's approach: "In the beginning I also want to avoid RAG and just introduce a more guided research approach. Like I say, hey, these are the resources that I want to use in this article."
    📶 Connect with Paul:
    LinkedIn
    X / Twitter
    Newsletter
    GitHub
    Book
    📶 Connect with Nicolay:
    LinkedIn
    X / Twitter
    Bluesky
    Website
    My Agency Aisbach (for AI implementations / strategy)
    ⏱️ Important Moments
    From CUDA to LLMs: [02:20] Paul's journey from writing CUDA kernels and 3D object detection to modern AI applications.
    AI Content Is Natural Evolution: [11:19] Why AI writing tools are like the internet transition for artists - tools change, creativity remains.
    End-to-End First: [22:44] "I don't focus on accuracy, performance, or latency initially. I just want an end-to-end process that works."
    Fine-Tuning Complexity Bomb: [27:41] How fine-tuning turns 1-day features into 1-week experiments.
    The Framework Trap: [36:41] "I see them as no code or low code tools... not good frameworks to build on top of."
    The Orchestration Solution: [40:04] Why Temporal, DBOS, and Restate beat LLM-specific orchestrators.
    Robot Vision: [50:29] Why LLMs are just stepping stones to embodied AI and the unsolved challenges ahead.
    Hype Filtering System: [54:06] Paul's approach: read about new tools, wait 2-3 months, only adopt if still relevant.
    Just-in-Time vs Just-in-Case: [57:50] The crucial difference between learning for potential needs vs immediate application.
    🛠️ Tools & Tech Mentioned
    LangGraph (for prototyping only)
    Temporal (durable execution)
    DBOS (simpler orchestration)
    Restate (developer-friendly orchestration)
    Ray (distributed compute)
    UV (Python packaging)
    Prefect (workflow orchestration)
    📚 Recommended Resources
    The Economist Style Guide (for writing)
    Brandon Sanderson's Writing Approach (worldbuilding first)
    LangGraph Academy (free, covers agent patterns)
    Ray Documentation (Paul's next deep dive)
    🔮 What's Next
    Next week, we will take a detour and go into the networking behind voice AI with Russell D'Sa from LiveKit.
    💬 Join The Conversation
    Follow How AI Is Built on YouTube, Bluesky, or Spotify.
    If you have any suggestions for future guests, feel free to leave them in the comments or write me (Nicolay) directly on LinkedIn, X, or Bluesky. Or at [email protected]. I will be opening a Discord soon to get you guys more involved in the episodes! Stay tuned for that.
    ♻️ I am trying to build a new platform for engineers to share the experience they have earned building and deploying systems in production. Pay it forward by sharing with one engineer who's facing similar challenges. That's the agreement - I deliver practical value, you help grow this resource for everyone. ♻️
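    For the ReAct pattern named above, here is a hedged sketch of the core loop. `call_llm`, the tool registry, and the `ACTION`/`FINISH` text convention are hypothetical stand-ins for illustration, not any specific framework's API:
    ```python
    # Reason -> Act -> Observe, repeated until the model declares it is done.
    def react_agent(task, tools, call_llm, max_steps=5):
        transcript = [f"Task: {task}"]
        for _ in range(max_steps):
            # Reason: ask the model what to do next, given the transcript so far.
            thought = call_llm("\n".join(transcript) + "\nThought:")
            transcript.append(f"Thought: {thought}")
            if thought.startswith("FINISH:"):
                return thought.removeprefix("FINISH:").strip()
            if thought.startswith("ACTION"):
                # Act: parse "ACTION tool_name: argument" and call the tool.
                header, _, arg = thought.partition(":")
                tool_name = header.split()[1]
                observation = tools[tool_name](arg.strip())
                # Observe: feed the tool result back into the context.
                transcript.append(f"Observation: {observation}")
        return None

    # Toy usage with a scripted "LLM" and a single tool:
    script = iter(["ACTION search: capital of France", "FINISH: Paris"])
    answer = react_agent(
        task="What is the capital of France?",
        tools={"search": lambda q: "Paris is the capital of France."},
        call_llm=lambda prompt: next(script),
    )
    print(answer)  # -> Paris
    ```
    This is Paul's point about frameworks: the loop is a dozen lines, so owning it yourself is often cheaper than debugging an abstraction around it.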
    --------  
    1:06:57
  • #050 TAKEAWAYS Bringing LLMs to Production: Delete Frameworks, Avoid Finetuning, Ship Faster
    Show notes identical to the full episode #050 above.
    --------  
    11:00
  • #049 BAML: The Programming Language That Turns LLMs into Predictable Functions
    Nicolay here,
    I think by now we are done with marveling at the latest benchmark scores of the models. It doesn't tell us much anymore that the latest generation outscores the previous by a few basis points.
    If you don't know how the LLM performs on your task, you are just duct-taping LLMs into your systems.
    If your LLM-powered app can't survive a malformed emoji, you're shipping liability, not software.
    Today, I sat down with Vaibhav (co-founder of Boundary) to dissect BAML - a DSL that treats every LLM call as a typed function.
    It's like swapping duct-taped Python scripts for a purpose-built compiler.
    Vaibhav advocates for building first-principles-based primitives.
    One principle stood out: LLMs are just functions; build like that from day 1. Wrap them, test them, and let a human step in only where it counts.
    Once you adopt that frame, reliability patterns fall into place: fallback heuristics, model swaps, classifiers - the same playbook we already use for flaky APIs.
    We also cover:
    - Why JSON constraints are the wrong hammer - and how Schema-Aligned Parsing fixes it
    - Whether "durable" should be a first-class keyword (think async/await for crash-safety)
    - Shipping multi-language AI pipelines without forcing a Python microservice
    - Token-bloat surgery, symbol tuning, and the myth of magic prompts
    - How to keep humans sharp when 98% of agent outputs are already correct
    💡 Core Concepts
    Schema-Aligned Parsing (SAP): Parse first, panic later. The model can hand you Markdown, half-baked YAML, or rogue quotes - SAP coerces it into your declared type or raises. No silent corruption. (A sketch follows these show notes.)
    Symbol Tuning: Labels eat up tokens and often don't help with accuracy (in some cases they even hurt). Rename PasswordReset to C7, keep the description human-readable.
    Durable Execution: A computing paradigm where program execution state persists despite failures, interruptions, or crashes. It ensures that operations resume exactly where they left off, maintaining progress even when systems go down. (See the second sketch below.)
    Prompt Compression: Every extra token is latency, cost, and entropy. Axe filler words until the prompt reads like assembly. If output degrades, you cut too deep - back off one line.
    📶 Connect with Vaibhav:
    LinkedIn
    X / Twitter
    BAML
    📶 Connect with Nicolay:
    Newsletter
    LinkedIn
    X / Twitter
    Bluesky
    Website
    My Agency Aisbach (for AI implementations / strategy)
    ⏱️ Important Moments
    New DSL vs. Python Glue: [00:54] Why bolting yet another microservice onto your stack is cowardice; BAML compiles instead of copies.
    Three-Nines on Flaky Models: [04:27] Designing retries, fallbacks, and human overrides when GPT eats dirt 5% of the time.
    Native Go SDK & OpenAPI Fatigue: [06:32] Killing thousand-line generated clients; typing go get instead.
    "LLM = Pure Function" Mental Model: [15:58] Replace mysticism with f(input) → output; unit-test like any other function.
    Tool-Calling as a Switch Statement: [18:19] Multi-tool orchestration boils down to switch(action) {…} - no cosmic "agent" needed.
    Sneak Peek - durable Keyword: [24:49] Crash-safe workflows without shoving state into S3 and praying.
    Symbol Tuning Demo: [31:35] Swapping verbose labels for C0, C1 slashes token cost and bias in one shot.
    Inside SAP Coercion Logic: [47:31] Int arrays to ints, scalars to lists, bad casts raise - deterministic, no LLM in the loop.
    Frameworks vs. Primitives Rant: [52:32] Why BAML ships primitives and leaves the "batteries" to you - less magic, more control.
    🛠️ Tools & Tech Mentioned
    BAML DSL & Playground
    Temporal • Prefect • DBOS
    outlines • Instructor • LangChain
    📚 Recommended Resources
    BAML Docs
    Schema-Aligned Parsing (SAP)
    🔮 What's Next
    Next week, we will continue going deeper into getting generative AI into production, talking to Paul Iusztin.
    💬 Join The Conversation
    Follow How AI Is Built on YouTube, Bluesky, or Spotify.
    If you have any suggestions for future guests, feel free to leave them in the comments or write me (Nicolay) directly on LinkedIn, X, or Bluesky. Or at [email protected]. I will be opening a Discord soon to get you guys more involved in the episodes! Stay tuned for that.
    ♻️ Here's the deal: I'm committed to bringing you detailed, practical insights about AI development and implementation. In return, I have two simple requests:
    - Hit subscribe right now to help me understand what content resonates with you
    - If you found value in this post, share it with one other developer or tech professional who's working with AI
    That's our agreement - I deliver actionable AI insights, you help grow this. ♻️
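    A toy illustration of the schema-aligned parsing idea (not BAML's actual implementation): strip Markdown fences, tolerate rogue quotes, then coerce values toward the declared types - scalars into lists, single-element arrays into ints - and raise on anything that cannot be aligned:
    ```python
    # Parse first, panic later: align sloppy model output to a declared schema.
    import json, re

    def parse_aligned(raw: str, schema: dict[str, type]) -> dict:
        text = re.sub(r"```(?:json)?", "", raw).strip()    # drop markdown fences
        try:
            data = json.loads(text)
        except json.JSONDecodeError:
            data = json.loads(text.replace("'", '"'))       # tolerate rogue quotes
        return {key: coerce(data.get(key), typ) for key, typ in schema.items()}

    def coerce(value, typ):
        if typ is list and not isinstance(value, list):
            return [value]                                   # scalar -> list
        if typ is int and isinstance(value, list) and len(value) == 1:
            return coerce(value[0], int)                     # [42] -> 42
        if not isinstance(value, typ):
            raise TypeError(f"cannot align {value!r} to {typ.__name__}")
        return value

    # Malformed-but-recoverable model output:
    raw = "```json\n{'name': 'reset', 'codes': 7, 'count': [3]}\n```"
    print(parse_aligned(raw, {"name": str, "codes": list, "count": int}))
    # -> {'name': 'reset', 'codes': [7], 'count': 3}
    ```
    The coercion is deterministic - no LLM in the loop - which is what makes its failures debuggable.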
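    And a generic sketch of durable execution as defined above, using a hypothetical file-based checkpoint store rather than Temporal's or BAML's real machinery: each completed step is persisted, so a re-run after a crash resumes where it left off instead of redoing work:
    ```python
    # Durable-execution sketch: checkpoint each step's result before moving on.
    import json, os

    class DurableRun:
        def __init__(self, run_id: str, path: str = "checkpoints"):
            os.makedirs(path, exist_ok=True)
            self.file = os.path.join(path, f"{run_id}.json")
            if os.path.exists(self.file):
                with open(self.file) as f:
                    self.state = json.load(f)   # resume prior progress
            else:
                self.state = {}

        def step(self, name: str, fn, *args):
            if name in self.state:              # finished before a crash: skip
                return self.state[name]
            result = fn(*args)                  # execute the step once
            self.state[name] = result
            with open(self.file, "w") as f:     # persist before moving on
                json.dump(self.state, f)
            return result

    # Usage: re-running this script after a crash skips completed steps.
    run = DurableRun("article-123")
    draft = run.step("draft", lambda: "first draft")
    edited = run.step("edit", lambda d: d + " (edited)", draft)
    print(edited)
    ```
    A first-class `durable` keyword would amount to the compiler inserting these checkpoints for you, instead of you shoving state into S3 and praying.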
    --------  
    1:02:38

More Technology podcasts

About How AI Is Built

Real engineers. Real deployments. Zero hype. We interview the top engineers who actually put AI in production. Learn what the best engineers have figured out through years of experience. Hosted by Nicolay Gerold, CEO of Aisbach and CTO at Proxdeal and Multiply Content.
Podcast website

Listen to How AI Is Built, a technology podcast, and many other podcasts from around the world with the radio.pl app

Get the free radio.pl app

  • Bookmark stations and podcasts
  • Stream via Wi-Fi or Bluetooth
  • Supports CarPlay & Android Auto
  • Even more features
Social media
v7.20.1 | © 2007-2025 radio.de GmbH