
Industry40.tv

Kudzai Manditereza

Latest episode

94 episodes

  • Industry40.tv

    How to Build AI Solutions That Actually Work on the Factory Floor: Renan Devillieres - Founder & CEO, OSS Ventures

    01.04.2026 | 43 min.
    **Podcast Name:** AI in Manufacturing Podcast 

    **Episode Title:** How to Build AI Solutions That Actually Work on the Factory Floor

**Guest:** Renan Devillieres, Founder & CEO, OSS Ventures

    **Host:** Kudzai Manditereza

     

    ---

     

    ## 1. Episode Summary

     

This episode explores why only 5% of factories currently operate like tech companies — and what it will take to reach 50% within a decade. Renan Devillieres, founder and CEO of OSS Ventures, a Paris- and Boston-based venture builder with 22 spun-out companies live in 3,800 factories worldwide, shares hard-won lessons from visiting over 900 manufacturing sites and deploying AI across 100+ factories in the past two years. Drawing on his background as a former McKinsey consultant, factory director, and tech startup founder, Devillieres explains why most manufacturing AI initiatives fail, how to industrialize the discovery process, and why designing the human experience of managing AI agents is the most underestimated challenge in scaling industrial AI. Listeners will learn the concrete frameworks OSS Ventures uses to validate problems before building, the "10x test" for deciding what to pursue, and why the factory of the future requires fewer but far better-paid people. This episode is essential for anyone leading AI adoption in manufacturing or building software products for the factory floor.

     

    ---

     

    ## 2. Key Questions Answered in This Episode

     

    - **What does a tech-enabled factory look like compared to a traditional factory?**

    - **Why do 85% of manufacturing AI projects fail, and how can you beat those odds?**

    - **How do you identify the right AI use cases on the factory floor?**

    - **What is the "10x test" for validating manufacturing AI opportunities?**

    - **Why is tribal knowledge the biggest hidden barrier to AI in manufacturing?**

    - **How do you scale an AI solution from one factory to hundreds?**

    - **Should AI be embedded into existing products or built as a new experience layer?**

     

    ---

     

    ## 3. Episode Highlights with Timestamps

     

    **[1:05]** — **Renan's Background** — From math student to McKinsey consultant to factory director to tech startup founder, and how that path led to creating OSS Ventures.

     

    **[3:58]** — **OSS Ventures by the Numbers** — 22 companies spun out, 3,800 factories served, 200,000 monthly users, and €41 million in combined portfolio revenue.

     

    **[4:43]** — **What a Tech-Enabled Factory Looks Like** — Why Tesla's Austin and Shanghai factories cost roughly the same to operate, and how 5 engineers at Xiaomi replace 15 at BMW.

     

    **[8:32]** — **The Skills Gap in Manufacturing Leadership** — Why your digitization leader must understand code, just as a factory director must be able to read a plan.

     

    **[12:36]** — **The Talent Attraction Myth** — Why manufacturing doesn't have a talent problem — it has a system problem that makes jobs low-leverage and low-pay.

     

    **[13:29]** — **The Historical Parallel to Early 20th Century Industrialization** — How AI is creating intellectual leverage the same way machines created physical leverage.

     

    **[16:35]** — **Why 85% of AI Projects Fail — and Four Key Insights** — Choose big problems, use GenAI to write deterministic code, extract tribal knowledge, and design the human-in-the-loop experience.

     

    **[24:52]** — **The OSS Ventures Validation Process** — The "10x test," the three-out-of-ten factory director rule, and why money is the only real signal of demand.

     

    **[29:30]** — **Spotting AI Opportunities on the Shop Floor** — Look for pockets of people bottlenecked with 35-megabyte Excel files.

     

    **[34:34]** — **Why Copilots on Legacy Software Are "Chocolate-Covered Broccoli"** — The case for building entirely new AI-native experiences instead of bolting AI onto 20-year-old interfaces.

     

    **[36:20]** — **Scaling from 1 to 600 Factories** — Why you need both insane product quality and military-grade deployment discipline.

     

    **[42:41]** — **Prediction: Manufacturing Wages Up 25% and 25% of MIT Grads Enter Manufacturing Within 5 Years.**

     

    ---

     

    ## 4. Key Takeaways

     

    - **Choose big problems, not small ones:** AI projects in production are expensive. OSS Ventures only pursues opportunities where the solution delivers a 10x improvement over the status quo — measured in hard numbers, not feelings. If the economics don't justify the investment, don't build.

     

    - **GenAI writes the code, but deterministic code runs in production:** Across OSS Ventures' last eight AI projects, generative AI was used to create the underlying code, but the deployed system runs deterministic, auditable logic. You're not "vibe coding" your way to manufacturing an airplane.

     

    - **30–40% of critical data lives in people's heads:** Enterprise systems and ERPs don't contain everything. In one sock factory, 850 rules governing R&D existed only as tribal knowledge. Extracting this knowledge requires being physically present on the shop floor.

     

    - **Design the experience of the AI agent manager:** The new manufacturing role is managing AI agents, not doing the manual work. This requires more design investment, not less. Every successful OSS deployment created an experience where the operator felt in control of the system.

     

    - **Validate with money, not compliments:** Before building anything, OSS Ventures pitches the concept to 10 factory directors with a pay-on-results model. If fewer than three commit, the project doesn't launch. People are nice — only financial commitment reveals real demand.

     

    - **Scale requires both product excellence and deployment discipline:** Premature scaling kills companies. First, build a product users love. Then deploy with a process so detailed it resembles a military operation — specifying exactly what data, training, and configuration happens on each day.

     

    - **Shared infrastructure is a right-to-play, not a nice-to-have:** Cybersecurity compliance, ERP connectivity, and standard data structures must be solved before scaling. OSS Ventures provides this as shared "tech bricks" across its portfolio so startups don't have to build it from scratch.

     

    ---

     

    ## 5. Notable Quotes

     

> "Why the heck is your digitization guy someone who never wrote a line of code?" — Renan Devillieres, Founder & CEO, OSS Ventures

     

> "I don't think you have a talent attraction problem. I think you have a system problem that makes it so that people are not well paid." — Renan Devillieres, Founder & CEO, OSS Ventures

     

> "Slapping a copilot with a RAG on top of a program designed 20 years ago is not innovation — it's laziness." — Renan Devillieres, Founder & CEO, OSS Ventures

     

> "You're not vibe coding your way to create an airplane." — Renan Devillieres, Founder & CEO, OSS Ventures

     

> "I don't think people are against AI. I think people are against bad products." — Renan Devillieres, Founder & CEO, OSS Ventures

     

    ---

     

    ## 6. Key Concepts Explained

     

    **Venture Builder (Studio Model)**

    Definition: A venture builder is an organization that systematically identifies market opportunities, builds initial products with an in-house team, validates product-market fit, and then recruits external founders to lead each company as a separate entity.

    Why it matters: This model de-risks early-stage industrial software by absorbing the cost and uncertainty of discovery and initial development.

    Episode context: OSS Ventures has used this model to launch 30 projects, spin out 22 companies, and reach 3,800 factories in five years.

     

    **The 10x Test**

    Definition: A validation framework requiring that any proposed AI solution must deliver outcomes at least 10 times better than the current alternative — measured in time, cost, or quality — before development begins.

    Why it matters: It prevents teams from building incremental improvements that don't justify the cost and complexity of AI deployment.

Episode context: Devillieres illustrated this with a sock R&D example: reducing development time from 4–6 months and $35K to one week and $2K.
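The 10x test lends itself to a quick sanity check. The sketch below (function names are mine, not from the episode) applies the rule to the sock R&D numbers and shows that the time metric clears the bar while cost alone would not:

```python
# Illustrative sketch of the "10x test": does a proposed solution beat the
# status quo by at least 10x on some metric where lower is better?
# Function and variable names are hypothetical, not from the episode.

def improvement_factor(baseline: float, proposed: float) -> float:
    """How many times better the proposed value is (lower is better)."""
    return baseline / proposed

def passes_10x_test(baseline: dict, proposed: dict, threshold: float = 10.0) -> bool:
    """True if at least one shared metric improves by >= threshold."""
    return any(
        improvement_factor(baseline[m], proposed[m]) >= threshold
        for m in baseline.keys() & proposed.keys()
    )

# Sock R&D example: 4-6 months and $35K down to one week and $2K
# (taking ~20 weeks as the midpoint of the baseline range).
status_quo = {"weeks": 20, "cost_usd": 35_000}
ai_solution = {"weeks": 1, "cost_usd": 2_000}

print(improvement_factor(status_quo["weeks"], ai_solution["weeks"]))        # 20.0
print(improvement_factor(status_quo["cost_usd"], ai_solution["cost_usd"]))  # 17.5
print(passes_10x_test(status_quo, ai_solution))  # True
```

The point of measuring "in hard numbers, not feelings" is exactly this: the decision reduces to an arithmetic check anyone can audit.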

     

    **Tribal Knowledge Extraction**

    Definition: The process of capturing undocumented rules, heuristics, and expertise that exist only in the minds of experienced factory workers and encoding them into AI systems.

    Why it matters: 30–40% of the data needed for manufacturing AI doesn't exist in any enterprise system — it lives only in workers' heads and must be captured before AI can use it.
  • Industry40.tv

    Scaling Agentic AI Workflows in Manufacturing with Causal AI: Bernhard Kratzwald - Co Founder & CTO, EthonAI

    25.03.2026 | 55 min.
    ## Episode: Building and Scaling Agentic AI Workflows in Manufacturing

     

    **Podcast Name:** AI in Manufacturing Podcast 

    **Episode Title:** How to Build and Scale Agentic AI Workflows in Manufacturing

**Guest:** Bernhard Kratzwald, Co-Founder & CTO at EthonAI

    **Host:** Kudzai Manditereza

    ---

     

    ## Episode Summary

     

This episode explores how manufacturers can build and scale agentic AI workflows to achieve operational excellence across factories. Bernhard Kratzwald, Co-Founder and CTO at EthonAI, explains why traditional continuous improvement methods have reached their limits and how purpose-built industrial AI—grounded in process knowledge graphs and causal reasoning—unlocks the next wave of manufacturing optimization. Key insights include why deep data contextualization through knowledge graphs is essential for agentic AI (not just basic tag hierarchies), how causal AI differs from correlation-based analytics by making root cause findings actionable, and why a layered architecture of data infrastructure, specialized model layer, and application layer prevents hallucinated recommendations in safety-critical environments. Bernhard also shares real-world results, including a globally scaled deployment at Siemens that generated over $10 million in documented savings. Whether you're evaluating industrial AI platforms or architecting your data stack for agentic workflows, this episode provides a practical roadmap from data ingestion to autonomous process control.

    ---

     

    ## Key Questions Answered in This Episode

     

    - What is a process knowledge graph, and why is it essential for agentic AI in manufacturing?

    - How does causal AI differ from correlation-based analytics in industrial settings?

    - What architecture layers are needed to run agentic AI workflows reliably in manufacturing?

    - Why can't general-purpose LLMs like ChatGPT or Claude replace purpose-built industrial AI models?

    - How do you build a knowledge graph iteratively without delaying ROI?

    - What does a typical deployment timeline look like for industrial AI platforms?

    - How should manufacturers handle security and governance when connecting OT systems to cloud-based AI?

    ---

    ## Episode Highlights with Timestamps

     

**[2:27]** – **Bernhard's Background & EthonAI Origin Story** — How a PhD in computer science and collaboration with Fortune 500 manufacturers like Siemens led to founding EthonAI, now approaching 100 employees with offices in Zurich and New York.

     

**[4:24]** – **Why Traditional Methods Have Maxed Out** — Bernhard explains the "20 cents of every dollar goes to waste" principle and why classic automation and data science have hit diminishing returns, requiring agentic workflows and foundation models for the next improvement frontier.

     

    **[7:49]** – **What Deep Contextualization Really Means** — A detailed walkthrough of why basic UNS tag hierarchies aren't sufficient for agentic AI, using the example of tracing a batch rework problem across tanks, recipes, time series, and operator interventions.

     

**[12:45]** – **Process Knowledge Graph Explained** — Bernhard defines ontologies and knowledge graph triples, showing how semantic meaning enables questions like "which five machines cost the most downtime today" versus simple tag queries.

     

**[16:02]** – **Build the Graph First or Build the Application First?** — The chicken-and-egg debate on knowledge graph strategy, and why EthonAI chose to build the graph behind ROI-delivering applications rather than creating a monolithic model upfront.

     

    **[18:16]** – **Causal AI vs. Correlation Analytics** — The ice cream and shark attacks analogy applied to manufacturing: how causal models turn seasonal production correlations into actionable insights about cooling water temperature adjustments.

     

**[21:28]** – **The Full Agentic AI Architecture Stack** — Bernhard outlines three layers: data infrastructure (connectivity + knowledge graph), model layer (purpose-built causal and inspection models), and application layer (agentic workflows or human interfaces).

     

    **[24:54]** – **Why General-Purpose LLMs Aren't Enough for Manufacturing** — Safety-critical environments require models that understand spec limits, user manuals, and process constraints—not just pattern-matched text generation.

     

**[29:33]** – **EthonAI Platform Walkthrough** — A modular enterprise platform that measures what's happening, understands why, suggests improvement actions, and enables autonomous process control through dynamic SOPs and centerline dashboards.

     

    **[37:19]** – **Causal AI's Medical Origins Applied to Manufacturing** — How treating a production process like a patient (healthy or sick) allows causal models to extract actionable knowledge from months of operator interventions and process adjustments.

     

**[48:03]** – **Deployment Timeline and Forward Deployed Engineers** — EthonAI's Palantir-inspired deployment model with on-site engineers, achieving first value consistently in under three months.

     

    **[51:17]** – **Case Studies: Siemens and Lindt & Sprüngli** — Globally scaled deployments with $10M+ documented savings at Siemens (published by the World Economic Forum) and significant waste reductions at Lindt & Sprüngli's chocolate production facilities.

     

    ---

     

    ## Key Takeaways

     

    - **Knowledge graphs are non-negotiable for agentic AI:** A unified namespace provides basic tag context, but agentic workflows require deep semantic relationships—connecting batches to recipes, tanks to flow paths, and time series to operator interventions. Without this ontology layer, AI agents cannot perform meaningful root cause investigation.

     

    - **Causal AI makes insights actionable, not just interesting:** Correlation analytics can tell you production runs better in winter, but causal AI identifies that lower feeding water temperature improves cooling behavior, giving operators a specific lever to pull in summer months. This distinction is critical for safety-critical environments where recommendations must be trustworthy.

     

    - **Purpose-built industrial models prevent hallucination in critical decisions:** By placing a specialized causal model layer between the data infrastructure and the agentic application layer, recommendations are grounded in verified causal relationships rather than LLM pattern matching. The agentic layer enriches these findings with SOPs and documentation but cannot fabricate the underlying analysis.

     

- **Start with ROI-delivering applications, not infrastructure perfection:** Rather than building a complete knowledge graph before deploying AI, EthonAI's approach builds the graph incrementally behind applications that deliver measurable value. Users often don't realize they're building a knowledge graph because they're simply modeling their data while getting returns.

     

    - **Change management is as important as the technology:** Operators and process engineers have solved problems for decades without data-driven tools. AI systems must explain their reasoning through causal chains, build trust incrementally, and integrate into existing workflows without adding friction—even one extra second per task multiplied across thousands of repetitions creates significant resistance.

     

    - **Security requires one-way data flow by design:** When connecting legacy OT systems (some 20-30 years old) to cloud AI, the architecture must ensure information flows only from factory to cloud, with no return path that could serve as an attack vector. Edge-deployable modules handle latency-sensitive tasks like optical inspection independently.

     

    - **Cross-factory intelligence is the next major value unlock:** Most manufacturers still analyze individual lines or factories in isolation. Connecting multiple factories to shared knowledge graph concepts enables cross-site learning—identifying why one line outperforms another and transferring those insights globally.
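The seasonal-correlation point above can be made concrete with a toy simulation (not EthonAI's method; all numbers invented): season drives cooling-water temperature, and temperature drives defects, so "winter is better" is a correlation while water temperature is the actual lever:

```python
# Toy simulation of correlation vs. causation in a production process.
# Season -> water temperature -> defects; season has NO direct effect.
# All parameters are invented for illustration.
import random

random.seed(0)
rows = []
for _ in range(20_000):
    season = random.choice(["summer", "winter"])
    temp_c = random.gauss(18 if season == "summer" else 10, 2)  # confounded path
    # True mechanism: defect probability rises with water temperature only.
    p_defect = min(max(0.02 * (temp_c - 5), 0.0), 1.0)
    rows.append((season, temp_c, random.random() < p_defect))

def defect_rate(subset):
    return sum(defect for _, _, defect in subset) / len(subset)

summer = [r for r in rows if r[0] == "summer"]
winter = [r for r in rows if r[0] == "winter"]
# Naive view: winter appears to "fix" defects.
print(defect_rate(summer), defect_rate(winter))

# Hold temperature fixed (a crude adjustment): within a narrow band,
# the seasonal gap largely disappears, pointing at water temperature.
band = [r for r in rows if 13 <= r[1] <= 15]
print(defect_rate([r for r in band if r[0] == "summer"]),
      defect_rate([r for r in band if r[0] == "winter"]))
```

Real causal models do far more than stratify on one variable, but the operational payoff is the same: the recommendation becomes "lower the feeding water temperature in summer," not "wait for winter."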

     

    ---

     

    ## Notable Quotes

     

> "Every dollar you spend on manufacturing, 20 cents go to waste. That has been true 50 years ago, and it will be true probably 50 years in the future, because there's always 20% to get." — Bernhard Kratzwald, CTO at EthonAI

     

> "The insights you get cannot be hallucinated, because they're coming from this underlying model layer—from this causal model. The LLM agentic layer on top cannot fabricate that." — Bernhard Kratzwald, CTO at EthonAI

     

> "You're never done with building your knowledge graph, because there's always more knowledge you can distill out of it." — Bernhard Kratzwald, CTO at EthonAI

     

> "The only mistake you can make today is not doing anything. The best time to start was yesterday, and the second best time to start would be today." — Bernhard Kratzwald, CTO at EthonAI

     

> "Every AI system will make some mistakes. So here is my best, wholehearted suggestion, and this is why I believe it's true—and now you can click and triple down, follow the root cause links, and investigate everything." — Bernhard Kratzwald, CTO at EthonAI

     

    ---

     

    ## Key Concepts Explained

     

    **Process Knowledge Graph**

    Definition: A semantic data model built on ontologies that assigns meaning to industrial data and defines how different data elements relate to each other—connecting machines, sensors, batches, recipes, and physical flows into a queryable graph structure using subject-predicate-object triples.

    Why it matters: Without this semantic layer connecting batches, recipes, equipment, and time series, AI agents cannot trace relationships across the process and so cannot perform meaningful root cause investigation.
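A process knowledge graph can be pictured as nothing more than a set of subject-predicate-object triples plus pattern queries. The toy sketch below (entity names invented) traces the kind of batch-to-tank path described in the rework example earlier:

```python
# Minimal sketch of a process knowledge graph as subject-predicate-object
# triples. Entity and predicate names are invented for illustration.

triples = {
    ("batch_4711", "produced_on", "line_2"),
    ("batch_4711", "uses_recipe", "recipe_A"),
    ("line_2", "fed_by", "tank_3"),
    ("tank_3", "measured_by", "temp_sensor_9"),
    ("temp_sensor_9", "logs_to", "timeseries_ts9"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Root cause trace: which tank fed the line that produced batch_4711?
line = query(s="batch_4711", p="produced_on")[0][2]
tank = query(s=line, p="fed_by")[0][2]
print(tank)  # tank_3
```

Production graph stores add ontology validation, reasoning, and query languages like SPARQL or Cypher, but the queryable-relationships idea is exactly this.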
  • Industry40.tv

    Why The Unified Namespace is The Essential Foundation for Industrial AI & Agentic Operations: Walker Reynolds - President, 4.0 Solutions

17.03.2026 | 1 hr 2 min.
    ## Episode: The State of Industrial AI, Unified Namespace, and Knowledge Graphs After PROVE IT 2025

     

    **Podcast Name:** AI in Manufacturing Podcast 

    **Guest:** Walker Reynolds, President & Solutions Architect at 4.0 Solutions, Founder of the PROVE IT Conference

    **Host:** Kudzai Manditereza

    **Target Audience:** Manufacturing data leaders, IT/OT solution architects, and digital transformation professionals

     

    ---

     

    ## Episode Summary

     

    Walker Reynolds, President and Solutions Architect at 4.0 Solutions and founder of the PROVE IT conference, delivers an unfiltered assessment of where industrial AI actually stands in 2025. Drawing from conversations with over 1,000 attendees at this year's PROVE IT conference—70% of whom were end users working in manufacturing—Reynolds identifies three critical industry shifts: AI fatigue is setting in as vendors outpace market readiness, knowledge graphs have emerged as the essential technology for enabling agentic AI in manufacturing, and the gap between digitally mature and immature manufacturers is widening. The conversation covers why most manufacturers still aren't getting value from their unified namespace implementations, the five most practical AI applications seen at PROVE IT, and why autonomous agents are a mathematical impossibility given current LLM reliability. Reynolds closes with his complete recommended technology stack for manufacturers and a prediction that plant floors will see *more* people, not fewer—but they'll be analysts supervising AI agents rather than middle managers managing people.

     

    ---

     

    ## Key Questions Answered in This Episode

     

    - What is the current state of AI adoption in manufacturing in 2025?

    - Why are some manufacturers failing to get value from unified namespace implementations?

    - What role do knowledge graphs play in enabling agentic AI for manufacturing?

    - What are the most practical AI applications for manufacturers right now?

    - Can AI agents run autonomously in manufacturing operations?

    - What does the ideal industrial data architecture stack look like for a small to midsize manufacturer?

    - How does unified namespace serve as the backbone for agentic AI?

     

    ---

     

    ## Episode Highlights with Timestamps

     

    **[1:56]** — **Introduction and episode overview** — Kudzai sets the agenda: PROVE IT conference takeaways, unified namespace adoption status, agentic AI's role, and the ideal industrial data architecture.

     

    **[4:23]** — **Walker Reynolds' background** — From salt mines to tier-one automotive to founding 4.0 Solutions, IoT University, and the PROVE IT conference—plus why he always introduces himself as if no one knows who he is.

     

    **[8:36]** — **Three core observations from PROVE IT 2025** — AI fatigue is real, most end users still ask "where do I start?", and knowledge graphs emerged as the breakout technology everyone now understands they need.

     

    **[20:37]** — **Top five practical AI applications from PROVE IT** — WinCC OA and Tatsoft for AI-assisted development, Atanta Analytics' prompt-to-insights, Thread Cloud's knowledge graph-driven root cause analysis, and Maestro Hub's live module generation with Claude Code.

     

    **[29:08]** — **The knowledge gap in agentic AI adoption** — Reynolds draws an analogy to the leap from algebra to calculus, warning that not every organization has someone who can bridge the gap to agent-based architectures.

     

    **[35:04]** — **Why autonomous agents are a myth** — Current LLMs are 99.9% reliable at best—one error per 1,000 words—compared to a PLC's nine nines of reliability. Agents must be human-supervised.

     

    **[42:55]** — **Why manufacturers fail or succeed with unified namespace** — The differentiator is understanding UNS as the real-time current state of the business, not a historical transaction store.

     

    **[52:09]** — **UNS as the backbone for agentic AI** — How agents use the semantic structure of UNS to navigate operations and then retrieve deeper context via MCP tools.

     

    **[54:40]** — **Walker's complete recommended technology stack** — From Docker and Node-RED to HiveMQ, Litmus, Frameworks 10, Thread Cloud, and Snowflake—the full architecture laid out step by step.

     

**[59:45]** — **Where AVEVA PI fits** — No need to rip and replace; limit PI to what it's good at (historian), and leverage AVEVA's more open Connect platform.

     

    **[1:02:11]** — **Prediction: More people on the plant floor, not fewer** — Fewer middle managers, more analysts supervising AI agents to optimize operations.

     

    ---

     

    ## Key Takeaways

     

    - **Knowledge graphs are the breakout technology of 2025:** Coming out of PROVE IT, even non-technical attendees understood that knowledge graphs—relational context between entities in an infrastructure—are essential for AI agents to navigate and reason through manufacturing systems. Manufacturers should prioritize building fluency in knowledge graph concepts now.

     

    - **AI fatigue is real, and vendors are outpacing market readiness:** Most end users are still asking "where do I start?" while vendors are shipping agentic AI features without clear problem-solution fit. The maturity gap between the most and least digitally advanced manufacturers is widening.

     

    - **Autonomous agents are not viable in manufacturing:** The most reliable LLMs achieve 99.9% accuracy—one error per 1,000 words—while PLCs operate at nine nines of reliability. Agents should be treated as force multipliers for human workers, not autonomous replacements.

     

    - **Unified namespace success depends on understanding what it is—and isn't:** UNS is the real-time current state of the business, semantically organized. Manufacturers who fail with UNS are trying to make it something it's not, such as a historical transaction store. It serves as the originating context that agents use before querying deeper systems.

     

    - **The most practical AI use cases are about building, not automating:** The top applications at PROVE IT involved using AI to accelerate development (natural language to code, dashboards, and workflows), not replacing human decision-making on the plant floor.

     

    - **Predefined workflows inside agents are a game changer:** Rather than letting agents create their own reasoning steps on the fly, giving engineers the ability to predefine part of an agent's workflow dramatically improves reliability and practical value.

     

    - **Start building AI fluency now, even if you haven't started your data journey:** Reynolds mandated his team use chatbots daily in January 2023—not because he knew how AI would be used, but to build fluency. Every manufacturer should be doing the same with knowledge graphs and agent concepts today.
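Reynolds' reliability gap can be made concrete with the standard at-least-one-error formula, 1 − (1 − p)^n. The output counts below are illustrative, not from the episode:

```python
# How fast errors accumulate: the probability of at least one error in
# n independent outputs is 1 - (1 - p)^n. Counts are illustrative only.

def p_at_least_one_error(p_error: float, n: int) -> float:
    return 1 - (1 - p_error) ** n

llm_p = 1e-3   # "99.9% reliable": roughly one error per 1,000 outputs
plc_p = 1e-9   # "nine nines" of reliability

for n in (100, 1_000, 10_000):
    print(n,
          round(p_at_least_one_error(llm_p, n), 4),   # agent: climbs toward 1
          p_at_least_one_error(plc_p, n))             # PLC: stays negligible
```

At 99.9% per-output accuracy, a few thousand unsupervised outputs make an error near-certain, while the PLC's failure probability stays in the parts-per-million range, which is the mathematical case for keeping a human in the loop.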

     

    ---

     

    ## Notable Quotes

     

    > "The only person who believes agents can run autonomously are people who don't work with agents." — Walker Reynolds, President at 4.0 Solutions

     

    > "Think of agents as a force multiplier for your workforce, a way of unlocking the potential in people." — Walker Reynolds, President at 4.0 Solutions

     

    > "If you're not getting value out of unified namespace, then you're using it for something that it isn't." — Walker Reynolds, President at 4.0 Solutions

     

    > "We're going to see more people on the plant floor, not less. They're going to be analysts supervising AI to optimize operations." — Walker Reynolds, President at 4.0 Solutions

     

    > "Your homework this year is learn knowledge graphs, because you're going to need them." — Walker Reynolds, President at 4.0 Solutions

     

    ---

     

    ## Key Concepts Explained

     

    **Unified Namespace (UNS)**

    Definition: A unified namespace is a single, semantically organized source of truth that represents the real-time current state of a business—all events, data, and information models contextualized and normalized in one accessible structure.

    Why it matters: UNS serves as the foundational architecture for digital transformation and is the originating context layer that AI agents query to understand current operations before reasoning through deeper systems.

    Episode context: Reynolds emphasized that manufacturers failing with UNS misunderstand its purpose, treating it as a historical data store rather than a real-time state representation.
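As a mental model, the "real-time current state" idea can be sketched as a retained key-value tree. Topic paths here are invented, and the sketch deliberately ignores MQTT transport and brokers:

```python
# Schematic sketch of the UNS idea: a semantically organized topic tree that
# always holds the CURRENT state, never history. Topic names are invented.

class UnifiedNamespace:
    def __init__(self):
        self._state = {}  # topic path -> last-known value

    def publish(self, topic: str, payload):
        """Overwrite the current state at this topic (no history is kept)."""
        self._state[topic] = payload

    def browse(self, prefix: str) -> dict:
        """Return the current state of everything under a semantic path."""
        return {t: v for t, v in self._state.items() if t.startswith(prefix)}

uns = UnifiedNamespace()
uns.publish("acme/dallas/packaging/line1/oee", 0.81)
uns.publish("acme/dallas/packaging/line1/state", "running")
uns.publish("acme/dallas/packaging/line2/state", "down")

# An agent's first question: what is the current state of the Dallas site?
print(uns.browse("acme/dallas"))
```

Note what `publish` does not do: it never appends. Treating this structure as a transaction log or historian is exactly the misunderstanding Reynolds says makes UNS projects fail.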

     

    **Knowledge Graphs**

    Definition: Knowledge graphs are data structures that represent the relationships between entities (nodes) in a system, providing relational context that enables navigation and reasoning across an infrastructure.

    Why it matters: AI agents require knowledge graphs to navigate up and down a business's infrastructure, moving from an objective at one layer to the specific data location where answers reside.

    Episode context: Reynolds identified knowledge graphs as the breakout technology from PROVE IT 2025, with Thread Cloud's root cause analysis demo receiving mid-presentation applause for demonstrating practical agent-driven analysis via knowledge graphs.

     

    **Model Context Protocol (MCP)**

    Definition: MCP is a protocol that allows AI agents to connect to external tools and data sources, enabling them to retrieve information and perform actions beyond what's contained in their training data.

    Why it matters: MCP enables agents to go beyond the initial context from UNS and query historical data, work orders, and other systems of record to retrieve the deeper context a decision requires.
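Real MCP is a JSON-RPC-based protocol with official SDKs; the pure-Python sketch below only mimics the shape of a tool call (a named tool, structured arguments, a structured result) with invented names, to show how an agent reaches past the UNS into a system of record:

```python
# Schematic of the MCP pattern: an agent invokes a named tool with structured
# arguments and gets a structured result back. This mimics the shape of a
# tool call only; it is NOT the real MCP wire protocol. Names are invented.
import json

TOOLS = {}

def tool(fn):
    """Register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def query_work_orders(machine_id: str) -> list:
    # Stand-in for a real system-of-record lookup (CMMS, MES, etc.).
    return [{"machine": machine_id, "order": "WO-1042", "status": "open"}]

def handle(request_json: str) -> str:
    """Dispatch a JSON-RPC-shaped tool call and return a JSON-RPC-shaped reply."""
    req = json.loads(request_json)
    result = TOOLS[req["params"]["name"]](**req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

reply = handle(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "query_work_orders",
               "arguments": {"machine_id": "press_7"}},
}))
print(reply)
```

The UNS gives the agent its starting context ("line 2 is down"); a tool call like this is how it then pulls the open work order that explains why.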
  • Industry40.tv

Unlocking Productivity With Causal Models and Agentic AI in Manufacturing: Michael Carroll - Global Executive in Industrial Innovation & AI, LNS Research

11.03.2026 | 1 hr 1 min.
    # AI in Manufacturing Podcast — Episode Show Notes

     

    ## Episode Details

    - **Podcast Name:** AI in Manufacturing Podcast (Industry40.tv)

- **Episode Title:** Unlocking Productivity With Causal Models and Agentic AI in Manufacturing

    - **Host:** Kudzai Manditereza

    - **Guest:** Michael Carroll

    - **Guest Title/Role:** Strategic Advisor & Fellow COO Council at LNS Research; Chief Strategy Officer at Trek AI

    - **Target Audience:** Manufacturing data leaders, COOs, VP of Operations, IT/OT solution architects, and digital transformation professionals

     

    ---

     

    ## 1. EPISODE SUMMARY

     

    Agentic AI is not another digital tool to add to the manufacturing technology stack — it is a fundamentally different species of software that treats decisions, not transactions, as the atomic unit of work. In this episode, Michael Carroll, Strategic Advisor at LNS Research and Chief Strategy Officer at Trek AI, explains why US manufacturing productivity has been flat since 2010 despite massive investments in digital tools, and why agentic AI with causal reasoning represents the structural fix. Carroll draws on his 15 years leading digital transformation at Georgia Pacific to reveal how the real productivity killer is not a lack of data or technology, but a cognitive overload crisis combined with organizational permission bottlenecks that drain value from companies in real time. He introduces a practical diagnostic framework — mapping inferencing load and permission load — that any operations leader can apply today to identify where value is leaking from their organization and where agentic AI can deliver immediate impact.

     

    ---

     

    ## 2. KEY QUESTIONS ANSWERED IN THIS EPISODE

     

    - Why has US manufacturing productivity been flat since 2010 despite massive digital investments?

    - What is agentic AI, and how is it fundamentally different from traditional manufacturing software like MES and ERP?

    - What is causal reasoning, and why does it matter more than explainable AI for manufacturing decisions?

    - How does the permission architecture in manufacturing organizations destroy value and slow decision velocity?

    - Where should COOs and VPs of Operations start when preparing their organizations for agentic AI?

    - Why do alignment meetings signal that a company's numbers can't be trusted?

    - How should IT and OT organizations restructure their relationship to enable competitive advantage?

     

    ---

     

    ## 3. EPISODE HIGHLIGHTS WITH TIMESTAMPS

     

    **[00:02]** - **Introduction & Guest Background** — Kudzai introduces Michael Carroll and his roles at LNS Research and Trek AI, emphasizing his prolific writing on LinkedIn about industrial AI.

     

    **[04:04]** - **Farm Roots and the Generalist Mindset** — Carroll shares how growing up on a farm in "Knock 'Em Stiff, Ohio" taught him orchestration and generalist thinking that shaped his approach to enterprise transformation.

     

    **[07:43]** - **The Flat Productivity Crisis** — Discussion of US Bureau of Labor Statistics data showing manufacturing productivity has been flat or declining from 2008-2023, despite heavy digitalization investments.

     

    **[09:39]** - **The COVID Productivity Paradox** — Carroll reveals how productivity actually spiked during COVID when corporate distractions were removed, disproving the hypothesis that talent attrition alone caused the decline.

     

    **[13:41]** - **The Cognitive Tipping Point** — Frontline workers now see 8x more information across 50% more equipment than in 1975, but have 50% less experience — creating a cognitive overload that degrades performance.

     

    **[16:56]** - **What Makes an Agent an Agent** — Carroll defines agentic AI through the lens of human agency: an agent shapes outcomes, bears your intention, but the responsibility remains yours.

     

    **[22:46]** - **Judea Pearl's Causal Ladder** — Deep explanation of how Pearl's three-layer causal framework (imagining, doing, observing) provides the mathematical foundation for trustworthy AI decision-making.

     

    **[24:49]** - **Chain of Reasoning vs. Explainability** — Carroll argues that "explainable AI" invites litigation, while causal chains of reasoning provide defensible, legitimate justification for decisions.

     

    **[30:00]** - **The Adaptive Architecture** — Carroll outlines the three-layer future architecture: ubiquitous connectivity, causal reasoning at the edge, and a trust/permission architecture at the center.

     

    **[36:39]** - **The Baum Study: Decision Speed and Performance** — Reference to J. Robert Baum's 2003 study of 318 companies showing decision speed — not decision quality — was the top predictor of company performance.

     

    **[47:22]** - **Causality Replaces Data Models** — Carroll explains why causal models are superior to traditional data models and ontologies, comparing data collection to stock options you wouldn't exercise immediately.

     

    **[53:30]** - **The Practical Starting Framework** — Carroll provides a step-by-step diagnostic: map your current architecture, identify where inferencing load and permission load are highest, and fix those intersection points first.
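    The observing-vs-doing distinction in Pearl's causal ladder can be made concrete with a toy simulation (illustrative only, not from the episode): a hidden confounder Z drives both X and Y, so the passively observed correlation P(Y|X) differs sharply from the interventional P(Y|do(X)).

    ```python
    import random

    random.seed(0)

    def sample(do_x=None):
        """One draw from a toy structural causal model.
        Z is a hidden confounder driving both X and Y;
        X has no causal effect on Y at all."""
        z = random.random() < 0.5
        x = z if do_x is None else do_x            # do(X=x) cuts the Z -> X edge
        y = random.random() < (0.9 if z else 0.1)  # Y is caused by Z only
        return x, y

    N = 100_000

    # Rung 1 (observing): estimate P(Y=1 | X=1) from passive data
    obs = [sample() for _ in range(N)]
    p_obs = sum(y for x, y in obs if x) / sum(x for x, _ in obs)

    # Rung 2 (doing): estimate P(Y=1 | do(X=1)) by intervening
    do = [sample(do_x=True) for _ in range(N)]
    p_do = sum(y for _, y in do) / N

    print(f"P(Y|X=1)     ≈ {p_obs:.2f}")  # high: confounded correlation
    print(f"P(Y|do(X=1)) ≈ {p_do:.2f}")   # near 0.5: no causal effect
    ```

    Conditioning looks like a strong effect; intervening reveals there is none — which is exactly why a chain of causal reasoning is more defensible than a correlation, however well "explained."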

     

    ---

     

    ## 4. Key Takeaways

     

    - **Manufacturing's productivity crisis is a cognitive overload problem, not a data problem:** Since 2010, frontline workers see 8x more information across 50% more equipment than in 1975, but have 50% less experience. More insights have not produced better performance — they have consumed the adaptive capacity workers need to make good decisions.

     

    - **Agentic AI treats decisions as the atomic unit of work, not transactions:** Unlike MES or ERP systems that automate transactions, agentic AI shapes outcomes by understanding what's true about the world, evaluating possible interventions, taking action, and learning from evidence. The responsibility always remains with the human.

     

    - **Causal reasoning provides defensible decisions; explainability invites litigation:** A chain of reasoning built through Judea Pearl's causal framework delivers the legitimate, defensible justification that governance structures require. Explainable AI merely offers interpretations that different stakeholders will contest — which is why alignment meetings exist.

     

    - **Decision speed outperforms decision quality as a predictor of company performance:** J. Robert Baum's 2003 study of 318 companies found that the highest-performing companies made decisions faster than competitors, centralized strategy while decentralizing operations, and only standardized things that were easy to standardize.

     

    - **The value leak happens between decision and action:** The time between knowing what to do and getting permission to do it is where most companies lose tremendous value. Permission architectures built around compliance — not governance — create vicious cycles between operations and IT that stall decision velocity.

     

    - **60% of value creation comes from staying focused:** Carroll's framework breaks down value creation: approximately 20% comes from doing the right things, 20% from doing those things right, and a full 60% from maintaining focus — which fragmented organizations systematically destroy.

     

    - **Start by mapping inferencing load and permission load:** Operations leaders should map how their company gets things done, identify where inferencing load (people synthesizing multiple insights to make decisions) and permission load (organizational gates) are both high, and target those intersection points first.
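    The mapping exercise above could be prototyped as a simple scoring pass over your recurring decision points. A minimal sketch — the decision points and 1–5 scores below are invented for illustration, not from the episode:

    ```python
    # Score each recurring decision point on inferencing load (how many
    # insights a person must synthesize) and permission load (how many
    # approval gates sit between decision and action), then target the
    # points where both are high.
    decision_points = [
        # (name, inferencing_load 1-5, permission_load 1-5) -- illustrative
        ("line changeover scheduling", 4, 5),
        ("quality-hold disposition",   5, 4),
        ("maintenance work order",     2, 5),
        ("raw material reorder",       3, 2),
        ("shift staffing",             2, 2),
    ]

    THRESHOLD = 4
    hotspots = sorted(
        (p for p in decision_points if p[1] >= THRESHOLD and p[2] >= THRESHOLD),
        key=lambda p: -(p[1] * p[2]),
    )
    for name, inf, perm in hotspots:
        print(f"fix first: {name} (inferencing={inf}, permission={perm})")
    ```

    The point of the sketch is the intersection: a decision point that is hard to reason about but easy to authorize, or vice versa, leaks less value than one that is both.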

     

    ---

     

    ## 5. Notable Quotes

     

    > "We're not trying to be right. We're trying to get this right — because we're experiencing a time in humanity that's never been experienced before." — Michael Carroll, Strategic Advisor at LNS Research & CSO at Trek AI

     

    > "No machine will ever feel the consequences of the actions it takes and the decisions it makes — so the responsibility is still yours." — Michael Carroll, Strategic Advisor at LNS Research & CSO at Trek AI

     

    > "You know you have a company that can't trust its numbers when you have an alignment meeting — because alignment meetings mean the politics matter more than the numbers." — Michael Carroll, Strategic Advisor at LNS Research & CSO at Trek AI

     

    > "Something that automates a work process is not an agent. Something that carries out a task is not an agent. Because it doesn't shape an outcome." — Michael Carroll, Strategic Advisor at LNS Research & CSO at Trek AI

     

    > "Structure makes you effective. You've got to go be effective before you can ever be efficient." — Michael Carroll, Strategic Advisor at LNS Research & CSO at Trek AI

     

    ---

     

    ## 6. Key Concepts Explained

     

    **Agentic AI (Enterprise Agency)**

    Definition: Agentic AI is a category of artificial intelligence that shapes outcomes by understanding the current state of the world, evaluating possible interventions, taking action, and learning from evidence — operating on behalf of humans while the responsibility remains with the human.

    Why it matters: It represents a structural shift from transaction-based software (MES, ERP) to decision-based systems that can collapse the time between insight and action in manufacturing operations.

    Episode context: Carroll distinguishes agentic AI from task automation by emphasizing that true agents bear your intention and shape outcomes, rather than simply executing predefined workflows.

     

    **Causal Reasoning (Judea Pearl's Ladder of Causation)**

    Definition: A mathematical framework developed by Turing Award winner Judea Pearl consisting of three layers — observing (seeing associations in data), doing (intervening and measuring the effect), and imagining (reasoning about counterfactuals).
  • Industry40.tv

    Context Engineering Techniques for Building Reliable Industrial AI Agents: Zach Etier - VP of Architecture, Flow Software

    05.03.2026 | 1 hr 12 min.
    **Podcast Name:** AI in Manufacturing Podcast (Industry40.tv)

    **Episode Title:** Context Engineering Techniques for Building Reliable Industrial AI Agents

    **Guest:** Zach Etier, VP of Architecture at Flow Software

    **Host:** Kudzai Manditereza

    ---

    ## 1. Episode Summary
    This episode explores context engineering — the discipline of curating and managing the information supplied to AI agents — and why it is the key to building reliable industrial AI systems. Zach Etier, VP of Architecture at Flow Software, joins host Kudzai Manditereza to break down why simply pumping more data into an AI agent's context window actually degrades performance through dilution, hallucination, and lost instructions. Zach walks through three core context engineering techniques — persisting context, summarization/compaction, and isolation via sub-agents — and explains how each one maps to real manufacturing use cases like automated shift-handover reports. The conversation also covers the practical differences between skills, MCP servers, and sub-agents, and why deterministic code should handle calculations while agents handle orchestration. Finally, Zach makes the case that knowledge graphs with formal ontologies will become essential data architecture for scaling industrial AI across the enterprise. Whether you are evaluating your first agent pilot or planning multi-site deployment, this episode provides a concrete framework for engineering context that agents can reliably act on.

    ---

    ## 2. Key Questions Answered in This Episode

    - **What is an industrial AI agent, and how does it differ from a chatbot or general-purpose LLM?**

    - **Why does giving an AI agent more context actually reduce its performance?**

    - **What is context engineering, and why is it replacing prompt engineering for agentic AI?**

    - **What are the three core techniques for managing an AI agent's context window in manufacturing?**

    - **How should you decide when to use skills vs. MCP servers vs. sub-agents?**

    - **Why should deterministic code handle calculations instead of letting the AI agent compute them?**

    - **How do knowledge graphs and ontologies enable enterprise-scale industrial AI?**

    ---

    ## 3. Episode Highlights with Timestamps
    **[00:33]** - **Meet Zach Etier** — Zach introduces his role at Flow Software, his background at Northrop Grumman, and how he leads development of the Atlas knowledge modeling tool.

    **[06:04]** - **Defining an Industrial AI Agent** — A clear breakdown: an agent is an LLM that can call tools in a loop, acting as an orchestrator that reasons on context to decide which tool to invoke next.

    **[09:54]** - **Shift-Handover Report Demo** — Zach describes a concrete use case where an agent passively generates a shift-change report by pulling data from a historian, operator notes, the UNS, and MES/PLC data.

    **[12:58]** - **From Prompt Engineering to Context Engineering** — Why reasoning models and tool-calling changed the game: prompts are static, but agent context is dynamic.

    **[16:37]** - **The Softmax Dilution Problem** — How adding too many tokens dilutes relevant information through the normalization process, causing hallucinations and missed instructions.

    **[17:17]** - **Lost in the Middle** — The Stanford "needle in a haystack" study showing agents recall content at the start and end of the context window but lose information in the middle.

    **[19:50]** - **Context vs. Knowledge** — Context is curated knowledge packaged for a specific task — you don't read the entire equipment manual, only the sections relevant to your troubleshooting task.

    **[25:00]** - **Four Categories of Industrial Context** — Domain knowledge (equipment manuals, SOPs), data context (historian, UNS, MES), human-generated context (operator notes, Excel sheets), and behavioral context (skills, guardrails).

    **[30:00]** - **Technique 1: Persisting Context** — Writing context to the file system so agents (or sub-agents) can read curated information in future sessions.

    **[31:27]** - **Technique 2: Summarization/Compaction** — Condensing large context into essential insights; why auto-compaction sometimes breaks agent behavior.

    **[33:56]** - **Technique 3: Isolation via Sub-Agents** — Spinning up agents with clean context windows to offload research and prevent bloat in the main agent.

    **[36:05]** - **Deterministic Tools for Calculations** — Why OEE and other calculations should be handled by validated scripts exposed as tools, not computed by the probabilistic model.

    **[54:21]** - **Knowledge Graphs for Enterprise-Scale AI** — How ontologies provide a "map of meaning" that helps agents navigate large instance models across multi-site enterprises.

    **[1:02:00]** - **Federated Knowledge Graphs** — Zach's argument for domain experts owning their own models, with governance at the integration interfaces between domains.
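    Techniques 1 and 2 (persisting and compaction) can be sketched in a few lines. This is a hedged illustration, not Flow Software's implementation — real compaction would typically ask the model itself to summarize, where here a crude keyword filter stands in:

    ```python
    import json
    import pathlib
    import tempfile

    CONTEXT_DIR = pathlib.Path(tempfile.mkdtemp()) / "agent_context"
    CONTEXT_DIR.mkdir(parents=True)

    def persist(key: str, content: dict) -> None:
        """Technique 1: write curated context to the file system so a
        future session (or a sub-agent) can read it back instead of
        re-deriving it inside a bloated context window."""
        (CONTEXT_DIR / f"{key}.json").write_text(json.dumps(content))

    def recall(key: str) -> dict:
        return json.loads((CONTEXT_DIR / f"{key}.json").read_text())

    def compact(notes: list[str], keep: int = 2) -> str:
        """Technique 2 (crude stand-in): keep only the flagged lines.
        A real agent would have the LLM summarize instead."""
        important = [n for n in notes if n.startswith("!")]
        return " | ".join(important[:keep])

    shift_notes = ["!pump P-101 cavitating", "coffee machine fixed",
                   "!batch 7 on QA hold", "forklift recharged"]
    persist("shift_handover", {"summary": compact(shift_notes)})
    print(recall("shift_handover")["summary"])
    ```

    The shift-handover agent from the demo would persist its distilled summary this way, so the next shift's session starts from two flagged facts rather than four shifts' worth of raw notes.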

    ---

    ## 4. Key Takeaways

    - **Context engineering is the new core competency for industrial AI.** It is the practice of populating the context window with only highly relevant, curated information — not dumping in everything you have. The softmax normalization in transformer attention blocks dilutes important tokens when too much irrelevant information is present.

    - **Agents recall the start and end of context, not the middle.** The "Lost in the Middle" research confirms that instructions and critical data placed in the middle of a large context window are likely to be ignored, leading to hallucinations and forgotten instructions.

    - **Use three techniques to manage context: persist, summarize, and isolate.** Persist important context to files for future sessions. Summarize large documents down to essential insights. Isolate research and noisy tasks into sub-agents with clean context windows so the main agent stays focused.

    - **Deterministic code should handle deterministic tasks.** Never let a probabilistic model perform calculations. Write validated scripts for things like OEE, expose them as tools, and let the agent orchestrate when to call them.

    - **Skills, MCP, and sub-agents solve different problems.** Skills are modular, composable instruction sets with progressive disclosure. MCP servers supply vendor-defined tool context but can bloat the context window. Sub-agents provide isolated context windows for offloading research or preventing context poisoning.

    - **Knowledge graphs are the data architecture for scalable industrial AI.** An ontology (model of meaning) paired with an instance model gives agents a navigational map of the domain, enabling them to reason across large enterprises rather than drowning in flat instance data.

    - **Treat prompts, skills, and agent definitions as code.** Source-control them, evaluate them, iterate on them. The organizations building this muscle now are developing an expertise gap that will compound over the coming years.
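    The "deterministic tools" takeaway can be illustrated with a minimal sketch. The `OEE_TOOL` registration dict is hypothetical — the actual registration API depends on the agent framework in use (MCP server, skills, etc.):

    ```python
    def oee(availability: float, performance: float, quality: float) -> float:
        """Validated, deterministic OEE calculation. The agent never does
        this arithmetic itself; it only decides when to call the tool."""
        for v in (availability, performance, quality):
            if not 0.0 <= v <= 1.0:
                raise ValueError("OEE factors must be fractions in [0, 1]")
        return availability * performance * quality

    # Hypothetical tool schema an agent framework might register; shape
    # varies by framework, the point is the validated fn behind it.
    OEE_TOOL = {
        "name": "calculate_oee",
        "description": "Compute OEE from availability, performance, quality (0-1).",
        "fn": oee,
    }

    print(f"{OEE_TOOL['fn'](0.90, 0.95, 0.99):.4f}")
    ```

    The function is testable with ordinary unit tests, which is precisely the guarantee a probabilistic model cannot give you.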

    ---

    ## 5. Notable Quotes

    > "Context is curated knowledge that is packaged for a specific task." — Zach Etier, VP of Architecture at Flow Software

    > "If you have something that can be done with a deterministic tool, it should be done with a deterministic tool. Don't use an agent to do a calculation." — Zach Etier, VP of Architecture at Flow Software

    > "You get the reliability by managing the context window." — Zach Etier, VP of Architecture at Flow Software

    > "Agents can't do on-the-job training. The context needs to be digitized and packaged in a way the agent can reason on." — Zach Etier, VP of Architecture at Flow Software

    > "My hope is agents being this passive thing happening in the background — augmenting humans rather than becoming the team." — Zach Etier, VP of Architecture at Flow Software

    ---

    ## 6. Key Concepts Explained
    **Context Engineering**

    Definition: The practice of curating, managing, and optimizing the information placed into an AI agent's context window so that it contains only highly relevant content with no bloat.

    Why it matters: It is the primary lever for improving agent reliability and reducing hallucinations in industrial settings.

    Episode context: Zach contrasted it with prompt engineering, explaining that reasoning models and tool-calling made agent context dynamic rather than static, creating the need for deliberate context management.

    **Context Rot (Softmax Dilution)**

    Definition: The degradation of an agent's ability to reason on relevant information as more tokens are added to the context window, caused by the softmax normalization distributing attention weight across all tokens.

    Why it matters: It explains why "more data" often leads to worse agent performance, which is counter-intuitive for many engineering teams.

    Episode context: Zach explained this as the core reason the industry shifted from "give the agent everything" to deliberate context engineering.
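    The dilution effect has a simple numerical illustration. The attention scores below are made up (real attention uses learned query-key products), but the mechanism is the same: a fixed score for one relevant token earns a collapsing softmax weight as irrelevant tokens pile in.

    ```python
    import math

    def softmax(scores):
        """Normalize scores into weights that sum to 1."""
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    # One "relevant" token with a strong score, padded with
    # increasingly many "irrelevant" tokens at a weaker score.
    RELEVANT, NOISE = 5.0, 2.0
    for n_noise in (10, 100, 1000):
        weights = softmax([RELEVANT] + [NOISE] * n_noise)
        print(f"{n_noise:>5} noise tokens -> relevant weight {weights[0]:.3f}")
    ```

    The relevant token's score never changes, yet its share of attention falls from roughly two thirds to about two percent — which is the "context rot" the episode describes.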

    **MCP (Model Context Protocol)**

    Definition: A standard protocol that allows AI agents to connect to external tool servers, where the server supplies tool descriptions and context so the agent knows how to call tools it was never trained on.

    Why it matters: It enables agents to interact with industrial software like historians, MES, and ERP systems through a standardized interface.

    Episode context: Zach compared MCP to skills, noting that MCP loads all tool descriptions at once (potential bloat) while the vendor controls the context, whereas skills give users control with progressive disclosure.

    **Knowledge Graph (Ontology + Instance Model)**

    Definition: A data structure combining an ontology (a model of meaning that describes domain concepts and relationships) with an instance model (actual data and values), linked by explicit relationships.

    Why it matters: It provides AI agents with a navigational map of the domain, enabling reasoning across large, complex enterprise data landscapes.

    Episode context: Zach described knowledge graphs as the future data architecture for industrial AI, and explained Flow Software's Atlas product as a knowledge modeling tool built on this approach.
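    A minimal sketch of the ontology-plus-instance idea. The class names, edges, and instance data are invented for illustration and are not Atlas's actual model — the point is that an agent can traverse from a concrete instance up through typed relationships instead of seeing flat values:

    ```python
    # Ontology: the "map of meaning" -- which concepts relate and how.
    ontology = {
        "Pump": {"part_of": "ProductionLine"},
        "ProductionLine": {"located_in": "Site"},
    }

    # Instance model: concrete equipment with actual relationship values.
    instances = {
        "P-101": {"is_a": "Pump", "part_of": "Line-3"},
        "Line-3": {"is_a": "ProductionLine", "located_in": "Austin"},
    }

    def site_of(instance_id: str) -> str:
        """Walk part_of edges until a located_in edge is found,
        following the path the ontology says must exist."""
        node = instances[instance_id]
        while "located_in" not in node:
            node = instances[node["part_of"]]
        return node["located_in"]

    print(site_of("P-101"))  # Austin
    ```

    With only the instance model, "P-101" is just an identifier; the ontology tells the agent that a Pump belongs to a ProductionLine which sits at a Site, making the traversal meaningful.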

    **Isolation (Sub-Agent Pattern)**

    Definition: A context engineering technique where a sub-agent operates in its own context window, separate from the main agent, to perform research or noisy tasks without contaminating the main context.

    Why it matters: It prevents context poisoning and bloat in the primary agent, enabling multi-agent coordination for complex industrial workflows.

    Episode context: Zach used the example of a research sub-agent that reads many files, filters for relevance, and returns only the essential findings to the main agent.
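    A toy sketch of the isolation pattern, with a keyword filter standing in for a real LLM sub-agent (all function names and documents here are invented for illustration):

    ```python
    # The "sub-agent" gets its own fresh context, does the noisy reading,
    # and hands back only the distilled finding -- the main agent's
    # context never sees the raw documents.
    def research_subagent(question: str, documents: list[str]) -> str:
        """Stand-in for an LLM sub-agent with a clean context window:
        reads everything, returns only what matters for the question."""
        relevant = [d for d in documents if question.lower() in d.lower()]
        return relevant[0] if relevant else "no relevant findings"

    def main_agent(task: str, documents: list[str]) -> str:
        # Main context holds only the task plus the sub-agent's one answer
        finding = research_subagent("vibration", documents)
        return f"{task}: {finding}"

    docs = ["Routine greasing completed on conveyor C2.",
            "Vibration on pump P-101 exceeded 7 mm/s during night shift.",
            "Canteen menu updated."]
    print(main_agent("Shift-handover note", docs))
    ```

    However many documents the sub-agent reads, the main agent's context grows by one line — which is the whole value of the pattern.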

    ---

    ## 7. Resources & References
    **Tools & Technologies:** Flow Software Timebase (Historian, Explorer, Collector), Timebase Atlas (knowledge modeling tool), MCP servers, Claude Code, OPC UA, Unified Namespace (UNS)

    **Concepts & Frameworks:** Context engineering, Lost in the Middle (Stanford paper), softmax normalization, needle-in-a-haystack testing, context rot, context poisoning, context confusion, research-plan-implement workflow, progressive disclosure, federated knowledge graphs

    **Companies & Organizations:** Flow Software, Anthropic, Northrop Grumman, Google (Gemini), OpenAI

    **Standards & Architecture Patterns:** ISA-95, knowledge graphs, ontologies, Unified Namespace, MCP (Model Context Protocol)

    ---

    ## 8. Guest Bio & Links
    Zach Etier is the VP of Architecture at Flow Software, where he leads the development of knowledge graph technology (Timebase Atlas) and AI integration across the product portfolio. Before Flow, he spent 10 years at Northrop Grumman, starting in additive manufacturing operations and moving to the CIO office to architect digital manufacturing and Industry 4.0 service lines. He holds degrees in mechanical engineering, aerospace engineering, and computer science.

    **Company:** Flow Software

    **GitHub Repo:** (Linked in episode description — context engineering workshop materials)

    **Social:** (Linked in episode description)

    ---

    ## 9. FAQ
    **Q: What is context engineering in manufacturing AI?**

    A: Context engineering is the practice of curating and managing the information placed into an AI agent's context window so it contains only the data relevant to a specific task. In manufacturing, this means selectively providing historian data, equipment manuals, operator notes, and MES information rather than flooding the agent with everything available. The goal is to maximize reasoning quality while minimizing hallucinations caused by token dilution.

    **Q: Why do industrial AI agents hallucinate, and how can you reduce it?**

    A: Industrial AI agents hallucinate primarily because of context rot — when too many tokens are loaded into the context window, the softmax normalization process dilutes the attention paid to relevant information. This is compounded by the "lost in the middle" effect, where agents fail to recall content positioned in the center of long inputs. Reducing hallucinations requires managing the context window through techniques like summarization, sub-agent isolation, and persisting only curated context.

    **Q: What is the difference between MCP servers, skills, and sub-agents for industrial AI?**

    A: MCP servers provide vendor-defined tool descriptions and context through a standard protocol, loading all tools at once. Skills are user-defined instruction sets with progressive disclosure — only metadata loads initially, and full content loads on demand. Sub-agents are separate agents with isolated context windows, used to offload research or noisy tasks. MCP is best when the vendor knows best how to use their tools; skills offer granular user control; sub-agents manage context isolation.

    **Q: How do knowledge graphs help AI agents reason about factory data?**

    A: Knowledge graphs combine an ontology (a conceptual model of domain relationships) with an instance model (actual operational data). The ontology provides agents with a navigational map that explains how equipment, production lines, and processes relate to each other. Without this, agents see only flat data values and lose context in large enterprises. Knowledge graphs enable agents to traverse from a specific equipment instance up to conceptual relationships and back, dramatically improving reasoning across complex multi-site operations.

    **Q: Should AI agents perform OEE and other manufacturing calculations?**

    A: No. Manufacturing calculations like OEE should be handled by deterministic scripts that are validated through traditional software testing and then exposed to the agent as callable tools. AI agents are probabilistic and can make calculation errors. The agent's role is orchestration — deciding when to call the calculation tool and how to present the results — not performing the arithmetic itself.

    **Q: What is the best data architecture approach for scaling industrial AI across an enterprise?**

    A: A federated knowledge graph approach is recommended, where domain experts define knowledge models for their specific areas (manufacturing, quality, engineering) and governance is applied at the integration interfaces between domains. This aligns with how organizations actually operate, since no single person understands the full enterprise. Federation avoids the impractical requirement of building one monolithic ontology upfront while still enabling cross-domain agent reasoning.


About Industry40.tv

Each episode of the Industry40.tv Podcast treats you to an in-depth interview with leading AI practitioners, exploring the application of artificial intelligence in manufacturing and offering practical guidance for successful implementation.