Industry40.tv

Kudzai Manditereza

96 episodes

  • Industry40.tv

    Scaling Industrial Intelligence with I3X Common API: Matthew Parris - Director of Advanced Manufacturing, GE Appliances

    30.04.2026 | 1 hr 4 min.
    # AI in Manufacturing Podcast — Show Notes

    ## Episode: Scaling Industrial Intelligence with the I3X Common API

     

    **Podcast Name:** AI in Manufacturing Podcast (Industry40.tv)

    **Episode Title:** Scaling Industrial Intelligence with the I3X Common API

    **Guest Name:** Matthew Parris

    **Guest Title/Role:** Director of Quality Test Systems, GE Appliances; Leading Contributor to the I3X Specification

    **Host:** Kudzai Manditereza

     

    ---

     

    ## 1. Episode Summary

     

    This episode explores how the Industrial Information Interoperability Exchange (I3X) common API is poised to become the universal interface for accessing manufacturing data across software platforms. Matthew Parris, Director of Quality Test Systems at GE Appliances and a leading contributor to the I3X specification, explains why the manufacturing industry has lacked a standardized way to retrieve information from Level 3 and Level 4 software systems — and how I3X solves this by leveraging simple, proven IT technologies: HTTP and JSON. Parris draws a compelling analogy between I3X and the early web browser revolution, comparing the I3X Explorer tool to Netscape's role in breaking down walled-garden internet portals. The conversation covers how I3X differs from OPC UA and MQTT, why a vanilla MQTT broker is insufficient for a true Unified Namespace, and how standardized interfaces accelerate AI deployment in manufacturing. Listeners will gain a clear understanding of where I3X fits in modern industrial architectures and why now is the time to get involved with the specification while it's in beta.

     

    ---

     

    ## 2. Key Questions Answered in This Episode

     

    - What is I3X and what problem does it solve for manufacturers?

    - How is I3X different from OPC UA and MQTT?

    - Why is an MQTT broker alone not sufficient for a Unified Namespace (UNS)?

    - How does I3X enable manufacturers to scale from data visibility to operational AI?

    - Where does I3X fit in a modern industrial architecture alongside UNS and MQTT brokers?

    - Why does I3X support OPC UA Part 5 information models, and how should manufacturers think about data typing?

    - How will I3X achieve vendor adoption without a chicken-and-egg problem?

     

    ---

     

    ## 3. Episode Highlights with Timestamps

     

    **[0:00]** — **Introduction & Guest Background** — Matthew Parris introduces his role at GE Appliances, where his team functions as an internal OEM, system integrator, and end user simultaneously.

     

    **[3:12]** — **The Origin of I3X** — Parris describes the frustration of learning unique REST APIs for every software product and the "data access model" stack that manufacturers must navigate.

     

    **[8:55]** — **The Shifting Risk in Manufacturing Software** — Discussion on how the real risk has moved from picking the right vendor to maintaining the ability to adapt as technology evolves.

     

    **[10:23]** — **Monoliths Breaking Apart** — Why specialization and the proliferation of vendors demand a stable architectural foundation with standardized interfaces.

     

    **[14:37]** — **Claude and AI as Software Developers** — How AI coding assistants make standardized interfaces even more critical — it's easier to tell Claude to build against one standard than 20 proprietary APIs.

     

    **[16:21]** — **From Dashboards to AI Intelligence** — Parris explains the journey from visibility (polling/dashboards) to subscription-based intelligence and why AI agents need structured, typed, and related data.

     

    **[24:35]** — **What I3X Actually Is** — A concise breakdown: HTTP + JSON + a handful of standard methods (explore, read current value, read historical values, subscribe). The 80/20 rule applied to data access.

     

    **[34:49]** — **I3X vs. OPC UA** — Why OPC UA's own cloud reference architecture still just says "HTTP REST," and how I3X fills that gap with a defined, common contract.

     

    **[41:50]** — **I3X and the Unified Namespace** — Parris explains why an MQTT broker is "woefully insufficient" for a UNS and how I3X wraps around information sources including brokers like HiveMQ Pulse.

     

    **[50:23]** — **OPC UA Information Models in I3X** — How I3X is silent on which types you must use but enables you to leverage OPC UA companion specs, custom namespaces, or both.

     

    **[58:22]** — **Adoption Strategy & Vendor Momentum** — Why I3X avoids the chicken-and-egg problem: software-level deployment, minimal lift for vendors, and early adoption from Ignition, Microsoft Azure, AWS, HighByte, FlowSoftware, and others.

     

    ---

     

    ## 4. Key Takeaways

     

    - **I3X is the "web browser" for manufacturing data:** Just as Netscape eliminated walled-garden internet portals, I3X provides a single, standard interface to connect to any manufacturing software and retrieve information — using nothing more than HTTP and JSON.

     

    - **REST alone is not enough:** Having a REST API does not guarantee interoperability. Without standardized methods, encoding (JSON), and capability definitions, every software product's API is still a bespoke integration project.

     

    - **An MQTT broker is not a UNS:** A vanilla MQTT broker lacks the ability to serve current values on demand, enforce data governance, advertise data types, or represent multi-dimensional relationships. Products like HiveMQ Pulse exist precisely because a broker alone is insufficient.

     

    - **AI agents need structured, typed, related data:** Moving from dashboards to operational AI requires more than raw data visibility. AI agents are like new employees — they need onboarding via object types, relationships, and knowledge graphs to reason effectively.

     

    - **I3X is deliberately minimal and opinionated:** The specification covers roughly 20% of possible capabilities but addresses 80% of real-world data access use cases: explore what's available, read current values, read historical values, and subscribe to changes.

     

    - **The specification is in beta — now is the time to contribute:** I3X moved from alpha to beta ahead of Hannover Messe 2025. Manufacturers and vendors can influence the spec by engaging on GitHub and providing feedback on missing capabilities.

     

    - **Vendor adoption is accelerating organically:** Because I3X requires minimal implementation effort for vendors already running HTTP/JSON endpoints, companies like Microsoft, HighByte, FlowSoftware, Inductive Automation (Ignition), and AWS are already building or demoing I3X support.

     

    ---

     

    ## 5. Notable Quotes

     

    > "I3X Explorer is the equivalent of that Netscape experience — eliminating the proprietary interfaces and letting you access any software's information in a standard way." — Matthew Parris, Director of Quality Test Systems at GE Appliances

     

    > "An MQTT broker is woefully insufficient to achieve the goals of what a UNS should be." — Matthew Parris, Director of Quality Test Systems at GE Appliances

     

    > "Think of AI agents as new employees, and what you want to do is reduce as short as possible what that onboarding process looks like." — Matthew Parris, Director of Quality Test Systems at GE Appliances

     

    > "It's embarrassingly simple — HTTP and JSON. The technology was probably hardened 10 years ago. We're 10 years too late having a standard definition for manufacturing." — Matthew Parris, Director of Quality Test Systems at GE Appliances

     

    > "Chicken and egg is just an excuse thrown around when something is struggling to take adoption. If you're not solving the problem the right way, that's the real issue." — Matthew Parris, Director of Quality Test Systems at GE Appliances

     

    ---

     

    ## 6. Key Concepts Explained

     

    ### **I3X (Industrial Information Interoperability Exchange)**

    **Definition:** I3X is a lightweight, common API specification built on HTTP and JSON that standardizes how applications query, read, and subscribe to data from Level 3/Level 4 manufacturing software systems.

    **Why it matters:** It eliminates the need for manufacturers to learn and integrate against dozens of proprietary REST APIs, dramatically reducing the time and cost of software interoperability.

    **Episode context:** Parris described I3X as the "web browser" equivalent for manufacturing — a universal client interface that works the same regardless of which software platform serves the data.
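
A minimal sketch of what a client of such a common API might look like. The four verbs mirror the episode's description (explore, read current value, read historical values, subscribe), but the endpoint paths and payload shape below are invented for illustration — the real routes are defined by the I3X specification itself:

```python
import json

class I3XClientSketch:
    """Illustrative client for an I3X-style common API over HTTP + JSON.

    Endpoint paths here are hypothetical, NOT taken from the I3X spec;
    they only show how one contract could serve every vendor's data.
    """

    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def explore_url(self, node: str = "") -> str:
        # Browse what information the server exposes.
        return f"{self.base_url}/explore/{node}".rstrip("/")

    def read_current_url(self, element: str) -> str:
        # Read the latest value of one element.
        return f"{self.base_url}/elements/{element}/value"

    def read_history_url(self, element: str, start: str, end: str) -> str:
        # Read historical values over a time window.
        return f"{self.base_url}/elements/{element}/history?start={start}&end={end}"

    def subscription_request(self, elements: list) -> str:
        # JSON body asking the server to push changes for these elements.
        return json.dumps({"subscribe": elements})

client = I3XClientSketch("https://plant.example.com/i3x")
print(client.read_current_url("line1/oven/temperature"))
```

The point of the sketch: once every vendor answers these same four calls, the client code above never changes, whichever platform sits behind the base URL.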

     

    ### **Unified Namespace (UNS)**

    **Definition:** A UNS is an architectural pattern that provides a single, unified view of all operational and business data across an enterprise, organized with contextual relationships and governed data types.

    **Why it matters:** A true UNS enables cross-functional data access, AI reasoning, and enterprise-wide analytics — but is frequently conflated with simply deploying an MQTT broker.

    **Episode context:** Parris argued that a vanilla MQTT broker is only the first step toward a UNS, lacking data governance, type definitions, multi-dimensional relationships, and on-demand access to current values.
  • Industry40.tv

    Optimizing AI Inferencing for Agentic Operations in Manufacturing: Calvin Cooper - Co-Founder & COO, Neurometric AI

    22.04.2026 | 39 min.
    # AI in Manufacturing Podcast: Episode Show Notes

     

    ## Episode: Optimizing AI Inference for Agentic Operations in Manufacturing

     

    **Podcast Name:** AI in Manufacturing Podcast (Industry40.tv)

    **Episode Title:** Optimizing AI Inference for Agentic Operations in Manufacturing

    **Guest:** Kelvin Cooper, Co-Founder & CEO, Neurometric.ai

    **Host:** Kudzai Manditereza

    ---

     

    ## 1. Episode Summary

     

    This episode explores why manufacturing companies struggle to scale AI from pilot to production—and how inference orchestration and small language models (SLMs) offer a practical path forward. Kelvin Cooper, Co-Founder and CEO of Neurometric.ai, joins host Kudzai Manditereza to break down why routing all AI tasks through a single frontier model becomes a cost and reliability liability at scale. Cooper draws on his background in venture capital, private equity AI rollups at Pilot Wave Holdings, and AI policy research at the Milken Institute to argue that the future of industrial AI is not one model that knows everything, but a coordinated system of specialized models that each know their job. The conversation covers Neurometric's AI maturity framework, real customer results showing 10x cost and latency improvements, the concept of catastrophic forgetting, and why manufacturing leaders need to adopt a startup execution mindset rather than over-analyzing use cases. Leaders seeking to cut AI inference costs and accelerate deployment will find actionable strategies throughout.

     

    ---

     

    ## 2. Key Questions Answered in This Episode

     

    - Why do 95% of AI proof-of-concepts in manufacturing never make it to production?

    - How should manufacturers select their first AI use case instead of getting stuck in analysis paralysis?

    - What is inference orchestration and why does it matter for scaling AI in manufacturing?

    - Why is relying on a single large language model a liability for industrial AI at scale?

    - What are small language models (SLMs) and how do they deliver faster, cheaper, and more accurate AI?

    - What is catastrophic forgetting and how does it affect AI deployments in manufacturing?

    - How can manufacturers avoid vendor lock-in when building AI systems?

     

    ---

     

    ## 3. Episode Highlights with Timestamps

     

    - **[00:00]** — **Introduction** — Host Kudzai Manditereza introduces the topic of optimizing AI inference for agentic manufacturing operations and welcomes Kelvin Cooper.

    - **[00:36]** — **Kelvin Cooper's Background** — Cooper describes Neurometric.ai's mission to "make intelligence essentially free," his role at Pilot Wave Holdings, and his AI policy work at the Milken Institute.

    - **[03:32]** — **The Pilot-to-Production Gap** — Discussion on why the vast majority of AI proof-of-concepts fail to reach production and what the startup world can teach manufacturers.

    - **[06:52]** — **The Flywheel, Not the Pilot** — Cooper argues that companies mistakenly think the pilot is the product, when what they should be building is a rapid feedback loop of shipping, learning, and iterating.

    - **[08:01]** — **Selecting Your First AI Use Case** — Advice on why "just pick and execute" often beats months of use case analysis, with examples of low-hanging fruit across white-collar and shop-floor workflows.

    - **[11:31]** — **Why One Frontier Model Doesn't Scale** — Cooper explains how relying on a single LLM becomes a cost and latency bottleneck, citing AT&T's public shift to orchestration and multi-agent stacks.

    - **[14:44]** — **Intelligence vs. Reliability** — Why reliability—not raw intelligence—determines whether AI is allowed to scale in production environments.

    - **[16:27]** — **Task-Specific SLMs and Fine-Tuning** — How specialized small language models deliver faster, cheaper, and more accurate results through fine-tuning and production data feedback loops.

    - **[18:13]** — **Neurometric's AI Maturity Framework** — Walk-through of how organizations progress from "get something to work" through cost optimization to full AI system orchestration.

    - **[20:32]** — **Catastrophic Forgetting Explained** — Cooper defines catastrophic forgetting and contextualizes it for manufacturing leaders.

    - **[24:16]** — **The Future: Coordinated Model Teams** — A vision of AI systems that automatically select the right model for each task, abstracting away vendor choice entirely.

    - **[28:12]** — **Neurometric Platform Overview** — Details on the SLM Marketplace, model analysis dashboards, and the self-improving system roadmap.

    - **[33:33]** — **Prediction: The Factory of the Future** — Cooper's forecast on Jevons paradox, nearshoring, and why competing on technology and automation—not labor—defines the next era of manufacturing.

     

    ---

     

    ## 4. Key Takeaways

     

    - **Build the flywheel, not the pilot:** The real KPI for early AI efforts isn't proving a specific use case—it's building a team that can ship, learn, and iterate quickly. The feedback loop is the product.

    - **Just pick and execute:** Spending three months analyzing use cases costs more in lost learning than picking an imperfect starting point and iterating. Low-hanging fruit exists across both shop-floor and back-office workflows.

    - **One frontier model is a scaling liability:** Routing all tasks through a single large language model creates unsustainable cost and latency at scale. AT&T cut costs by 90% by shifting to orchestration with task-specific models.

    - **Small language models deliver outsized results:** Fine-tuned SLMs can be faster, cheaper, and more accurate than general-purpose LLMs for repetitive, well-defined tasks—because they don't need to know world history to handle a purchase order.

    - **Avoid vendor lock-in from day one:** Build AI systems with the assumption that you'll need to swap models. Abstraction layers let you shift from GPT-4o to Llama Maverick and see 10x cost and 4x latency improvements.

    - **Reliability beats intelligence for production AI:** Models that are impressively capable in demos may be non-deterministic and unreliable at scale. In manufacturing, consistent accuracy is the prerequisite for deployment.

    - **The time to act is now:** Billions in capital are flowing into AI rollups targeting industrial businesses. Companies that wait risk being acquired or outcompeted by those that moved first.

     

    ---

     

    ## 5. Notable Quotes

     

    > "Most doors are two-way doors. We tend to overestimate risk associated with getting something wrong, and underestimate the opportunity of getting something right." — Kelvin Cooper, CEO at Neurometric.ai

     

    > "The problem is that you think the pilot is what you're building. What you're actually building is a feedback loop." — Kelvin Cooper, CEO at Neurometric.ai

    > "Intelligence gets headlines, but reliability determines whether AI is allowed to scale." — Kelvin Cooper, CEO at Neurometric.ai

    > "You don't need to know world history to handle some repetitive tasks." — Kelvin Cooper, CEO at Neurometric.ai

    > "The future is now, just not evenly distributed." — Kelvin Cooper, CEO at Neurometric.ai

     

    ---

     

    ## 6. Key Concepts Explained

     

    **Inference Orchestration**

    Definition: Inference orchestration is the automated routing of AI tasks to the optimal model based on cost, latency, and accuracy requirements, rather than sending all queries to a single large language model.

    Why it matters: It enables manufacturers to scale AI deployments without prohibitive costs or performance bottlenecks.

    Episode context: Cooper describes how AT&T used orchestration to cut AI costs by 90% when scaling to 27 billion tokens per day, and positions Neurometric as an off-the-shelf solution for this capability.
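
A toy router illustrating the idea: send each task to the cheapest model that clears its quality and latency bar, instead of defaulting to a frontier model. Model names, prices, and scores below are invented for illustration, not Neurometric's actual catalog:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float   # USD, illustrative
    latency_ms: float
    capability: float           # rough 0..1 quality score

# Hypothetical catalog; numbers are made up.
CATALOG = [
    Model("frontier-llm", cost_per_1k_tokens=0.0100, latency_ms=1200, capability=0.98),
    Model("mid-llm",      cost_per_1k_tokens=0.0010, latency_ms=400,  capability=0.90),
    Model("task-slm",     cost_per_1k_tokens=0.0001, latency_ms=80,   capability=0.85),
]

def route(required_capability: float, max_latency_ms: float) -> Model:
    """Pick the cheapest model that meets the task's quality and latency bar."""
    eligible = [m for m in CATALOG
                if m.capability >= required_capability
                and m.latency_ms <= max_latency_ms]
    if not eligible:
        raise ValueError("no model satisfies the constraints")
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

# A repetitive extraction task tolerates a smaller model...
print(route(required_capability=0.80, max_latency_ms=500).name)   # task-slm
# ...while a hard reasoning task falls back to the frontier model.
print(route(required_capability=0.95, max_latency_ms=2000).name)  # frontier-llm
```

Even this crude rule captures why orchestration cuts cost: the bulk of production traffic is repetitive and lands on the cheap, fast model.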

     

    **Small Language Models (SLMs)**

    Definition: Small language models are compact, task-specific AI models with fewer parameters that are fine-tuned for narrow use cases, delivering faster and cheaper inference than general-purpose large language models.

    Why it matters: SLMs allow manufacturers to run AI at production scale without the cost and latency penalties of frontier models.

    Episode context: Cooper explains that Neurometric's SLM Marketplace lets users browse, download, and deploy task-specific models, with customers seeing 10x improvements in cost and latency.

     

    **Catastrophic Forgetting**

    Definition: Catastrophic forgetting occurs when an AI neural network learns new tasks and abruptly loses its ability to perform previously learned tasks.

    Why it matters: It's a fundamental challenge when trying to update or expand AI systems in production without degrading existing performance.

    Episode context: Cooper notes that while this is a known research challenge, billions of dollars in AI research are actively solving it, and manufacturing leaders should not let it become a reason for inaction.
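
The effect is easy to reproduce in miniature. This one-parameter model (entirely synthetic, not from the episode; real forgetting concerns deep networks, but the failure mode is the same) learns task A, then overwrites that knowledge while learning task B:

```python
# Toy catastrophic forgetting: y = w * x trained sequentially on two
# tasks with plain gradient descent on squared error.
def train(w, samples, lr=0.01, steps=200):
    for _ in range(steps):
        for x, y in samples:
            w -= lr * 2 * (w * x - y) * x   # gradient of (w*x - y)^2
    return w

task_a = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0]]   # task A: y = 2x
task_b = [(x, 5.0 * x) for x in [1.0, 2.0, 3.0]]   # task B: y = 5x

w = train(0.0, task_a)          # learns w ≈ 2
err_a_before = abs(w - 2.0)
w = train(w, task_b)            # learns w ≈ 5, erasing task A
err_a_after = abs(w - 2.0)
print(err_a_before < 0.01, err_a_after > 1.0)
```

With a single shared parameter, anything learned for task B necessarily destroys task A; large networks have more capacity, but naive sequential fine-tuning still drags shared weights toward the newest task.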
  • Industry40.tv

    How to Build AI Solutions That Actually Work on the Factory Floor: Renan Devillieres - Founder & CEO, OSS Ventures

    01.04.2026 | 43 min.
    **Podcast Name:** AI in Manufacturing Podcast 

    **Episode Title:** How to Build AI Solutions That Actually Work on the Factory Floor

    **Guest:** Renan De Villiers, Founder & CEO, OSS Ventures

    **Host:** Kudzai Manditereza

     

    ---

     

    ## 1. Episode Summary

     

    This episode explores why only 5% of factories currently operate like tech companies — and what it will take to reach 50% within a decade. Renan De Villiers, founder and CEO of OSS Ventures, a Paris- and Boston-based venture builder with 22 spun-out companies live in 3,800 factories worldwide, shares hard-won lessons from visiting over 900 manufacturing sites and deploying AI across 100+ factories in the past two years. Drawing on his background as a former McKinsey consultant, factory director, and tech startup founder, De Villiers explains why most manufacturing AI initiatives fail, how to industrialize the discovery process, and why designing the human experience of managing AI agents is the most underestimated challenge in scaling industrial AI. Listeners will learn the concrete frameworks OSS Ventures uses to validate problems before building, the "10x test" for deciding what to pursue, and why the factory of the future requires fewer but far better-paid people. This episode is essential for anyone leading AI adoption in manufacturing or building software products for the factory floor.

     

    ---

     

    ## 2. Key Questions Answered in This Episode

     

    - **What does a tech-enabled factory look like compared to a traditional factory?**

    - **Why do 85% of manufacturing AI projects fail, and how can you beat those odds?**

    - **How do you identify the right AI use cases on the factory floor?**

    - **What is the "10x test" for validating manufacturing AI opportunities?**

    - **Why is tribal knowledge the biggest hidden barrier to AI in manufacturing?**

    - **How do you scale an AI solution from one factory to hundreds?**

    - **Should AI be embedded into existing products or built as a new experience layer?**

     

    ---

     

    ## 3. Episode Highlights with Timestamps

     

    **[1:05]** — **Renan's Background** — From math student to McKinsey consultant to factory director to tech startup founder, and how that path led to creating OSS Ventures.

     

    **[3:58]** — **OSS Ventures by the Numbers** — 22 companies spun out, 3,800 factories served, 200,000 monthly users, and €41 million in combined portfolio revenue.

     

    **[4:43]** — **What a Tech-Enabled Factory Looks Like** — Why Tesla's Austin and Shanghai factories cost roughly the same to operate, and how 5 engineers at Xiaomi replace 15 at BMW.

     

    **[8:32]** — **The Skills Gap in Manufacturing Leadership** — Why your digitization leader must understand code, just as a factory director must be able to read a plan.

     

    **[12:36]** — **The Talent Attraction Myth** — Why manufacturing doesn't have a talent problem — it has a system problem that makes jobs low-leverage and low-pay.

     

    **[13:29]** — **The Historical Parallel to Early 20th Century Industrialization** — How AI is creating intellectual leverage the same way machines created physical leverage.

     

    **[16:35]** — **Why 85% of AI Projects Fail — and Four Key Insights** — Choose big problems, use GenAI to write deterministic code, extract tribal knowledge, and design the human-in-the-loop experience.

     

    **[24:52]** — **The OSS Ventures Validation Process** — The "10x test," the three-out-of-ten factory director rule, and why money is the only real signal of demand.

     

    **[29:30]** — **Spotting AI Opportunities on the Shop Floor** — Look for pockets of people bottlenecked with 35-megabyte Excel files.

     

    **[34:34]** — **Why Copilots on Legacy Software Are "Chocolate-Covered Broccoli"** — The case for building entirely new AI-native experiences instead of bolting AI onto 20-year-old interfaces.

     

    **[36:20]** — **Scaling from 1 to 600 Factories** — Why you need both insane product quality and military-grade deployment discipline.

     

    **[42:41]** — **Prediction: Manufacturing Wages Up 25% and 25% of MIT Grads Enter Manufacturing Within 5 Years.**

     

    ---

     

    ## 4. Key Takeaways

     

    - **Choose big problems, not small ones:** AI projects in production are expensive. OSS Ventures only pursues opportunities where the solution delivers a 10x improvement over the status quo — measured in hard numbers, not feelings. If the economics don't justify the investment, don't build.

     

    - **GenAI writes the code, but deterministic code runs in production:** Across OSS Ventures' last eight AI projects, generative AI was used to create the underlying code, but the deployed system runs deterministic, auditable logic. You're not "vibe coding" your way to manufacturing an airplane.

     

    - **30–40% of critical data lives in people's heads:** Enterprise systems and ERPs don't contain everything. In one sock factory, 850 rules governing R&D existed only as tribal knowledge. Extracting this knowledge requires being physically present on the shop floor.

     

    - **Design the experience of the AI agent manager:** The new manufacturing role is managing AI agents, not doing the manual work. This requires more design investment, not less. Every successful OSS deployment created an experience where the operator felt in control of the system.

     

    - **Validate with money, not compliments:** Before building anything, OSS Ventures pitches the concept to 10 factory directors with a pay-on-results model. If fewer than three commit, the project doesn't launch. People are nice — only financial commitment reveals real demand.

     

    - **Scale requires both product excellence and deployment discipline:** Premature scaling kills companies. First, build a product users love. Then deploy with a process so detailed it resembles a military operation — specifying exactly what data, training, and configuration happens on each day.

     

    - **Shared infrastructure is a right-to-play, not a nice-to-have:** Cybersecurity compliance, ERP connectivity, and standard data structures must be solved before scaling. OSS Ventures provides this as shared "tech bricks" across its portfolio so startups don't have to build it from scratch.

     

    ---

     

    ## 5. Notable Quotes

     

    > "Why the heck is your digitization guy someone who never wrote a line of code?" — Renan De Villiers, Founder & CEO, OSS Ventures

     

    > "I don't think you have a talent attraction problem. I think you have a system problem that makes it so that people are not well paid." — Renan De Villiers, Founder & CEO, OSS Ventures

     

    > "Slapping a copilot with a RAG on top of a program designed 20 years ago is not innovation — it's laziness." — Renan De Villiers, Founder & CEO, OSS Ventures

     

    > "You're not vibe coding your way to create an airplane." — Renan De Villiers, Founder & CEO, OSS Ventures

     

    > "I don't think people are against AI. I think people are against bad products." — Renan De Villiers, Founder & CEO, OSS Ventures

     

    ---

     

    ## 6. Key Concepts Explained

     

    **Venture Builder (Studio Model)**

    Definition: A venture builder is an organization that systematically identifies market opportunities, builds initial products with an in-house team, validates product-market fit, and then recruits external founders to lead each company as a separate entity.

    Why it matters: This model de-risks early-stage industrial software by absorbing the cost and uncertainty of discovery and initial development.

    Episode context: OSS Ventures has used this model to launch 30 projects, spin out 22 companies, and reach 3,800 factories in five years.

     

    **The 10x Test**

    Definition: A validation framework requiring that any proposed AI solution must deliver outcomes at least 10 times better than the current alternative — measured in time, cost, or quality — before development begins.

    Why it matters: It prevents teams from building incremental improvements that don't justify the cost and complexity of AI deployment.

    Episode context: De Villiers illustrated this with a sock R&D example: reducing development time from 4–6 months and $35K to one week and $2K.
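
A quick arithmetic check of that example shows both dimensions clear the 10x bar. The 5-month midpoint and the weeks-per-month conversion are our own rough assumptions, not figures from the episode:

```python
# Sock-R&D example: 4-6 months and $35K down to one week and $2K.
old_cost, new_cost = 35_000, 2_000
old_weeks, new_weeks = 5 * 4.33, 1   # midpoint of 4-6 months, ~4.33 weeks/month

cost_factor = old_cost / new_cost    # 17.5x
time_factor = old_weeks / new_weeks  # ~21.7x

print(f"cost improvement: {cost_factor:.1f}x")
print(f"time improvement: {time_factor:.1f}x")
print("passes 10x test:", min(cost_factor, time_factor) >= 10)
```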

     

    **Tribal Knowledge Extraction**

    Definition: The process of capturing undocumented rules, heuristics, and expertise that exist only in the minds of experienced factory workers and encoding them into AI systems.

    Why it matters: 30–40% of the data needed for manufacturing AI doesn't exist in any enterprise system; it lives only in people's heads and must be captured before AI can use it.
  • Industry40.tv

    Scaling Agentic AI Workflows in Manufacturing with Causal AI: Bernhard Kratzwald - Co Founder & CTO, EthonAI

    25.03.2026 | 55 min.
    ## Episode: Building and Scaling Agentic AI Workflows in Manufacturing

     

    **Podcast Name:** AI in Manufacturing Podcast 

    **Episode Title:** How to Build and Scale Agentic AI Workflows in Manufacturing

    **Guest:** Bernhard Kratzwald, Co-Founder & CTO at EthonAI

    **Host:** Kudzai Manditereza

    ---

     

    ## Episode Summary

     

    This episode explores how manufacturers can build and scale agentic AI workflows to achieve operational excellence across factories. Bernhard Kratzwald, Co-Founder and CTO at EthonAI, explains why traditional continuous improvement methods have reached their limits and how purpose-built industrial AI—grounded in process knowledge graphs and causal reasoning—unlocks the next wave of manufacturing optimization. Key insights include why deep data contextualization through knowledge graphs is essential for agentic AI (not just basic tag hierarchies), how causal AI differs from correlation-based analytics by making root cause findings actionable, and why a layered architecture of data infrastructure, specialized model layer, and application layer prevents hallucinated recommendations in safety-critical environments. Bernhard also shares real-world results, including a globally scaled deployment at Siemens that generated over $10 million in documented savings. Whether you're evaluating industrial AI platforms or architecting your data stack for agentic workflows, this episode provides a practical roadmap from data ingestion to autonomous process control.

    ---

     

    ## Key Questions Answered in This Episode

     

    - What is a process knowledge graph, and why is it essential for agentic AI in manufacturing?

    - How does causal AI differ from correlation-based analytics in industrial settings?

    - What architecture layers are needed to run agentic AI workflows reliably in manufacturing?

    - Why can't general-purpose LLMs like ChatGPT or Claude replace purpose-built industrial AI models?

    - How do you build a knowledge graph iteratively without delaying ROI?

    - What does a typical deployment timeline look like for industrial AI platforms?

    - How should manufacturers handle security and governance when connecting OT systems to cloud-based AI?

    ---

    ## Episode Highlights with Timestamps

     

    **[2:27]** – **Bernhard's Background & EthonAI Origin Story** — How a PhD in computer science and collaboration with Fortune 500 manufacturers like Siemens led to founding EthonAI, now approaching 100 employees with offices in Zurich and New York.

     

    **[4:24]** – **Why Traditional Methods Have Maxed Out** — Bernhard explains the "20 cents of every dollar goes to waste" principle and why classic automation and data science have hit diminishing returns, requiring agentic workflows and foundation models for the next improvement frontier.

     

    **[7:49]** – **What Deep Contextualization Really Means** — A detailed walkthrough of why basic UNS tag hierarchies aren't sufficient for agentic AI, using the example of tracing a batch rework problem across tanks, recipes, time series, and operator interventions.

     

    **[12:45]** – **Process Knowledge Graph Explained** — Bernhard defines ontologies and knowledge graph triples, showing how semantic meaning enables questions like "which five machines cost the most downtime today" versus simple tag queries.

     

    **[16:02]** – **Build the Graph First or Build the Application First?** — The chicken-and-egg debate on knowledge graph strategy, and why EthonAI chose to build the graph behind ROI-delivering applications rather than creating a monolithic model upfront.

     

    **[18:16]** – **Causal AI vs. Correlation Analytics** — The ice cream and shark attacks analogy applied to manufacturing: how causal models turn seasonal production correlations into actionable insights about cooling water temperature adjustments.

     

    **[21:28]** – **The Full Agentic AI Architecture Stack** — Bernhard outlines three layers: data infrastructure (connectivity + knowledge graph), model layer (purpose-built causal and inspection models), and application layer (agentic workflows or human interfaces).

     

    **[24:54]** – **Why General-Purpose LLMs Aren't Enough for Manufacturing** — Safety-critical environments require models that understand spec limits, user manuals, and process constraints—not just pattern-matched text generation.

     

    **[29:33]** – **Ethon AI Platform Walkthrough** — A modular enterprise platform that measures what's happening, understands why, suggests improvement actions, and enables autonomous process control through dynamic SOPs and centerline dashboards.

     

    **[37:19]** – **Causal AI's Medical Origins Applied to Manufacturing** — How treating a production process like a patient (healthy or sick) allows causal models to extract actionable knowledge from months of operator interventions and process adjustments.

     

    **[48:03]** – **Deployment Timeline and Forward Deployed Engineers** — Ethon's Palantir-inspired deployment model with on-site engineers, achieving first value consistently in under three months.

     

    **[51:17]** – **Case Studies: Siemens and Lindt & Sprüngli** — Globally scaled deployments with $10M+ documented savings at Siemens (published by the World Economic Forum) and significant waste reductions at Lindt & Sprüngli's chocolate production facilities.

     

    ---

     

    ## Key Takeaways

     

    - **Knowledge graphs are non-negotiable for agentic AI:** A unified namespace provides basic tag context, but agentic workflows require deep semantic relationships—connecting batches to recipes, tanks to flow paths, and time series to operator interventions. Without this ontology layer, AI agents cannot perform meaningful root cause investigation.

     

    - **Causal AI makes insights actionable, not just interesting:** Correlation analytics can tell you production runs better in winter, but causal AI identifies that lower feeding water temperature improves cooling behavior, giving operators a specific lever to pull in summer months. This distinction is critical for safety-critical environments where recommendations must be trustworthy.

     

    - **Purpose-built industrial models prevent hallucination in critical decisions:** By placing a specialized causal model layer between the data infrastructure and the agentic application layer, recommendations are grounded in verified causal relationships rather than LLM pattern matching. The agentic layer enriches these findings with SOPs and documentation but cannot fabricate the underlying analysis.

     

    - **Start with ROI-delivering applications, not infrastructure perfection:** Rather than building a complete knowledge graph before deploying AI, Ethon's approach builds the graph incrementally behind applications that deliver measurable value. Users often don't realize they're building a knowledge graph because they're simply modeling their data while getting returns.

     

    - **Change management is as important as the technology:** Operators and process engineers have solved problems for decades without data-driven tools. AI systems must explain their reasoning through causal chains, build trust incrementally, and integrate into existing workflows without adding friction—even one extra second per task multiplied across thousands of repetitions creates significant resistance.

     

    - **Security requires one-way data flow by design:** When connecting legacy OT systems (some 20-30 years old) to cloud AI, the architecture must ensure information flows only from factory to cloud, with no return path that could serve as an attack vector. Edge-deployable modules handle latency-sensitive tasks like optical inspection independently.

     

    - **Cross-factory intelligence is the next major value unlock:** Most manufacturers still analyze individual lines or factories in isolation. Connecting multiple factories to shared knowledge graph concepts enables cross-site learning—identifying why one line outperforms another and transferring those insights globally.

     

    ---

     

    ## Notable Quotes

     

    > "Every dollar you spend on manufacturing, 20 cents go to waste. That has been true 50 years ago, and it will be true probably 50 years in the future, because there's always 20% to get." — Bernard Kraswald, CTO at Ethon AI

     

    > "The insights you get cannot be hallucinated, because they're coming from this underlying model layer—from this causal model. The LLM agentic layer on top cannot fabricate that." — Bernard Kraswald, CTO at Ethon AI

     

    > "You're never done with building your knowledge graph, because there's always more knowledge you can distill out of it." — Bernard Kraswald, CTO at Ethon AI

     

    > "The only mistake you can make today is not doing anything. The best time to start was yesterday, and the second best time to start would be today." — Bernard Kraswald, CTO at Ethon AI

     

    > "Every AI system will make some mistakes. So here is my best, wholehearted suggestion, and this is why I believe it's true—and now you can click and triple down, follow the root cause links, and investigate everything." — Bernard Kraswald, CTO at Ethon AI

     

    ---

     

    ## Key Concepts Explained

     

    **Process Knowledge Graph**

    Definition: A semantic data model built on ontologies that assigns meaning to industrial data and defines how different data elements relate to each other—connecting machines, sensors, batches, recipes, and physical flows into a queryable graph structure using subject-predicate-object triples.
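    The triple structure described above can be sketched in a few lines. This is a hypothetical illustration, not code from the episode; all entity names (Tank-7, Batch-4711, etc.) are invented:

    ```python
    # A process knowledge graph as subject-predicate-object triples,
    # queryable by pattern matching. Entity names are illustrative only.
    triples = {
        ("Tank-7",     "feeds",       "Mixer-2"),
        ("Mixer-2",    "produced",    "Batch-4711"),
        ("Batch-4711", "used_recipe", "Recipe-C3"),
        ("Batch-4711", "reworked_by", "Operator-12"),
        ("Sensor-T7",  "measures",    "Tank-7"),
    }

    def query(subject=None, predicate=None, obj=None):
        """Return all triples matching the given pattern (None = wildcard)."""
        return [
            (s, p, o) for (s, p, o) in triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)
            and (obj is None or o == obj)
        ]

    # Trace a reworked batch back to its source tank in two hops:
    mixers = [s for s, _, _ in query(predicate="produced", obj="Batch-4711")]
    tanks = [s for m in mixers for s, _, _ in query(predicate="feeds", obj=m)]
    print(tanks)  # ['Tank-7']
    ```

    Even this toy version shows why triples matter: the batch-rework question from the [7:49] highlight becomes a chain of pattern matches across relationships, rather than a search through disconnected tags.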

  • Industry40.tv

    Why The Unified Namespace is The Essential Foundation for Industrial AI & Agentic Operations: Walker Reynolds - President, 4.0 Solutions

    17.03.2026 | 1 hr 2 min.
    ## Episode: The State of Industrial AI, Unified Namespace, and Knowledge Graphs After PROVE IT 2025

     

    **Podcast Name:** AI in Manufacturing Podcast 

    **Guest:** Walker Reynolds, President & Solutions Architect at 4.0 Solutions, Founder of the PROVE IT Conference

    **Host:** Kudzai Manditereza

    **Target Audience:** Manufacturing data leaders, IT/OT solution architects, and digital transformation professionals

     

    ---

     

    ## Episode Summary

     

    Walker Reynolds, President and Solutions Architect at 4.0 Solutions and founder of the PROVE IT conference, delivers an unfiltered assessment of where industrial AI actually stands in 2025. Drawing from conversations with over 1,000 attendees at this year's PROVE IT conference—70% of whom were end users working in manufacturing—Reynolds identifies three critical industry shifts: AI fatigue is setting in as vendors outpace market readiness, knowledge graphs have emerged as the essential technology for enabling agentic AI in manufacturing, and the gap between digitally mature and immature manufacturers is widening. The conversation covers why most manufacturers still aren't getting value from their unified namespace implementations, the five most practical AI applications seen at PROVE IT, and why autonomous agents are a mathematical impossibility given current LLM reliability. Reynolds closes with his complete recommended technology stack for manufacturers and a prediction that plant floors will see *more* people, not fewer—but they'll be analysts supervising AI agents rather than middle managers managing people.

     

    ---

     

    ## Key Questions Answered in This Episode

     

    - What is the current state of AI adoption in manufacturing in 2025?

    - Why are some manufacturers failing to get value from unified namespace implementations?

    - What role do knowledge graphs play in enabling agentic AI for manufacturing?

    - What are the most practical AI applications for manufacturers right now?

    - Can AI agents run autonomously in manufacturing operations?

    - What does the ideal industrial data architecture stack look like for a small to midsize manufacturer?

    - How does unified namespace serve as the backbone for agentic AI?

     

    ---

     

    ## Episode Highlights with Timestamps

     

    **[1:56]** — **Introduction and episode overview** — Kudzai sets the agenda: PROVE IT conference takeaways, unified namespace adoption status, agentic AI's role, and the ideal industrial data architecture.

     

    **[4:23]** — **Walker Reynolds' background** — From salt mines to tier-one automotive to founding 4.0 Solutions, IoT University, and the PROVE IT conference—plus why he always introduces himself as if no one knows who he is.

     

    **[8:36]** — **Three core observations from PROVE IT 2025** — AI fatigue is real, most end users still ask "where do I start?", and knowledge graphs emerged as the breakout technology everyone now understands they need.

     

    **[20:37]** — **Top five practical AI applications from PROVE IT** — WinCC OA and Tatsoft for AI-assisted development, Atanta Analytics' prompt-to-insights, Thread Cloud's knowledge graph-driven root cause analysis, and Maestro Hub's live module generation with Claude Code.

     

    **[29:08]** — **The knowledge gap in agentic AI adoption** — Reynolds draws an analogy to the leap from algebra to calculus, warning that not every organization has someone who can bridge the gap to agent-based architectures.

     

    **[35:04]** — **Why autonomous agents are a myth** — Current LLMs are 99.9% reliable at best—one error per 1,000 words—compared to a PLC's nine nines of reliability. Agents must be human-supervised.

     

    **[42:55]** — **Why manufacturers fail or succeed with unified namespace** — The differentiator is understanding UNS as the real-time current state of the business, not a historical transaction store.

     

    **[52:09]** — **UNS as the backbone for agentic AI** — How agents use the semantic structure of UNS to navigate operations and then retrieve deeper context via MCP tools.

     

    **[54:40]** — **Walker's complete recommended technology stack** — From Docker and Node-RED to HiveMQ, Litmus, Frameworks 10, Thread Cloud, and Snowflake—the full architecture laid out step by step.

     

    **[59:45]** — **Where AVEVA PI fits** — No need to rip and replace; limit PI to what it's good at (the historian role), and leverage AVEVA's more open Connect platform.

     

    **[1:02:11]** — **Prediction: More people on the plant floor, not fewer** — Fewer middle managers, more analysts supervising AI agents to optimize operations.

     

    ---

     

    ## Key Takeaways

     

    - **Knowledge graphs are the breakout technology of 2025:** Coming out of PROVE IT, even non-technical attendees understood that knowledge graphs—relational context between entities in an infrastructure—are essential for AI agents to navigate and reason through manufacturing systems. Manufacturers should prioritize building fluency in knowledge graph concepts now.

     

    - **AI fatigue is real, and vendors are outpacing market readiness:** Most end users are still asking "where do I start?" while vendors are shipping agentic AI features without clear problem-solution fit. The maturity gap between the most and least digitally advanced manufacturers is widening.

     

    - **Autonomous agents are not viable in manufacturing:** The most reliable LLMs achieve 99.9% accuracy—one error per 1,000 words—while PLCs operate at nine nines of reliability. Agents should be treated as force multipliers for human workers, not autonomous replacements.

     

    - **Unified namespace success depends on understanding what it is—and isn't:** UNS is the real-time current state of the business, semantically organized. Manufacturers who fail with UNS are trying to make it something it's not, such as a historical transaction store. It serves as the originating context that agents use before querying deeper systems.

     

    - **The most practical AI use cases are about building, not automating:** The top applications at PROVE IT involved using AI to accelerate development (natural language to code, dashboards, and workflows), not replacing human decision-making on the plant floor.

     

    - **Predefined workflows inside agents are a game changer:** Rather than letting agents create their own reasoning steps on the fly, giving engineers the ability to predefine part of an agent's workflow dramatically improves reliability and practical value.

     

    - **Start building AI fluency now, even if you haven't started your data journey:** Reynolds mandated his team use chatbots daily in January 2023—not because he knew how AI would be used, but to build fluency. Every manufacturer should be doing the same with knowledge graphs and agent concepts today.

     

    ---

     

    ## Notable Quotes

     

    > "The only person who believes agents can run autonomously are people who don't work with agents." — Walker Reynolds, President at 4.0 Solutions

     

    > "Think of agents as a force multiplier for your workforce, a way of unlocking the potential in people." — Walker Reynolds, President at 4.0 Solutions

     

    > "If you're not getting value out of unified namespace, then you're using it for something that it isn't." — Walker Reynolds, President at 4.0 Solutions

     

    > "We're going to see more people on the plant floor, not less. They're going to be analysts supervising AI to optimize operations." — Walker Reynolds, President at 4.0 Solutions

     

    > "Your homework this year is learn knowledge graphs, because you're going to need them." — Walker Reynolds, President at 4.0 Solutions

     

    ---

     

    ## Key Concepts Explained

     

    **Unified Namespace (UNS)**

    Definition: A unified namespace is a single, semantically organized source of truth that represents the real-time current state of a business—all events, data, and information models contextualized and normalized in one accessible structure.

    Why it matters: UNS serves as the foundational architecture for digital transformation and is the originating context layer that AI agents query to understand current operations before reasoning through deeper systems.

    Episode context: Reynolds emphasized that manufacturers failing with UNS misunderstand its purpose, treating it as a historical data store rather than a real-time state representation.
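    The "current state, not history" distinction can be made concrete with a minimal sketch. This is an invented illustration (topic names and payloads are assumptions, not from the episode), modeling a UNS as a topic tree that retains only the latest value per node:

    ```python
    # A unified namespace modeled as a topic tree holding only the
    # *current* state of each node. Topic names are illustrative only.
    uns = {}

    def publish(topic: str, payload: dict):
        """Retain the latest payload per topic -- a UNS represents current
        state, not a log of past transactions."""
        uns[topic] = payload

    def browse(prefix: str):
        """List topics under a prefix, the way an agent navigates the tree."""
        return sorted(t for t in uns if t.startswith(prefix))

    publish("acme/plant1/line3/filler/state", {"status": "running", "speed": 120})
    publish("acme/plant1/line3/filler/state", {"status": "blocked", "speed": 0})
    publish("acme/plant1/line3/capper/state", {"status": "running", "speed": 118})

    # Re-publishing overwrote the first payload: only the latest state survives.
    print(uns["acme/plant1/line3/filler/state"]["status"])  # blocked
    print(browse("acme/plant1/line3/"))
    ```

    Note how the second publish replaces the first rather than appending to it; manufacturers who expect the UNS to answer "what happened last Tuesday" are asking a historian's question of a state store.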

     

    **Knowledge Graphs**

    Definition: Knowledge graphs are data structures that represent the relationships between entities (nodes) in a system, providing relational context that enables navigation and reasoning across an infrastructure.

    Why it matters: AI agents require knowledge graphs to navigate up and down a business's infrastructure, moving from an objective at one layer to the specific data location where answers reside.

    Episode context: Reynolds identified knowledge graphs as the breakout technology from PROVE IT 2025, with Thread Cloud's root cause analysis demo receiving mid-presentation applause for demonstrating practical agent-driven analysis via knowledge graphs.
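    The "navigate up and down the infrastructure" idea can be sketched as a graph traversal from a high-level objective down to the data location where the answer lives. This is a hypothetical example (node names like Plant-Zurich and Press-A are invented, not from the episode):

    ```python
    # An agent "navigating down" a knowledge graph of plant infrastructure
    # from an enterprise-level objective to a concrete data location.
    from collections import deque

    edges = {
        "Enterprise":   ["Plant-Zurich", "Plant-NY"],
        "Plant-Zurich": ["Line-1", "Line-2"],
        "Line-1":       ["Press-A", "Oven-B"],
        "Press-A":      ["downtime_log_press_a"],
        "Oven-B":       ["temp_history_oven_b"],
    }

    def find_path(start, goal):
        """Breadth-first search from a high-level node to a data location."""
        queue = deque([[start]])
        while queue:
            path = queue.popleft()
            for nxt in edges.get(path[-1], []):
                if nxt == goal:
                    return path + [nxt]
                queue.append(path + [nxt])
        return None

    print(find_path("Enterprise", "downtime_log_press_a"))
    # ['Enterprise', 'Plant-Zurich', 'Line-1', 'Press-A', 'downtime_log_press_a']
    ```

    The path itself is the point: each hop is a relationship the agent can explain, which is what made the root-cause-analysis demos at PROVE IT legible to a human reviewer.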

     

    **Model Context Protocol (MCP)**

    Definition: MCP is a protocol that allows AI agents to connect to external tools and data sources, enabling them to retrieve information and perform actions beyond what's contained in their training data.

    Why it matters: MCP enables agents to go beyond the initial context from UNS and query historical data, work orders, and other systems of record to


About Industry40.tv

Each episode of the Industry40.tv Podcast treats you to an in-depth interview with leading AI practitioners, exploring the application of artificial intelligence in manufacturing and offering practical guidance for successful implementation.
Podcast website
