
Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design)

Brian T. O’Neill from Designing for Analytics

Available episodes

5 of 109
  • 178 - Designing Human-Friendly AI Tech in a World Moving Too Fast with Author and Speaker Kate O’Neill
    In this episode, I sat down with tech humanist Kate O’Neill to explore how organizations can balance human-centered design in a time when everyone is racing to find ways to leverage AI in their businesses. Kate introduced her “Now–Next Continuum,” a framework that distinguishes digital transformation (catching up) from true innovation (looking ahead). We dug into the real-world challenges and tensions of moving fast vs. creating impact with AI, how ethics fits into decision making, and the role of data in making informed decisions.

    Kate stressed the importance of organizations having clear purpose statements and values from the outset, shared the proxy metrics she uses to gauge human-friendliness, and explained how she applies a “harms of action vs. harms of inaction” lens to ethical decisions. Her key point: human-centered approaches to AI and technology creation aren’t slow; they create intentional structures that speed up smart choices while avoiding costly missteps.

    Highlights / Skip to:
    • How Kate approaches discussions with executives about moving fast, but also moving in a human-centered way, when building out AI solutions (1:03)
    • Exploring the lack of technical backgrounds among many CEOs and how this shapes the way organizations make big decisions around technical solutions (3:58)
    • FOMO and the “solution in search of a problem” problem in data (5:18)
    • Why ongoing ethnographic research and direct exposure to users are essential for true innovation (11:21)
    • Balancing organizational purpose and human-centered tech decisions, and why a defined purpose must precede these decisions (18:09)
    • How organizations can define, measure, operationalize, and act on ethical considerations in AI and data products (35:57)
    • Risk management vs. strategic optimism: balancing risk reduction with embracing the art of the possible when building AI solutions (43:54)

    Quotes from Today’s Episode
    • “I think the ethics and the governance and all those kinds of discussions [about the implications of digital transformation] are all very big-word, kind of jargon-y discussions that are easy to think aren’t important, but what they all tend to come down to is the alignment between what the business is trying to do and what the person on the other side of the business is trying to do.” –Kate O’Neill
    • “I’ve often heard the term digital transformation used almost interchangeably with the term innovation. And I think that’s a grave disservice that we do to those two concepts, because they’re very different. Digital transformation, to me, sits much more comfortably on the earlier side of the Now–Next Continuum. It’s about moving the past to the present… Innovation is about standing in the present and looking to the future and thinking about the art of the possible, like you said. What could we do? What could we extract from this unstructured data (this mess of stuff that’s something new and different) that could actually move us into green space, into territory that no one’s doing yet? Those are two very different sets of questions, and in most organizations, they need to be happening simultaneously.” –Kate O’Neill
    • “The reason I chose human-friendly [as a term] over human-centered is partly because I wanted to be very honest about the goal and not fall back into jargony kinds of language that you and I and the folks listening probably all understand in a certain way, but that the CEOs and the folks I’m trying to get reading this book (and making their decisions in a different way based on it) may not.” –Kate O’Neill
    • “We love coming up with new names for different things, like whether something is ‘cloud,’ or whether it’s ‘SaaS,’ or all these different terms that we’ve come up with over the years… After spending so long working in tech, it is kind of fun to laugh at it. But it’s nice that there’s a real earnestness to it that’s sort of evergreen [laugh]. People are always trying to genuinely solve human problems, which is what I try to tap into these days with the work that I do: helping business leaders, mostly, many of whom are non-tech leaders. And I think that’s where this really sticks: you get a lot of people who have ascended into CEO or other C-suite roles who don’t come from a technology background.” –Kate O’Neill
    • “My feeling is that if you’re not regularly doing ethnographic research and having a lot of exposure time directly to customers, you’re doomed. The people—the makers—have to be exposed to the users and stakeholders. There has to be ongoing work in this space; it can’t just be about defining project requirements and then disappearing. However, I don’t see a lot of data teams and AI teams that have non-technical research going on where they’re regularly spending time with end users or customers such that they could even imagine what the art of the possible could be.” –Brian T. O’Neill

    Links
    • KO Insights: https://www.koinsights.com/
    • Kate O’Neill on LinkedIn: https://www.linkedin.com/in/kateoneill/
    • Kate O’Neill’s book: What Matters Next: A Leader’s Guide to Making Human-Friendly Tech Decisions in a World That’s Moving Too Fast
    --------  
    50:10
  • 177 - Designing Effective Commercial AI Data Products for the Cold Chain with the CEO of PAXAFE
    In this episode, I talk with Ilya Preston, co-founder and CEO of PAXAFE, a logistics orchestration and decision intelligence platform for temperature-controlled supply chains (aka the “cold chain”). Ilya explains how PAXAFE helps companies shipping sensitive products, like pharmaceuticals, vaccines, food, and produce, by delivering end-to-end visibility and actionable insights powered by analytics and AI that reduce product loss, improve efficiency, and support smarter real-time decisions. Ilya shares the challenges of building a configurable system that works for transportation, planning, and quality teams across industries. We also discuss their product development philosophy, team structure, and use of AI for document processing, diagnostics, and workflow automation.

    Highlights / Skip to:
    • Intro to PAXAFE (2:13)
    • How PAXAFE brings tons of cold chain data together in one user experience (2:33)
    • Innovation in cold chain analytics is up, but so is cold chain product loss (4:42)
    • The product challenge of getting sufficient telemetry data at the right level of specificity to derive useful analytical insights (7:14)
    • Why and how PAXAFE pivoted away from providing IoT hardware to collect telemetry (10:23)
    • How PAXAFE supports complex customer workflows, cold chain logistics, and complex supply chains (13:57)
    • Who the end users of PAXAFE are, and how the product team designs for these users (20:00)
    • Lessons learned when Ilya’s team fell in love with its own product and didn’t listen to the market (23:57)
    • Pharma loses around $40 billion a year relying on “Bob’s intuition” in the warehouse; how PAXAFE balances institutional user knowledge with the cold hard facts of analytics (42:43)

    Quotes from Today’s Episode
    • “Our initial vision for what PAXAFE would become was 99.9% spot on. The only thing we misjudged was market readiness—we built a product that was a few years ahead of its time.” –Ilya
    • “As an industry, pharma is losing $40 billion worth of product every year because decisions are still based on warehouse intuition about what works and what doesn’t. In production, the problem is even more extreme, with roughly $800 billion lost annually due to temperature issues and excursions.” –Ilya
    • “With our own design, our initial hypothesis and vision for what PAXAFE could be really shaped where we are today. Early on, we had a strong perspective on what our customers needed—and along the way, we fell in love with our own product and design.” –Ilya
    • “We spent months perfecting risk scores… only to hear from customers, ‘I don’t care about a 71 versus a 62—just tell me what to do.’ That single insight changed everything.” –Ilya
    • “If you’re not talking to customers or building a product that supports those conversations, you’re literally wasting time. In the zero-to-product-market-fit phase, nothing else matters; you need to focus entirely on understanding your customers and iterating your product around their needs.” –Ilya
    • “Don’t build anything on day one, and probably not on day two, three, or four either. Go out and talk to customers. Focus not on what they think they need, but on their real pain points. Understand their existing workflows and the constraints they face while trying to solve those problems.” –Ilya

    Links
    • PAXAFE: https://www.paxafe.com/
    • Ilya Preston on LinkedIn: https://www.linkedin.com/in/ilyapreston/
    • PAXAFE on LinkedIn: https://www.linkedin.com/company/paxafe/
    --------  
    49:20
  • 176 - (Part 2) The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications
    This is part two of the framework; if you missed part one, head to episode 175 and start there so you’re all caught up.

    In this episode of Experiencing Data, I continue my deep dive into the MIRRR UX Framework for designing trustworthy agentic AI applications. Building on Part 1’s “Monitor” and “Interrupt,” I unpack the three R’s—Redirect, Rerun, and Rollback—and share practical strategies for data product managers and leaders tasked with creating AI systems people will actually trust and use. I explain human-centered approaches to thinking about automation and how to handle unexpected outcomes in agentic AI applications without losing user confidence. I am hoping this control framework will help you get more value out of your data while simultaneously creating value for the human stakeholders, users, and customers. (A rough illustrative code sketch of the Rerun and Rollback controls follows this entry.)

    Highlights / Skip to:
    • Introducing the MIRRR UX Framework (1:08)
    • Designing for trust and user adoption, plus the perspectives you should be including when designing systems (2:31)
    • Monitor and Interrupt controls let humans pause anything from a single AI task to the entire agent (3:17)
    • Explaining “redirection” in the example context of claims adjusters working on insurance claims, so adjusters (users) can focus on important decisions (4:35)
    • Rerun controls let humans redo an agentic task after unexpected results, preventing errors and building trust in early AI rollouts (11:12)
    • Rerun vs. Redirect: what the difference is in the context of AI, using additional use cases from the insurance claim processing domain (12:07)
    • Empathy and user experience in AI adoption, and how the most useful insights come from directly observing users, not from analytics (18:28)
    • Thinking about agentic AI as glue for existing applications and workflows, or as a worker (27:35)

    Quotes from Today’s Episode
    • “The value of AI isn’t just about technical capability; it’s based in large part on whether the end users will actually trust and adopt it. If we don’t design for trust from the start, even the most advanced AI can fail to deliver value.”
    • “In agentic AI, knowing when to automate is just as important as knowing what to automate. Smart product and design decisions mean sometimes holding back on full automation until the people, processes, and culture are ready for it.”
    • “Sometimes the most valuable thing you can do is slow down, create checkpoints, and give people a chance to course-correct before the work goes too far in the wrong direction.”
    • “Reruns and rollbacks shouldn’t be seen as failures; they’re essential safety mechanisms that protect both the integrity of the work and the trust of the humans in the loop. They give people the confidence to keep using the system, even when mistakes happen.”
    • “You can’t measure trust in an AI system by counting logins or tracking clicks. True adoption comes from understanding the people using it, listening to them, observing their workflows, and learning what really builds or breaks their confidence.”
    • “You’ll never learn the real reasons behind a team’s choices by only looking at analytics; you have to actually talk to them and watch them work.”
    • “Labels matter: what you call a button or an action can shape how people interpret and trust what will happen when they click it.”

    Related episode: 175 - The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications (Part 1)
    --------  
    29:52
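
    As a rough illustration of the Rerun and Rollback controls discussed in episode 176 above: the sketch below is my own, not from the episode, and every name in it (AgentTask, TaskResult, run_fn) is hypothetical. The idea it shows is that an agent’s attempts are kept as visible history, so a human can ask for another try or discard the latest result before downstream work builds on it.

        # Hypothetical Python sketch of Rerun/Rollback controls for one agent task.
        # AgentTask, TaskResult, and run_fn are illustrative names, not from the episode.
        from dataclasses import dataclass, field
        from typing import Callable, List

        @dataclass
        class TaskResult:
            output: str                                     # what the agent produced for this attempt

        @dataclass
        class AgentTask:
            run_fn: Callable[[], str]                       # the agent's work for this step
            history: List[TaskResult] = field(default_factory=list)

            def run(self) -> TaskResult:
                result = TaskResult(output=self.run_fn())   # one attempt by the agent
                self.history.append(result)                 # keep every attempt for oversight
                return result

            def rerun(self) -> TaskResult:
                # Rerun: a human saw an unexpected result and asks the agent to try again.
                return self.run()

            def rollback(self) -> None:
                # Rollback: discard the latest attempt so nothing downstream builds on it.
                if self.history:
                    self.history.pop()

    Keeping a visible history, rather than silently overwriting results, mirrors the trust argument in the episode: reruns and rollbacks are normal safety mechanisms, not hidden failures.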
  • 175 - The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications (Part 1)
    In this episode of Experiencing Data, I introduce part 1 of my new MIRRR UX framework for designing trustworthy agentic AI applications—you know, the kind that might actually get used and have the opportunity to create the desired business value everyone seeks! One of the biggest challenges with traditional analytics, ML, and now LLM-driven AI agents is getting end users and stakeholders to trust and utilize these data products—especially if we’re asking humans in the loop to make changes to their behavior or ways of working.

    In this episode, I challenge the idea that software UIs will vanish with the rise of AI-based automation. In fact, the MIRRR framework is based on the idea that AI agents should be “in the human loop,” and a control surface (user interface) may in many situations be essential to ensure any automated workers engender trust with their human overlords. By properly considering the control and oversight that end users and stakeholders need, you can enable the business value and UX outcomes that your paying customers, stakeholders, and application users seek from agentic AI.

    Using use cases from insurance claims processing, in this episode I introduce the first two of five control points in the MIRRR framework—Monitor and Interrupt. These control points represent core actions that define how AI agents often should operate and interact within human systems (a rough illustrative sketch follows this entry):
    • Monitor – enabling appropriate transparency into AI agent behavior and performance
    • Interrupt – designing both manual and automated pausing mechanisms to ensure human oversight remains possible when needed
    …and in a couple of weeks, stay tuned for part 2, where I’ll wrap up this first version of my MIRRR framework.

    Highlights / Skip to:
    • 00:34 Introducing the MIRRR UX Framework for designing trustworthy agentic AI applications
    • 01:27 The importance of trust in AI systems and how it is linked to user adoption
    • 03:06 Cultural shifts, AI hype, and growing AI skepticism
    • 04:13 Human-centered design practices for agentic AI
    • 06:48 How understanding your users’ needs does not change with agentic AI, and how trust in agentic applications has direct ties to user adoption and value creation
    • 11:32 Measuring the success of agentic applications with UX outcomes
    • 15:26 Introducing the first two of five MIRRR framework control points
    • 16:29 M is for Monitor: understanding the agent’s “performance” and the right level of transparency end users need, from individual tasks to aggregate views
    • 20:29 I is for Interrupt: when and why users may need to stop the agent, and what happens next
    • 28:02 Conclusion and next steps
    --------  
    28:51
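
    To make the Monitor and Interrupt control points from episode 175 above concrete, here is a minimal sketch, again my own illustration with hypothetical names (AgentController, before_task); the episode describes the framework conceptually, not this code. The agent records every task for a monitoring view, and checks a pause flag, settable by a human or an automated guardrail, before starting each task.

        # Hypothetical Python sketch of Monitor/Interrupt controls for an agent loop.
        # AgentController and its method names are illustrative, not from the episode.
        import threading

        class AgentController:
            def __init__(self) -> None:
                self._paused = threading.Event()   # set = agent is paused (Interrupt)
                self.activity_log = []             # per-task records for a monitoring UI (Monitor)

            def record(self, task_name: str, status: str) -> None:
                self.activity_log.append((task_name, status))  # transparency per task

            def interrupt(self) -> None:
                self._paused.set()                 # a human or an automated check pauses the agent

            def resume(self) -> None:
                self._paused.clear()

            def before_task(self, task_name: str) -> bool:
                # Called by the agent before each task; False means "stop here and wait."
                if self._paused.is_set():
                    self.record(task_name, "paused")
                    return False
                self.record(task_name, "started")
                return True

    The design choice worth noting: interruption is checked at task boundaries, so a pause can apply to a single task or, if left set, to the entire agent, matching the range of control the episode describes.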
  • 174 - Why AI Adoption Moves at the Speed of User Trust: Irina Malkova on Lessons Learned Building Data Products at Salesforce
    In this episode of Experiencing Data, I chat with Irina Malkova, VP of AI Engineering and VP of Data and Analytics for Tech and Product at Salesforce. Irina shares how her teams are reinventing internal analytics, combining classic product data work with cutting-edge AI engineering. Her recent LinkedIn post, “AI adoption moves at the speed of user trust,” with its strong design-centered perspective, inspired today’s episode. (I even quoted her on this in a couple of recent product design conference talks I gave!)

    In today’s drop, Irina shares how they’re enabling analytical insights at Salesforce via a Slack-based AI agent, how they have changed their AI and engineering org structures (and why), the bad advice they got on organizing their data product teams, and more. This is a great episode for senior data product and AI executives managing complex orgs and technology environments who want to see how Salesforce is scaling AI for smarter, faster decisions.
    --------  
    47:50


About Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design)

Is the value of your enterprise analytics SaaS or AI product not obvious through its UI/UX? Got the data and ML models right... but user adoption of your dashboards and UI isn’t what you hoped it would be? While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be? If you lead an internal enterprise data team, have you heard that a “data product” approach can help—but you’re concerned it’s all hype?

My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SaaS analytics, AI applications, and internal data products indispensable to their customers. After all, you can’t create business value with data if the humans in the loop can’t or won’t use your solutions.

Every 2 weeks, I release interviews with experts and impressive people I’ve met who are doing interesting work at the intersection of enterprise software product management, UX design, AI, and analytics—work that you need to hear about and from whom I hope you can borrow strategies. I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users’ and customers’ lives better.

Hashtag: #ExperiencingData

JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS: https://designingforanalytics.com/ed

ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Podcast website
