
Scrum Master Toolbox Podcast: Agile storytelling from the trenches

Vasco Duarte, Agile Coach, Certified Scrum Master, Certified Product Owner

Available episodes

5 of 354
  • Swimming in Tech Debt — Practical Techniques to Keep Your Team from Drowning in Its Codebase | Lou Franco
    BONUS: Swimming in Tech Debt — Practical Techniques to Keep Your Team from Drowning in Its Codebase

In this fascinating conversation, veteran software engineer and author Lou Franco shares hard-won lessons from decades at startups, Trello, and Atlassian. We explore his book "Swimming in Tech Debt," diving deep into the 8 Questions framework for evaluating tech debt decisions, personal practices that compound over time, team-level strategies for systematic improvement, and leadership approaches that balance velocity with sustainability. Lou reveals why tech debt is often the result of success, how to navigate the spectrum between ignoring debt and rewriting too much, and practical techniques individuals, teams, and leaders can use starting today.

The Exit Interview That Changed Everything

"We didn't go slower by paying tech debt. We went actually faster, because we were constantly in that code, and now we didn't have to run into problems." — Lou Franco

Lou's understanding of tech debt crystallized during an exit interview at Atalasoft, a small startup where he'd spent years. An engineer leaving the company confronted him: "You guys don't care about tech debt." Lou had been focused on shipping features, believing that paying tech debt would slow them down. But this engineer told a different story — when they finally fixed their terrible build and installation system, they actually sped up. They were constantly touching that code, and removing the friction made everything easier. This moment revealed a fundamental truth: tech debt isn't just about code quality or engineering pride. It's about velocity, momentum, and the ability to move fast sustainably. Lou carried this lesson through his career at Trello (where he learned the dangers of rewriting too much) and Atlassian (where he saw enterprise-scale tech debt management). These experiences became the foundation for "Swimming in Tech Debt."

Tech Debt Is the Result of Success

"Tech debt is often the result of success. Unsuccessful projects don't have tech debt." — Lou Franco

This reframes the entire conversation about tech debt. Failed products don't accumulate debt — they disappear before it matters. Tech debt emerges when your code survives long enough to outlive its original assumptions, when your user base grows beyond initial expectations, when your team scales faster than your architecture anticipated. At Atalasoft, they built for 10 users and got 100. At Trello, mobile usage exploded beyond their web-first assumptions. Success creates tech debt by changing the context in which code operates. This means tech debt conversations should happen at different intensities depending on where you are in the product lifecycle. Early startups pursuing product-market fit should minimize tech debt investments — move fast, learn, potentially throw away the code. Growth-stage companies need balanced approaches. Mature products benefit significantly from tech debt investments because operational efficiency compounds over years. Understanding this lifecycle perspective helps teams make appropriate decisions rather than applying one-size-fits-all rules.

The 8 Questions Framework for Tech Debt Decisions

"Those 8 questions guide you to what you should do. If it's risky, has regressions, and you don't even know if it's gonna work, this is when you're gonna do a project spike." — Lou Franco

Lou introduces a systematic framework for evaluating whether to pay tech debt, inspired by Bob Moesta's push-pull forces from product management. The 8 questions create a complete picture:

  • Visibility — Will people outside the team understand what we're doing?
  • Alignment — Does this match our engineering values and target architecture?
  • Resistance — How hard is this code to work with right now?
  • Volatility — How often do we touch this code?
  • Regression Risk — What's the chance we'll introduce new problems?
  • Project Size — How big is this to fix?
  • Estimate Risk — How uncertain are we about the effort required?
  • Outcome Uncertainty — How confident are we the fix will actually improve things?

High volatility and high resistance with low regression risk? Pay the debt now. High regression risk with no tests? Write tests first, then reassess. Uncertain outcomes on a big project? Do a spike or proof of concept. The framework prevents both extremes — ignoring costly debt and undertaking risky rewrites without proper preparation.
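To make that guidance concrete, here is a minimal TypeScript sketch of how a team might record its answers to the eight questions and derive a rough recommendation. The 1-to-5 scale, the field names, and the decision thresholds are illustrative assumptions made for this write-up, not a rubric taken from Lou's book.

```typescript
// Illustrative sketch only: the fields mirror the eight questions above,
// but the 1-5 scale and the thresholds are assumptions, not Lou Franco's rubric.
type Score = 1 | 2 | 3 | 4 | 5;

interface TechDebtAssessment {
  visibility: Score;         // Will people outside the team understand the work?
  alignment: Score;          // Does it match our values and target architecture?
  resistance: Score;         // How hard is this code to work with right now?
  volatility: Score;         // How often do we touch this code?
  regressionRisk: Score;     // Chance of introducing new problems
  projectSize: Score;        // How big is the fix?
  estimateRisk: Score;       // How uncertain is the effort estimate?
  outcomeUncertainty: Score; // How unsure are we that the fix will actually help?
}

function recommend(a: TechDebtAssessment): string {
  // High-churn, high-friction code with low regression risk: pay the debt now.
  if (a.volatility >= 4 && a.resistance >= 4 && a.regressionRisk <= 2) {
    return "Pay the debt now";
  }
  // Risky to change without a safety net: write tests first, then reassess.
  if (a.regressionRisk >= 4) {
    return "Write tests first, then reassess";
  }
  // Big and uncertain: run a spike or proof of concept before committing.
  if (a.projectSize >= 4 && (a.outcomeUncertainty >= 4 || a.estimateRisk >= 4)) {
    return "Do a spike or proof of concept";
  }
  return "Schedule against the tech debt budget";
}

// Example: volatile, painful code that is already well covered by tests.
console.log(recommend({
  visibility: 3, alignment: 4, resistance: 5, volatility: 5,
  regressionRisk: 2, projectSize: 2, estimateRisk: 2, outcomeUncertainty: 2,
}));
```

The point of writing it down, even this crudely, is that the team argues about scores and thresholds instead of arguing about taste.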
Personal Practices That Compound Daily

"When I sit down at my desk, the first thing I do is I pay a little tech debt. I'm looking at code, I'm about to change it, do I even understand it? Am I having some kind of resistance to it? Put in a little helpful comment, maybe a little refactoring." — Lou Franco

Lou shares personal habits that create compounding improvements over time. Start each coding session by paying a small amount of tech debt in the area you're about to work — add a clarifying comment, extract a confusing variable, improve a function name. This warms you up, reduces friction for your actual work, and leaves the code slightly better than you found it. The clean-as-you-go philosophy means tech debt never accumulates faster than you can manage it. But Lou's most powerful practice comes at the end of each session: mutation testing by hand. Before finishing for the day, deliberately break something — change a plus to minus, a less-than to less-than-or-equal. See if tests catch it. Often they don't, revealing gaps in test coverage. The key insight: don't fix it immediately. Leave that failing test as the bridge to tomorrow's coding session. It connects today's momentum to tomorrow's work, ensuring you always start with context and purpose rather than cold-starting each day.

Mutation Testing: Breaking Things on Purpose

"Before I'm done working on a coding session, I break something on purpose. I'll change a plus to a minus, a less than to a less than equals, and see if tests break. A lot of times tests don't break. Now you've found a problem in your test." — Lou Franco

Manual mutation testing — deliberately breaking code to verify tests catch the break — reveals a critical gap in most test suites. You can have 100% code coverage and still have untested behavior. A line of code that's executed during tests isn't necessarily tested — the test might not actually verify what that line does. By changing operators, flipping booleans, or altering constants, you discover whether your tests protect against actual logic errors or just exercise code paths. Lou recommends doing this manually as part of your daily practice, but automated tools exist for systematic discovery: Stryker (for JavaScript, C#, Scala) and MutMut (for Python) can mutate your entire codebase and report which mutations survive uncaught. This isn't just about test quality — it's about understanding what your code actually does and building confidence that changes won't introduce subtle bugs.
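To illustrate the kind of gap a hand-made mutation exposes, here is a small, invented TypeScript example: a function, a test that merely executes it, and the one-character mutation that the test fails to catch. The function and tests are made up for this illustration; tools such as Stryker automate exactly this kind of operator mutation across a whole codebase.

```typescript
// Invented example: a tiny pricing rule, a weak test, and the hand-made mutation it misses.
function discountedTotal(subtotal: number, threshold: number): number {
  // Orders at or above the threshold get 10% off.
  return subtotal >= threshold ? subtotal * 0.9 : subtotal;
}

// A "coverage-only" test: it executes the line above but never pins its behavior.
function testDiscountRuns(): void {
  const total = discountedTotal(200, 100);
  if (typeof total !== "number") throw new Error("expected a number");
}

// End-of-day mutation, applied by hand: change >= to > inside discountedTotal.
// testDiscountRuns still passes, so the mutant survives and we have learned that
// nothing protects the boundary case. A stronger test pins the behavior instead:
function testDiscountAtThreshold(): void {
  if (discountedTotal(100, 100) !== 90) {
    throw new Error("orders exactly at the threshold should be discounted");
  }
}

testDiscountRuns();
testDiscountAtThreshold();
```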
Team-Level Practices: Budgets, Backlogs, and Target Architecture

"Create a target architecture document — where would we be if we started over today? Every PR is an opportunity to move slightly toward that target." — Lou Franco

At the team level, Lou advocates for three interconnected practices. First, create a target architecture document that describes where you'd be if starting fresh today — not a detailed design, but architectural patterns, technology choices, and structural principles that represent current best practices. This isn't a rewrite plan; it's a North Star. Every pull request becomes an opportunity to move incrementally toward that target when touching relevant code. Second, establish a budget split between PM-led feature work and engineering-led tech debt work — perhaps 80/20 or whatever ratio fits your product lifecycle stage. This creates predictable capacity for tech debt without requiring constant negotiation. Third, hold quarterly tech debt backlog meetings separate from sprint planning. Treat this backlog like PMs treat product discovery — explore options, estimate impacts, prioritize based on the 8 Questions framework. Some items fit in sprints; others require dedicated engineers for a quarter or two. This systematic approach prevents tech debt from being perpetually deprioritized while avoiding the opposite extreme of engineers disappearing into six-month "improvement" projects with no visible progress.

The Atlassian Five-Alarm Fire

"The Atlassian CTO's 'five-alarm fire' — stopping all feature development to focus on reliability. I reduced sync errors by 75% during that initiative." — Lou Franco

Lou shares a powerful example of leadership-driven tech debt management at scale. The Atlassian CTO called a "five-alarm fire" — halting all feature development across the company to focus exclusively on reliability and tech debt. This wasn't panic; it was strategic recognition that accumulated debt threatened the business. Lou worked on reducing sync errors, achieving a 75% reduction during this focused period. The initiative demonstrated several leadership principles: willingness to make hard calls that stop revenue-generating feature work, clear communication of why reliability matters strategically, trust that teams will use the time wisely, and commitment to see it through despite pressure to resume features. This level of intervention is rare and shouldn't be frequent, but it shows what's possible when leadership truly prioritizes tech debt. More commonly, leaders should express product lifecycle constraints (startup urgency vs. mature product stability), give teams autonomy to find appropriate projects within those constraints, and require accountability through visible metrics and dashboards that show progress.

The Rewrite Trap: Why Big Rewrites Usually Fail

"A system that took 10 years to write has implicit knowledge that can't be replicated in 6 months. I'm mostly gonna advocate for piecemeal migrations along the way, reducing the size of the problem over time." — Lou Franco

Lou lived through Trello's iOS navigation rewrite — a classic example of throwing away working code to start fresh, only to discover all the edge cases, implicit behaviors, and user expectations baked into the "old" system. A codebase that evolved over several years contains implicit knowledge — user workflows, edge case handling, performance optimizations, and subtle behaviors that users rely on even if they never explicitly requested them. Attempting to rewrite this in six months inevitably misses critical details. Lou strongly advocates for piecemeal migrations instead.
The Trello "Decaffeinate Project" exemplifies this approach — migrating from CoffeeScript to TypeScript incrementally, with public dashboards showing the percentage remaining, interoperable technologies allowing gradual transition, and the ability to pause or reverse if needed. Keep both systems running in parallel during migrations. Use runtime observability to verify new code behaves identically to old code. Reduce the problem size steadily over months rather than attempting big-bang replacements. The only exception: sometimes keeping parallel systems requires scaffolding that creates its own complexity, so evaluate whether piecemeal migration is actually simpler or if you're better off living with the current system. Making Tech Debt Visible Through Dashboards "Put up a dashboard, showing it happen. Make invisible internal improvements visible through metrics engineering leadership understands." — Lou Franco   One of tech debt's biggest challenges is invisibility — non-technical stakeholders can't see the improvement from refactoring or test coverage. Lou learned to make tech debt work visible through dashboards and metrics. The Decaffeinate Project tracked percentage of CoffeeScript files remaining, providing a clear progress indicator anyone could understand. When reducing sync errors, Lou created dashboards showing error rates declining over time. These visualizations serve multiple purposes: they demonstrate value to leadership, create accountability for engineering teams, build momentum as progress becomes visible, and help teams celebrate wins that would otherwise go unnoticed. The key is choosing metrics that matter to the business — error rates, page load times, deployment frequency, mean time to recovery — rather than pure code quality metrics like cyclomatic complexity that don't translate outside engineering. Connect tech debt work to customer experience, reliability, or developer productivity in ways leadership can see and value. Onboarding as a Tech Debt Opportunity "Unit testing is a really great way to learn a system. It's like an executable specification that's helping you prove that you understand the system." — Lou Franco   Lou identifies onboarding as an underutilized opportunity for tech debt reduction. When new engineers join, they need to learn the codebase. Rather than just reading code or shadowing, Lou suggests having them write unit tests in areas they're learning. This serves dual purposes: tests are executable specifications that prove understanding of system behavior, and they create safety nets in areas that likely lack coverage (otherwise, why would new engineers be confused by the code?). The new engineer gets hands-on learning, the team gets better test coverage, and everyone wins. This practice also surfaces confusing code — if new engineers struggle to understand what to test, that's a signal the code needs clarifying comments, better naming, or refactoring. Make onboarding a systematic tech debt reduction opportunity rather than passive knowledge transfer. Leadership's Role: Constraints, Autonomy, and Accountability "Leadership needs to express the constraints. Tell the team what you're feeling about tech debt at a high level, and what you think generally is the appropriate amount of time to be spent on it. Then give them autonomy." — Lou Franco   Lou distills leadership's role in tech debt management to three elements. 
Onboarding as a Tech Debt Opportunity

"Unit testing is a really great way to learn a system. It's like an executable specification that's helping you prove that you understand the system." — Lou Franco

Lou identifies onboarding as an underutilized opportunity for tech debt reduction. When new engineers join, they need to learn the codebase. Rather than just reading code or shadowing, Lou suggests having them write unit tests in areas they're learning. This serves dual purposes: tests are executable specifications that prove understanding of system behavior, and they create safety nets in areas that likely lack coverage (otherwise, why would new engineers be confused by the code?). The new engineer gets hands-on learning, the team gets better test coverage, and everyone wins. This practice also surfaces confusing code — if new engineers struggle to understand what to test, that's a signal the code needs clarifying comments, better naming, or refactoring. Make onboarding a systematic tech debt reduction opportunity rather than passive knowledge transfer.

Leadership's Role: Constraints, Autonomy, and Accountability

"Leadership needs to express the constraints. Tell the team what you're feeling about tech debt at a high level, and what you think generally is the appropriate amount of time to be spent on it. Then give them autonomy." — Lou Franco

Lou distills leadership's role in tech debt management to three elements. First, express constraints — communicate where you believe the product is in its lifecycle (early startup, rapid growth, mature cash cow) and what that means for tech debt tolerance. Are we pursuing product-market fit where code might be thrown away? Are we scaling a proven product where reliability matters? Are we maintaining a stable system where operational efficiency pays dividends? These constraints help teams make appropriate trade-offs. Second, give autonomy — once constraints are clear, trust teams to identify specific tech debt projects that fit those constraints. Engineers understand the codebase's pain points better than leaders do. Third, require accountability — teams must make their work visible through dashboards, metrics, and regular updates. Autonomy without accountability becomes invisible engineering projects that might not deliver value. Accountability without autonomy becomes micromanagement that wastes engineering judgment. The balance creates space for teams to make smart decisions while keeping leadership informed and confident in the investment.

AI and the Future of Tech Debt

"I really do AI-assisted software engineering. And by that, I mean I 100% review every single line of that code. I write the tests, and all the code is as I would have written it, it's just a lot faster. Developers are still responsible for it. Read the code." — Lou Franco

Lou has a chapter about AI in his book, addressing the elephant in the room: will AI-generated code create massive tech debt? His answer is nuanced. AI can accelerate development tremendously if used correctly — Lou uses it extensively but reviews every single line, writes all tests himself, and ensures the code matches what he would have written manually. The problem emerges with "vibe coders" — non-developers using AI to generate code they don't understand, creating unmaintainable messes that become someone else's problem. Developers remain responsible for all code, regardless of how it's generated. This means you must read and understand AI-generated code, not blindly accept it. Lou also raises supply chain security concerns — dependencies can contain malicious code, and AI might introduce vulnerabilities developers miss. His recommendation: stay six months behind on dependency updates, let others discover the problems first, and consider separate sandboxed development machines to limit security exposure. AI is a powerful tool, but it doesn't eliminate the need for engineering judgment, testing discipline, or code review practices.

The Style Guide Beyond Formatting

"Have a style guide that goes beyond formatting to include target architecture. This is the kind of code we want to write going forward." — Lou Franco

Lou advocates for style guides that extend beyond tabs-versus-spaces formatting rules to include architectural guidance. Document patterns you want to move toward: how should components be structured, what state management approaches do we prefer, how should we handle errors, what testing patterns should we follow? This creates a shared understanding of the target architecture without requiring a massive design document. When reviewing pull requests, teams can reference the style guide to explain why certain approaches align with where the codebase is headed versus perpetuating old patterns. This makes tech debt conversations less personal and more objective — it's not about criticizing someone's code, it's about aligning with team standards and strategic direction. The style guide becomes a living document that evolves as the team learns and technology changes, capturing collective wisdom about what good code looks like in your specific context.
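One hedged way some teams make the "code we want going forward" part of such a guide enforceable is to encode target-architecture boundaries as lint rules. The sketch below uses ESLint's built-in no-restricted-imports rule in a flat config file; the legacy path pattern and the message are invented examples, not something prescribed in the episode.

```typescript
// eslint.config.mjs (illustrative): steer new code toward the target architecture
// by flagging imports from modules the style guide marks as legacy.
export default [
  {
    files: ["src/**/*.ts", "src/**/*.tsx"],
    rules: {
      "no-restricted-imports": [
        "error",
        {
          patterns: [
            {
              group: ["**/legacy/*"],
              message:
                "Legacy module: see the style guide's target-architecture section " +
                "for the preferred replacement before adding new dependencies on it.",
            },
          ],
        },
      ],
    },
  },
];
```

A rule like this turns a style-guide sentence into an automatic, impersonal reminder at review time, which is exactly the "less personal, more objective" effect described above.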
Recommended Resources

Some of the resources mentioned in this episode include:

  • Steve Blank's Four Steps to the Epiphany
  • The podcast episode with Bernie Maloney where we discuss the critical difference between "enterprise" and "startup"
  • Geoffrey Moore's Crossing the Chasm and Dealing with Darwin

About Lou Franco

Lou Franco is a veteran software engineer and author of Swimming in Tech Debt. With decades of experience at startups as well as at Trello and Atlassian, he's seen both sides of debt—as coder and leader. Today, he advises teams on engineering practices, helping them turn messy codebases into momentum.

You can link with Lou Franco on LinkedIn and learn more at LouFranco.com.
    --------  
    33:56
  • The Agile Organization as a Learning System With Tom Gilb and Simon Holzapfel
    BONUS: The Agile Organization as a Learning System

Think Like a Farmer, Not a Factory Manager

"Go slow to go fast. If you want to go somewhere, go together as a team. Take a farmer's mentality."

Simon contrasts monoculture industrial thinking with the permaculture approach of Joel Salatin. Industrial approaches optimize for short-term efficiency but create fragile systems. Farmer thinking recognizes that healthy ecosystems require patience, diversity, and nurturing conditions for growth. The nervous system that's constantly stressed never builds much over time—think of the body, trust the body, let the body be a body.

Value Masters, Not Scrum Masters

"We need value masters, not Scrum Masters. Agile is a useful tool for delivering value, but value itself is primary. Everything else is secondary—Agile included."

Tom makes his most provocative point: if you asked a top manager whether they'd prefer an agile person or value delivery, the answer is obvious. Agile is one tactic among many for delivering value—not even a necessary one. The shift required is from process mastery to value mastery, from Scrum Masters to people who understand and can deliver on critical stakeholder values.

The DOVE Manifesto

"I wrote a paper called DOVE—Deliver Optimum Values Efficiently. It's the manifesto focusing on delivering value, delivering value, delivering value."

Tom offers his alternative to the Agile Manifesto: a set of principles laser-focused on value delivery. The document includes 10 principles on a single page that can guide any organization toward genuine impact. Everything else—processes, frameworks, methodologies—is a set of secondary tools in service of this primary goal. Read Tom's DOVE manifesto here.

Building the Glue Between Social and Physical Technology

"Value is created in interactions. That's where the social and physical technology meet—that joyous boundary where stuff gets done."

Simon describes seeing the world through two lenses: physical technology (visible tools and systems) and social technology (culture, relationships, the air we breathe). Eric Beinhocker's insight is that progress happens at the intersection. The Gilbian learning loops provide the structure; trust and human connection provide the fuel. Together, they create organizations that can actually learn and adapt.

Further Reading To Support Your Learning Journey

Explore these curated resources to deepen your understanding of strategic planning, value-based management, and transformative organizational change.

📚 Essential Reading

  • Competitive Engineering — Tom Gilb's seminal book on requirements engineering and value-based development approaches.
  • What is Wrong with OKRs (Paper by Tom Gilb) — A critical analysis of the popular OKR framework and its limitations in measuring real value.
  • DOVE Manifesto by Tom Gilb — Detailed exploration of the DOVE (Design Of Value Engineering) methodology for quantifying and optimizing stakeholder value.

🎓 Learning Materials

  • Tom Gilb's Strategy Ringbook — A comprehensive collection of strategic planning principles and practical frameworks.
  • Tom Gilb's Video at the Strategy Meetup — Watch Tom Gilb discuss key strategic concepts and answer questions from the community.
  • Design Process Paper by Tom Gilb — An in-depth look at value-driven design processes and their practical application.
  • Esko Kilpi's Work on Conversations — Exploring how organizational conversations shape thinking, decision-making, and change.
🧭 Frameworks & Models

  • OODA Loop — The Observe-Orient-Decide-Act decision cycle for rapid strategic thinking and adaptation.

🎯 Practical Tips

  • Measurement of Increased Value — Focus on tracking actual value delivery rather than activity completion. Establish baseline measurements and regularly assess improvements in stakeholder-defined value dimensions.
  • Quantify Critical Values — Identify the 3-5 most important value attributes for your stakeholders. Make these concrete and measurable, avoiding vague qualities in favor of specific, quantifiable metrics.
  • Measurement vs Testing Process — Understand the distinction: measurement tells you how much value exists, while testing validates whether something works. Use both strategically—test hypotheses early, then measure outcomes continuously.

🔗 Related Profiles

  • Todd Covert - Montessori School of the Berkshires — Educational leadership and innovative approaches to value-based learning environments.

About Tom Gilb and Simon Holzapfel

Tom Gilb, born in the US, lived in London, and then moved to Norway in 1958. An independent teacher, consultant, and writer, he has worked in software engineering, corporate top management, and large-scale systems engineering. As the saying goes, Tom was writing about Agile before Agile was named. In 1976, Tom introduced the term "evolutionary" in his book Software Metrics, advocating for development in small, measurable steps. Today, we talk about Evo, the name Tom uses to describe his approach. Tom has worked with Dr. Deming and holds a certificate personally signed by him. You can listen to Tom Gilb's previous episodes here.

You can link with Tom Gilb on LinkedIn.

Simon Holzapfel is an educator, coach, and learning innovator who helps teams work with greater clarity, speed, and purpose. He specializes in separating strategy from tactics, enabling short-cycle decision-making and higher-value workflows. Simon has spent his career coaching individuals and teams to achieve performance with deeper meaning and joy. Simon is also the author of the Equonomist newsletter on Substack. And you can listen to Simon's previous episodes on the podcast here.

You can link with Simon Holzapfel on LinkedIn.
    --------  
    21:33
  • Quality 5.0—Quantifying the "Unmeasurable" With Tom Gilb and Simon Holzapfel
    BONUS: Quality 5.0—Quantifying the "Unmeasurable" With Tom Gilb and Simon Holzapfel

Clarification Before Quantification

"Quantification is not the main idea. The key idea is clarification—so that the executive team understands each other."

Tom emphasizes that measurement is a means to an end. The real goal is shared understanding. But quantification is a powerful clarification tactic because it forces precision. When someone says they want a "very fast car," asking "can we define a scale of measure?" immediately surfaces the vagueness. Miles per hour? Acceleration time? Top speed? Each choice defines what you're actually optimizing for.

The Scale-Meter-Target Framework

"First, define a scale of measure. Second, define the meter—the device for measuring. Third, set numbers: where are we now, what's the minimum to survive, and what does success look like?"

Tom's framework makes the abstract concrete:

  • Scale of measure — What dimension are you measuring? (e.g., time to complete task)
  • Meter — How will you measure it? (e.g., user testing with stopwatch)
  • Past/Status — Where are you now? (e.g., currently takes 47 seconds)
  • Tolerable — What's the minimum acceptable? (e.g., must be under 30 seconds to survive)
  • Target/Goal — What does success look like? (e.g., 15 seconds or less)

Many important concepts like "usability" decompose into 10+ different scales of measure—you're not looking for one magic number but a set of relevant metrics. (A short sketch after these show notes illustrates one way to record such a quantified requirement.)

Trust as the Organizational Hormone

"Change moves at the speed of trust. Once there's trust, information flows. Once information flows, the system comes to life and can learn. Until there's trust, you have the Soviet problem."

Simon introduces trust as the "human growth hormone" of organizational change—it's fast, doesn't require a user's manual, and enables everything else. Low-trust environments hoard information, guaranteeing poor outcomes. The practical advice? Make your work visible to your manager, alignment-check first, do something, show results. Living the learning cycle yourself builds trust incrementally. And as Tom adds: if you deliver increased critical value every week, you will build trust.

About Tom Gilb and Simon Holzapfel

Tom Gilb, born in the US, lived in London, and then moved to Norway in 1958. An independent teacher, consultant, and writer, he has worked in software engineering, corporate top management, and large-scale systems engineering. As the saying goes, Tom was writing about Agile before Agile was named. In 1976, Tom introduced the term "evolutionary" in his book Software Metrics, advocating for development in small, measurable steps. Today, we talk about Evo, the name Tom uses to describe his approach. Tom has worked with Dr. Deming and holds a certificate personally signed by him. You can listen to Tom Gilb's previous episodes here.

You can link with Tom Gilb on LinkedIn.

Simon Holzapfel is an educator, coach, and learning innovator who helps teams work with greater clarity, speed, and purpose. He specializes in separating strategy from tactics, enabling short-cycle decision-making and higher-value workflows. Simon has spent his career coaching individuals and teams to achieve performance with deeper meaning and joy. Simon is also the author of the Equonomist newsletter on Substack. And you can listen to Simon's previous episodes on the podcast here.

You can link with Simon Holzapfel on LinkedIn.
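To show one way the Scale-Meter-Target idea from this episode can be written down, here is a minimal TypeScript sketch using the task-completion-time numbers quoted above. The type and field names are assumptions that only loosely echo Tom Gilb's Planguage terms rather than reproducing them.

```typescript
// Illustrative only: a minimal record of one quantified requirement,
// using the task-completion-time example from the episode.
interface QuantifiedRequirement {
  name: string;
  scale: string;      // the dimension being measured
  meter: string;      // how it will be measured
  past: number;       // where we are now (baseline)
  tolerable: number;  // minimum acceptable level to survive
  goal: number;       // what success looks like
  unit: string;
}

const taskTime: QuantifiedRequirement = {
  name: "Usability.TaskCompletion",
  scale: "Seconds for a first-time user to complete the core task",
  meter: "Moderated user test with a stopwatch, 10 users per release",
  past: 47,       // currently takes 47 seconds
  tolerable: 30,  // must be under 30 seconds to survive
  goal: 15,       // success: 15 seconds or less
  unit: "seconds",
};

// A weekly checkpoint can then report progress toward the goal as a percentage.
function progressToGoal(r: QuantifiedRequirement, current: number): number {
  return ((r.past - current) / (r.past - r.goal)) * 100;
}

console.log(`${taskTime.name}: ${progressToGoal(taskTime, 39).toFixed(0)}% of the way to goal`);
```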
    --------  
    17:09
  • Testing as Measurement—Why Bug-Hunting Misses the Point With Tom Gilb and Simon Holzapfel
    BONUS: Testing as Measurement—Why Bug-Hunting Misses the Point With Tom Gilb and Simon Holzapfel

The Revelation That Almost Caused a Car Crash

"Tom said like 10 sentences in a row, kind of like a geometric proof, that just so blew my mind I almost drove off the road. I realized I had wasted hundreds of hours in boardrooms arguing about errors of which we were aware of perhaps 10%."

Simon shares the moment Tom's framework clicked for him. The insight? Traditional testing—finding bugs and defects—is the wrong focus entirely. It's a programmer's view of the world. Managers don't care about bugs; they care about results, about improvements in their business. Tom calls this shift moving from "testing" to "measurement of enhanced or increased value at every cycle."

The American Toast Problem

"How do we make toast in America? We burn the toast, and then we pay someone to scrape off the black bits off the bread."

Vasco invokes Deming's classic analogy to describe traditional software testing. The entire testing-at-the-end approach is fundamentally wasteful. Instead, Tom advocates for continuous measurement against quantified values. If you expected 3% progress toward your goals this week and didn't get it, you've learned something critical: your strategy needs to change. If you did get it, keep going with confidence.

Four Questions at Every Checkpoint

"Where are we going? Where are we now? Where should we have been at this point? And why is there a gap?"

Drawing from fighter pilot doctrine, these four questions should be asked at every micro-cycle—not just at quarterly reviews. Fighter pilots ask these questions every minute during critical missions, with clear abort criteria if answers are unacceptable. Most organizations have no abort criteria for their strategies at all, guaranteeing they'll discover failures far too late. (A small sketch after these show notes shows what a numeric checkpoint with an explicit abort criterion can look like.)

About Tom Gilb and Simon Holzapfel

Tom Gilb, born in the US, lived in London, and then moved to Norway in 1958. An independent teacher, consultant, and writer, he has worked in software engineering, corporate top management, and large-scale systems engineering. As the saying goes, Tom was writing about Agile before Agile was named. In 1976, Tom introduced the term "evolutionary" in his book Software Metrics, advocating for development in small, measurable steps. Today, we talk about Evo, the name Tom uses to describe his approach. Tom has worked with Dr. Deming and holds a certificate personally signed by him. You can listen to Tom Gilb's previous episodes here.

You can link with Tom Gilb on LinkedIn.

Simon Holzapfel is an educator, coach, and learning innovator who helps teams work with greater clarity, speed, and purpose. He specializes in separating strategy from tactics, enabling short-cycle decision-making and higher-value workflows. Simon has spent his career coaching individuals and teams to achieve performance with deeper meaning and joy. Simon is also the author of the Equonomist newsletter on Substack. And you can listen to Simon's previous episodes on the podcast here.

You can link with Simon Holzapfel on LinkedIn.
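As a small, hedged illustration of the four checkpoint questions and the missing abort criteria, the TypeScript sketch below compares expected and actual progress at a weekly checkpoint. The 3% weekly expectation comes from the episode, while the abort threshold and function names are invented for this example.

```typescript
// Illustrative only: a weekly checkpoint that answers the four questions numerically.
// The abort threshold is an invented example of an explicit abort criterion.
interface Checkpoint {
  goal: number;          // where are we going (target value of the metric)
  current: number;       // where are we now
  expectedByNow: number; // where should we have been at this point
}

function reviewCheckpoint(c: Checkpoint, abortGapPercent: number): string {
  const gap = c.expectedByNow - c.current; // why is there a gap? start by sizing it
  const gapPercent = (gap / c.goal) * 100;
  if (gapPercent <= 0) return "On or ahead of plan: keep going with confidence.";
  if (gapPercent >= abortGapPercent) {
    return `Gap of ${gapPercent.toFixed(1)}% of goal: abort criterion hit, change the strategy.`;
  }
  return `Gap of ${gapPercent.toFixed(1)}% of goal: investigate the cause before the next cycle.`;
}

// Example: we expected 3% progress this week (12% of the goal cumulatively) but reached only 9%.
console.log(reviewCheckpoint({ goal: 100, current: 9, expectedByNow: 12 }, 5));
```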
    --------  
    12:57
  • Continuous Strategy Engineering—Beyond Waterfall Planning With Tom Gilb and Simon Holzapfel
    BONUS: Continuous Strategy Engineering—Beyond Waterfall Planning With Tom Gilb and Simon Holzapfel

Strategy Professors Are Decades Behind

"The professors of strategy have no clue as to what Evo is. They are locked in decades ago, waterfall mode."

Tom's analysis is stark: the people teaching strategy in business schools haven't undergone the same agile transformation that software development experienced. They still think in terms of 5-year plans that get tested at the end—a guaranteed recipe for discovering failure too late. The alternative? Decompose any large strategy into weekly value delivery steps. And if you think that's impossible, ask any AI to do it for you—it will produce 52 reasonable weekly increments in about a minute.

Why OKRs Aren't Enough for Complex Systems

"If you're doing small-scale stuff that OKRs were designed for, like planning your personal work 14 days hence, OKRs are wonderful. If you're designing the air traffic control system for Europe, they're just too simple."

Tom distinguishes between tools appropriate for personal productivity and those needed for complex organizational strategy. OKRs force some thinking, which is good, but they weren't designed for—and have never been adapted to—large-scale systems engineering. His paper "What is Wrong with OKRs?" documents roughly 100 gaps between simple OKRs and what robust value requirements actually require. Check out Tom Gilb's paper on what's wrong with OKRs and how to fix it.

The Missing Alignment Layer

"We have no mental model for most of leadership about how you actually align people around clear vision."

Simon introduces the concept of a Hoshin-Kanri "sprinkler" system—imagine strategic clarity flowing from the top and misting over everyone's desk as alignment. Most organizations lack anything resembling this. They have Moses descending from expensive consultant retreats with tablets, but no continuous two-way flow of strategic information. The result? Teams work hard on things that don't matter while critical values go unaddressed.

About Tom Gilb and Simon Holzapfel

Tom Gilb, born in the US, lived in London, and then moved to Norway in 1958. An independent teacher, consultant, and writer, he has worked in software engineering, corporate top management, and large-scale systems engineering. As the saying goes, Tom was writing about Agile before Agile was named. In 1976, Tom introduced the term "evolutionary" in his book Software Metrics, advocating for development in small, measurable steps. Today, we talk about Evo, the name Tom uses to describe his approach. Tom has worked with Dr. Deming and holds a certificate personally signed by him. You can listen to Tom Gilb's previous episodes here.

You can link with Tom Gilb on LinkedIn.

Simon Holzapfel is an educator, coach, and learning innovator who helps teams work with greater clarity, speed, and purpose. He specializes in separating strategy from tactics, enabling short-cycle decision-making and higher-value workflows. Simon has spent his career coaching individuals and teams to achieve performance with deeper meaning and joy. Simon is also the author of the Equonomist newsletter on Substack. And you can listen to Simon's previous episodes on the podcast here.

You can link with Simon Holzapfel on LinkedIn.
    --------  
    14:04


About Scrum Master Toolbox Podcast: Agile storytelling from the trenches

Every weekday, Certified Scrum Master, Agile Coach, and business consultant Vasco Duarte interviews Scrum Masters and Agile Coaches from all over the world to bring you actionable advice, new tips and tricks, and daily doses of inspiring conversations that help you improve your craft as a Scrum Master. Stay tuned for BONUS episodes when we interview Agile gurus and other thought leaders in the business space to bring you the Agile Business perspective you need to succeed as a Scrum Master. Some of the topics we discuss include: Agile Business, Agile Strategy, Retrospectives, Team motivation, Sprint Planning, Daily Scrum, Sprint Review, Backlog Refinement, Scaling Scrum, Lean Startup, Test Driven Development (TDD), Behavior Driven Development (BDD), Paper Prototyping, QA in Scrum, the role of agile managers, servant leadership, agile coaching, and more!
Podcast website
