
M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Mirko Peters - Microsoft 365 Expert Podcast
Latest episode

Available episodes

5 of 219
  • Your SharePoint Content Map Is Lying to You
    Quick question: if someone new joined your organization tomorrow, how long would it take them to find the files they need in SharePoint or Teams? Ten seconds? Ten minutes? Or never? The truth is, most businesses don’t actually know the answer. In this podcast, we’ll break down the three layers of content assessment most teams miss and show you how to build a practical “report on findings” that leadership can act on. Today, we’ll walk through a systematic process inside Microsoft 365. Then we’ll look at what it reveals: how content is stored, how it’s used, and how people actually search. By the end, you’ll see what’s working, what’s broken, and how to fix findability step by step. Here’s a quick challenge before we dive in—pick one SharePoint site in your tenant and track how it’s used over the next seven days. I’ll point out the key metrics to collect as we go. Because neat diagrams and tidy maps often hide the real problem: they only look good on paper.Why Your Content Map Looks Perfect but Still FailsThat brings us to the bigger issue: why does a content map that looks perfect still leave people lost? On paper, everything may seem in order. Sites are well defined, libraries are separated cleanly, and even the folders look like they were built to pass an audit. But in practice, the very people who should benefit are the ones asking, “Where’s the latest version?” or “Should this live in Teams or SharePoint?” The structure exists, yet users still can’t reliably find what they need when it matters. That disconnect is the core problem. The truth is, a polished map gives the appearance of control but doesn’t prove actual usability. Imagine drawing a city grid with neat streets and intersections. It looks great, but the map doesn’t show you the daily traffic jams, the construction that blocks off half the roads, or the shortcuts people actually take. A SharePoint map works the same way—it explains where files *should* live, not how accessible those files really are in day-to-day work. We see a consistent pattern in organizations that go through a big migration or reorganization. The project produces beautiful diagrams, inventories, and folder structures. IT and leadership feel confident in the new system’s clarity. But within weeks, staff are duplicating files to avoid slow searches or even recreating documents rather than hunting for the “official” version. The files exist, but the process to reach them is so clunky that employees simply bypass it. This isn’t a one-off story; it’s a recognizable trend across many rollouts. What this shows is that mapping and assessment are not the same thing. Mapping catalogs what you have and where it sits. Assessment, on the other hand, asks whether those files still matter, who actually touches them, and how they fit into business workflows. Mapping gives you the layout, but assessment gives you the reality check—what’s being used, what’s ignored, and what may already be obsolete. This gap becomes more visible when you consider how much content in most organizations sits idle. The exact numbers vary, but analysts and consultants often point out that a large portion of enterprise content—sometimes the majority—is rarely revisited after it’s created. That means an archive can look highly structured yet still be dominated by documents no one searches, opens, or references again. It might resemble a well-maintained library where most of the books collect dust. Calling it “organized” doesn’t change the fact that it’s not helping anyone. 
And if so much content goes untouched, the implication is clear: neat diagrams don’t always point to value. A perfectly labeled collection of inactive files is still clutter, just with tidy labels. When leaders assume clean folders equal effective content, decisions become based on the illusion of order rather than on what actually supports the business. At that point, the governance effort starts managing material that no longer matters, while the information people truly rely on gets buried under digital noise. That’s why the “perfect” content map isn’t lying—it’s just incomplete. It shows one dimension but leaves out the deeper indicators of relevance and behavior. Without those, you can’t really tell whether your system is a healthy ecosystem or a polished ghost town. Later, we’ll highlight one simple question you can ask that instantly exposes whether your map is showing real life or just an illusion. And this takes us to the next step. If a content map only scratches the surface, the real challenge is figuring out how to see the layers underneath—the ones that explain not just where files are, but how they’re actually used and why they matter.The Three Layers of Content Assessment Everyone MissesThis is where most organizations miss the mark. They stop at counting what exists and assume that’s the full picture. But a real assessment has three distinct layers—and you need all of them to see content health clearly. Think of this as the framework to guide every decision about findability. Here are the three layers you can’t afford to skip: - Structural: this is the “where.” It’s your sites, libraries, and folders. Inventory them, capture last-modified dates, and map out the storage footprint. - Behavioral: this is the “what.” Look at which files people open, edit, share, or search for. Track access frequency, edit activity, and even common search queries. - Contextual: this is the “why.” Ask who owns the content, how it supports business processes, whether it has compliance requirements, and where it connects to outcomes. When you start treating these as layers, the flaws in a single-dimension audit become obvious. Let’s say you only measure structure. You’ll come back with a neat folder count but no sense of which libraries are dormant. If you only measure behavior, you’ll capture usage levels but miss out on the legal or compliance weight a file might carry even if it’s rarely touched. Without context, you’ll miss the difference between a frequently viewed but trivial doc and a rarely accessed yet critical record. One layer alone will always give you a distorted view. Think of it like a doctor’s checkup. Weight and height are structural—they describe the frame. Exercise habits and sleep patterns are behavioral—they show activity. But medical history and conditions are contextual—they explain risk. You’d never sign off on a person’s health using just one of those measures. Content works the same way. Of course, knowing the layers isn’t enough. You need practical evidence to fill each one. For structure, pull a site and library inventory along with file counts and last-modified dates. The goal is to know what you have and how long it’s been sitting there. For behavior, dig into access logs, edit frequency, shares, and even abandoned searches users run with no results. For context, capture ownership, compliance retention needs, and the processes those files actually support. Build your assessment artifacts around these three buckets, and suddenly the picture sharpens. 
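To make that concrete, here's a minimal Python sketch of what joining those three buckets could look like. It assumes you've already exported a site inventory, a usage report, and an ownership list to CSV; the file names and column headings are placeholders rather than anything Microsoft 365 produces out of the box, so adjust them to match whatever your own exports contain.

```python
import csv
from datetime import datetime, timedelta

# Placeholder exports: swap in whatever your tenant reporting actually produces.
INVENTORY_CSV = "inventory.csv"   # columns: library, file_count, last_modified (ISO date)
USAGE_CSV = "usage.csv"           # columns: library, views_90d, edits_90d
CONTEXT_CSV = "context.csv"       # columns: library, owner, retention_required (yes/no)

def load(path, key):
    """Index a CSV export by one of its columns."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[key]: row for row in csv.DictReader(f)}

inventory = load(INVENTORY_CSV, "library")
usage = load(USAGE_CSV, "library")
context = load(CONTEXT_CSV, "library")

stale_cutoff = datetime.now() - timedelta(days=365 * 2)  # example threshold: two years untouched

for library, inv in inventory.items():
    behavior = usage.get(library, {})
    ctx = context.get(library, {})
    views = int(behavior.get("views_90d", 0))
    edits = int(behavior.get("edits_90d", 0))
    last_modified = datetime.fromisoformat(inv["last_modified"])
    dormant = views == 0 and edits == 0 and last_modified < stale_cutoff
    must_keep = ctx.get("retention_required", "no").lower() == "yes"

    # One evidence row per library: structural, behavioral, and contextual signals together.
    verdict = "ARCHIVE CANDIDATE" if dormant and not must_keep else ("REVIEW" if dormant else "ACTIVE")
    print(f"{library}: {inv['file_count']} files, {views} views / {edits} edits in 90 days, "
          f"owner={ctx.get('owner', 'unknown')}, {verdict}")
```

Even a rough join like this turns three separate exports into the evidence format described above: structure, behavior, and context for the same library, side by side.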
A library might look pristine structurally. But if your logs show almost no one opens it, that’s a behavioral red flag. At the same time, don’t rush to archive it if it carries contextual weight—maybe it houses your contracts archive that legally must be preserved. By layering the evidence, you avoid both overreacting to noise and ignoring quiet-but-critical content. Use your platform’s telemetry and logs wherever possible. That might mean pulling audit, usage, or activity reports in Microsoft 365, or equivalent data in your environment. The point isn’t the specific tool—it’s collecting the behavior data. And when you present your findings, link the evidence directly to how it affects real work. A dormant library is more than just wasted storage; it’s clutter that slows the people who are trying to find something else. The other value in this layered model is communication. Executives often trust architectural diagrams because they look complete. But when you can show structure, behavior, and context side by side, blind spots become impossible to ignore. A report that says “this site has 30,000 files, 95% of which haven’t been touched in three years, and a business owner who admits it no longer supports operations” makes a stronger case than any map alone. Once you frame your assessment in these layers, you’re no longer maintaining the illusion that an organized system equals a healthy one. You see the ecosystem for what it is—what’s being used, what isn’t, and what still matters even if it’s silent. That clarity is the difference between keeping a stagnant archive and running a system that actually supports work. And with that understanding, you’re ready for the next question: out of everything you’ve cataloged, which of it really deserves to be there, and which of it is just background noise burying the valuable content?Separating Signal from Noise: Content That MattersIf you look closely across a tenant, the raw volume of content can feel overwhelming. And that’s where the next challenge comes into focus: distinguishing between files that actually support work and files that only create noise. This is about separating the signal—the content people count on daily—from everything else that clutters the system. Here’s the first problem: storage numbers are misleading. Executives see repositories expanding in the terabytes and assume this growth reflects higher productivity or retained knowledge. But in most cases, it’s simply accumulation. Files get copied over during migrations, duplicates pile up, and outdated material lingers with no review. Measuring volume alone doesn’t reveal value. A file isn’t valuable because it exists. It’s valuable because it’s used when someone needs it. That’s why usage-based reporting should always sit at the center of content assessment. Instead of focusing on how many documents you have, start tracking which items are actually touched. Metrics like file views, edits, shares, and access logs give you a living picture of activity. Look at Microsoft 365’s built-in reporting: which libraries are drawing daily traffic, which documents are routinely opened in Teams, and which sites go silent. Activity data exposes the real divide—files connected to business processes versus files coasting in the background. We’ve seen organizations discover this gap in hard ways. After major migrations, some teams find a significant portion of their files have gone untouched for years. All the effort spent on preserving and moving them added no business value. 
Worse, the clutter buries relevant material, forcing users to dig through irrelevant search results or re-create documents they couldn’t find. Migrating without first challenging the usefulness of content leads to huge amounts of dead weight in the new system. So what can you do about it? Start small with practical steps. Generate a last-accessed report across a set of sites or libraries. Define a reasonable review threshold that matches your organization’s governance policy—for example, files untouched after a certain number of years. Tag that material for review. From there, move confirmed stale files into a dedicated archive tier where they’re still retrievable but don’t dominate search. This isn’t deletion first—it’s about segmenting so active content isn’t buried beneath inactive clutter. At the same time, flip your focus toward the busiest areas. High-activity libraries reveal where your energy should go. If multiple teams open a library every week, that’s a strong signal it deserves extra investment. Add clearer metadata, apply stronger naming standards, or build out filters to make results faster. Prioritize tuning the spaces people actually use, rather than spreading effort evenly across dormant and active repositories. When you take this two-pronged approach—archiving stale content while improving high-use areas—the system itself starts to feel lighter. Users stop wading through irrelevant results, navigation gets simpler, and confidence in search goes up. Even without changing any technical settings, the everyday experience improves because the noise is filtered out before people ever run a query. It’s worth noting that this kind of cleanup often delivers more immediate benefit than adding advanced tooling on top. Before investing in complex custom search solutions or integrations, try validating whether content hygiene unlocks faster wins. Run improvements in your most active libraries first and measure whether findability improves. If users instantly feel less friction, you’ve saved both budget and frustration by focusing effort where it counts. The cost of ignoring digital clutter isn’t just wasted space. Each unused file actively interferes—pushing important documents deeper in rankings, making it hard to spot the latest version, and prompting people to duplicate instead of reusing. Every irrelevant file separates your users from the content that actually drives outcomes. The losses compound quietly but daily. Once you start filtering for signal over noise, the narrative of “value” in your system changes. You stop asking how much content you’ve stored and start asking what content is advancing current work. That pivot resets the culture around knowledge management and forces governance efforts into alignment with what employees truly use. And this naturally raises another layer of questions. If we can now see which content is alive versus which is idle, why do users still struggle to reach the important files they need? The files may exist and the volume may be balanced, but something in the system design may still be steering people away from the right content. That’s the next source of friction to unpack.Tracing User Behavior to Find Gaps in Your SystemContent problems usually don’t start with lazy users. They start with a system that makes normal work harder than it should be. When people can’t get quick access to the files they need, they adapt. 
And those adaptations—duplicating documents, recreating forms, or bypassing “official” libraries—are usually signs of friction built into the design. That’s why tracing behavior is so important. Clean diagrams may look reassuring, but usage trails and search logs uncover the real story of how people work around the system. SharePoint searches show you the actual words users type in—often very different from the technical labels assigned by IT. Teams metrics show which channels act as the hub of activity, and which areas sit unused. Even navigation logs reveal where people loop back repeatedly, signaling a dead end. Each of these signals surfaces breakdowns that no map is designed to capture. Here’s the catch: in many cases, the “lost” files do exist. They’re stored in the right library, tagged with metadata, and linked in a navigation menu. But when the way someone searches doesn’t match the way it was tagged, the file may as well be invisible. The gap isn’t the absence of content; it’s the disconnect between user intent and system design. That’s the foundation of ongoing complaints about findability. A common scenario: a team needs the company’s budget template for last quarter. The finance department has stored it in SharePoint, inside a library under a folder named “Planning.” The team searches “budget template,” but the official version ranks low in the results. Frustrated, they reuse last year’s copy and modify it. Soon, multiple versions circulate across Teams, each slightly different. Before long, users don’t trust search at all, because they’re never sure which version is current. You can often find this pattern in your own tenant search logs. Look for frequent queries that show up repeatedly but generate low clicks or multiple attempts. This reveals where intent isn’t connecting with the surfaced results. A finance user searching “expense claims” may miss the file titled “reimbursement forms.” The need is real. The content exists. The bridge fails because the language doesn’t align. A practical way to get visibility here is straightforward. Export your top search queries for a 30-day window. Identify queries with low result clicks or many repeated searches. Then, map those queries to the files or libraries that should satisfy them. When the results aren’t matching the expectation, you’ve found one of your clearest gap zones. Behavioral data doesn’t stop at search. Navigation traces often show users drilling into multiple layers of folders, backing out, and trying again before quitting altogether. That isn’t random behavior—it’s the digital equivalent of pulling drawers open and finding nothing useful. Each abandoned query or circular navigation flow is evidence of a system that isn’t speaking the user’s language. Here’s where governance alone can miss the point. You can enforce rigid folder structures, metadata rules, and naming conventions, but if those conventions don’t match how people think about their work, the system will keep failing. Clean frameworks matter, but they only solve half the problem. The rest is acknowledging the human side of the interaction. This is why logs should be complemented with direct input from users. Run a short survey asking people how they search for content and what keywords they typically use. Or hold a short round of interviews with frequent contributors from different departments. Pair their language with the system’s metadata labels, and you’ll immediately spot where the gaps are widest. 
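As a rough illustration of that query-to-label comparison, here's a small Python sketch. It assumes a CSV export of 30 days of search queries with query text, search counts, and result clicks; those column names are hypothetical, so map them to whatever your search reporting actually provides.

```python
import csv
from collections import defaultdict

SEARCH_EXPORT = "search_queries_30d.csv"  # hypothetical export: query, searches, clicks

# Official content titles mapped to the words users actually type (gathered from surveys or interviews).
content_labels = {
    "reimbursement forms": ["expense claims", "expenses", "claim form"],
    "budget template": ["budget template", "quarterly budget", "planning template"],
}

gap_queries = defaultdict(int)

with open(SEARCH_EXPORT, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        searches = int(row["searches"])
        clicks = int(row["clicks"])
        # Frequent queries that rarely get clicked are the clearest gap zones.
        if searches >= 10 and clicks / searches < 0.2:
            gap_queries[row["query"].lower()] += searches

for query, volume in sorted(gap_queries.items(), key=lambda item: -item[1]):
    # Does an official label or one of its known user phrasings already cover this query?
    covered = any(query in synonyms or any(s in query for s in synonyms)
                  for synonyms in content_labels.values())
    advice = "synonym exists, check ranking" if covered else "no matching label, add synonym or retitle"
    print(f"{query!r}: {volume} searches with low clicks ({advice})")
```

The script itself matters less than the habit it encodes: putting the words people type next to the labels the content carries, so the widest gaps surface on their own.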
Sometimes the fix is as simple as updating a title or adding a synonym. Other times, it requires rethinking how certain libraries are structured altogether. When you combine these insights—the signals from logs with the words from users—you build a clear picture of friction. You can highlight areas where duplication happens, where low-engagement queries point to misaligned metadata, and where navigation dead-ends frustrate staff. More importantly, you produce evidence that helps prioritize fixes. Instead of vague complaints about “search not working,” you can point to exact problem zones and propose targeted adjustments. And that’s the real payoff of tracing user behavior. You stop treating frustration as noise and start treating it as diagnostic data. Every abandoned search, duplicate file, or repeated query is a marker showing where the system is out of sync. Capturing and analyzing those markers sets up the critical next stage—turning this diagnosis into something leaders can act on. Because once you know where the gaps are, the question becomes: how do you communicate those findings in a form that drives real change?From Audit to Action: Building the Report That Actually WorksOnce you’ve gathered the assessment evidence and uncovered the gaps, the next challenge is packaging it into something leaders can actually use. This is where “From Audit to Action: Building the Report That Actually Works” comes in. A stack of raw data or a giant slide deck won’t drive decisions. What leadership expects is a clear, structured roadmap that explains the current state, what’s broken, and how to fix it in a way that supports business priorities. That’s the real dividing line between an assessment that gets shelved and one that leads to lasting change. Numbers alone are like a scan without a diagnosis—they may be accurate, but without interpretation they don’t tell anyone what to do. Translation matters. The purpose of your findings isn’t just to prove you collected data. It’s to connect the evidence to actions the business understands and can prioritize. One of the most common mistakes is overloading executives with dashboards. You might feel proud of the search query counts, storage graphs, and access charts, but from the executive side, it quickly blends into noise. What leaders need is a story: here’s the situation, here’s the cost of leaving it as-is, and here’s the opportunity if we act. Everything in your report should serve that narrative. So what does that look like in practice? A useful report should have a repeatable structure you can follow. A simple template might include: a one-page executive summary, a short list of the top pain points with their business impact, a section of quick wins that demonstrate momentum, medium-term projects with defined next steps, long-term governance commitments, and finally, named owners with KPIs. Laying it out this way ensures your audience sees both the problems and the path forward without drowning in details. The content of each section matters too. Quick wins should be tactical fixes that can be delivered almost immediately. Examples include adjusting result sources so key libraries surface first, tuning ranking in Microsoft 365 search, or fixing navigation links to eliminate dead ends. These are changes users notice the next day, and they create goodwill that earns support for the harder projects ahead. Medium-term work usually requires more coordination. 
This might involve reworking metadata frameworks, consolidating inactive sites or Teams channels, or standardizing file naming conventions. These projects demand some resourcing and cross-team agreement, so in your report you should include an estimated effort level, a responsible owner, and a clear acceptance measure that defines when the fix is considered complete. A vague “clean up site sprawl” is far less useful than “consolidate 12 inactive sites into one archive within three months, measured by reduced navigation paths.” Long-term governance commitments address the systemic side. These are things like implementing retention schedules, establishing lifecycle policies, or creating an information architecture review process. None of these complete in a sprint—they require long-term operational discipline. That’s why your report should explicitly recommend naming one accountable owner for governance and setting a regular review cadence, such as quarterly usage analysis. Without a named person and an explicit rhythm, these commitments almost always slip and the clutter creeps back. It’s also worth remembering that not every issue calls for expensive new tools. In practice, small configuration changes—like tuning default ranking or adjusting search scope—can sometimes create significant improvement on their own. Before assuming you need custom solutions, validate changes with A/B testing or gather user feedback. If those quick adjustments resolve the problem, highlight that outcome in your report as a low-cost win. Position custom development or specialized solutions only when the data shows that baseline configuration cannot meet the requirement. And while the instinct is often to treat the report as the finish line, it should be more like a handoff. The report sets the leadership agenda, but it also has to define accountability so improvements stick. That means asking: who reviews usage metrics every quarter? Who validates that metadata policies are being followed? Who ensures archives don’t silently swell back into relevance? Governance doesn’t end with recommendations—it’s about keeping the system aligned long after the initial fixes are implemented. When you follow this structure, your assessment report becomes more than a collection of stats. It shows leadership a direct line from problem to outcome. The ugly dashboards and raw logs get reshaped into a plan with clear priorities, owners, and checkpoints. The result is not just awareness of the cracks in the system but a systematic way to close them and prevent them from reopening. To make this practical, I want to hear from you: if you built your own report today, what’s one quick win you’d include in the “immediate actions” section? Drop your answer in the comments, because hearing what others would prioritize can spark ideas for your next assessment. And with that, we can step back and consider the bigger perspective. You now have a model for turning diagnostic chaos into a roadmap. But reports and diagrams only ever show part of the story. The deeper truth lies in understanding that a clean map can’t fully capture how your organization actually uses information day to day.ConclusionSo what does all this mean for you right now? It means taking the ideas from audit and assessment and testing them in your own environment, even in a small way. Here’s a concrete challenge: pick one SharePoint site or a single Team. Track open and edit counts for a week. 
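If you want a head start on that tally, here's a minimal sketch that counts opens and edits per file from an exported activity log. The file name and the column values are placeholders for whatever audit or usage export you can pull from your own environment.

```python
import csv
from collections import Counter

ACTIVITY_LOG = "site_activity_week.csv"  # placeholder export: file, action (Accessed/Modified), user

opens, edits = Counter(), Counter()

with open(ACTIVITY_LOG, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row["action"] == "Accessed":
            opens[row["file"]] += 1
        elif row["action"] == "Modified":
            edits[row["file"]] += 1

for file in sorted(set(opens) | set(edits)):
    print(f"{file}: {opens[file]} opens, {edits[file]} edits this week")
```

A week of those numbers is usually enough to show which files are alive and which are just taking up space.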
Then report back in the comments with what you discovered—whether files are active, duplicated, or sitting unused. You’ll uncover patterns faster than any diagram can show. Improving findability is never one-and-done. It’s about aligning people, content, and technology over time. Subscribe if you want more practical walkthroughs for assessments like this. Get full access to M365 Show - Microsoft 365 Digital Workplace Daily at m365.show/subscribe
    --------  
    20:25
  • Build Azure Apps WITHOUT Writing Boilerplate
    How many hours have you lost wrestling with boilerplate code just to get an Azure app running? Most developers can point to days spent setting up configs, wiring authentication, or fighting with deployment scripts before writing a single useful line of code. Now, imagine starting with a prompt instead. In this session, I’ll show a short demo where we use GitHub Copilot for Azure to scaffold infrastructure, run a deployment with the Azure Developer CLI, and even fix a runtime error—all live, so you can see exactly how the flow works. Because if setup alone eats most of your time, there’s a bigger problem worth talking about.Why Boilerplate Holds Teams BackThink about the last time you kicked off a new project. The excitement’s there—you’ve got an idea worth testing, you open a fresh repo, and you’re ready to write code that matters. Instead, the day slips away configuring pipelines, naming resources, and fixing some cryptic YAML error. By the time you shut your laptop, you don’t have a working feature—you have a folder structure and a deployment file. It’s not nothing, but it doesn’t feel like progress either. In many projects, a surprisingly large portion of that early effort goes into repetitive setup work. You’re filling in connection strings, creating service principals, deciding on arbitrary resource names, copying secrets from one place to another, or hunting down which flag controls authentication. None of it is technically impressive. It’s repeatable scaffolding we’ve all done before, and yet it eats up cycles every time because the details shift just enough to demand attention. One project asks for DNS, another for networking, the next for managed identity. The variations keep engineers stuck in setup mode longer than they expected. What makes this drag heavy isn’t just the mechanics—it’s the effect it has on teams. When the first demo rolls around and there’s no visible feature to show, leaders start asking hard questions, and developers feel the pressure of spending “real” effort on things nobody outside engineering will notice. Teams often report that these early sprints feel like treading water, with momentum stalling before it really begins. In a startup, that can mean chasing down a misconfigured firewall instead of iterating on the product’s value. In larger teams, it shows up as week-long delays before even a basic “Hello World” can be deployed. The cost isn’t just lost time—it’s morale and missed opportunity. Here’s the good news: these barriers are exactly the kinds of steps that can be automated away. And that’s where new tools start to reshape the equation. Instead of treating boilerplate as unavoidable, what if the configuration, resource wiring, and secrets management could be scaffolded for you, leaving more space for real innovation? Here’s how Copilot and azd attack exactly those setup steps—so you don’t repeat the same manual work every time.Copilot as Your Cloud Pair ProgrammerThat’s where GitHub Copilot for Azure comes in—a kind of “cloud pair programmer” sitting alongside you in VS Code. Instead of searching for boilerplate templates or piecing together snippets from old repos, you describe what you want in natural language, and Copilot suggests the scaffolding to get you started. The first time you see it, it feels less like autocomplete and more like a shift in how infrastructure gets shaped from the ground up. Here’s what that means. 
Copilot for Azure isn’t just surfacing random snippets—it’s generating infrastructure-as-code artifacts, often in Bicep or ARM format, that match common Azure deployment patterns. Think of it as a starting point you can iterate on, not a finished production blueprint. For example, say you type: “create a Python web app using Azure Functions with a SQL backend.” In seconds, files appear in your project that define a Function App, create the hosting plan, provision a SQL Database with firewall rules, and insert connection strings. That scaffolding might normally take hours or days for someone to build manually, but here it shows up almost instantly. This is the moment where the script should pause for a live demo. Show the screen in VS Code as you type in that prompt. Let Copilot generate the resources, and then reveal the resulting file list—FunctionApp.bicep, sqlDatabase.bicep, maybe a parameters.json. Open one of them and point out a key section, like how the Function App references the database connection string. Briefly explain why that wiring matters—because it’s the difference between a project that’s deployable and a project that’s just “half-built.” Showing the audience these files on screen anchors the claim and lets them judge for themselves how useful the output really is. Now, it’s important to frame this carefully. Copilot is not “understanding” your project the way a human architect would. What it’s doing is using AI models trained on a mix of open code and Azure-specific grounding so it can map your natural language request to familiar patterns. When you ask for a web app with a SQL backend, the system recognizes the elements typically needed—App Service or Function App, a SQL Database, secure connection strings, firewall configs—and stitches them together into templates. There’s no mystery, just a lot of trained pattern recognition that speeds up the scaffolding process. Developers might assume that AI output is always half-correct and a pain to clean up. And with generic code suggestions, that often rings true. But here you’re starting from infrastructure definitions that are aligned with how Azure resources are actually expected to fit together. Do you need to review them? Absolutely. You’ll almost always adjust naming conventions, check security configurations, and make sure they comply with your org’s standards. Copilot speeds up scaffolding—it doesn’t remove the responsibility of production-readiness. Think of it as knocking down the blank-page barrier, not signing off your final IaC. This also changes team dynamics. Instead of junior developers spending their first sprint wrestling with YAML errors or scouring docs for the right resource ID format, they can begin reviewing generated templates and focusing energy on what matters. Senior engineers, meanwhile, shift from writing boilerplate to reviewing structure and hardening configurations. The net effect is fewer hours wasted on rote setup, more attention given to design and application logic. For teams under pressure to show something running by the next stakeholder demo, that difference is critical. Behind the scenes, Microsoft designed this Azure integration intentionally for enterprise scenarios. It ties into actual Azure resource models and the way the SDKs expect configurations to be defined. When resources appear linked correctly—Key Vault storing secrets, a Function App referencing them, a database wired securely—it’s because Copilot pulls on those structured expectations rather than improvising. 
That grounding is why people call it a pair programmer for the cloud: not perfect, but definitely producing assets you can move forward with. The bottom line? Copilot for Azure gives you scaffolding that’s fast, context-aware, and aligned with real-world patterns. You’ll still want to adjust outputs and validate them—no one should skip that—but you’re several steps ahead of where you’d be starting from scratch. So now you’ve got these generated infrastructure files sitting in your repo, looking like they’re ready to power something real. But that leads to the next question: once the scaffolding exists, how do you actually get it running in Azure without spending another day wrestling with commands and manual setup?From Scaffolding to Deployment with AZDThis is where the Azure Developer CLI, or azd, steps in. Think of it less as just another command-line utility and more as a consistent workflow that bridges your repo and the cloud. Instead of chaining ten commands together or copying values back and forth, azd gives you a single flow for creating an environment, provisioning resources, and deploying your application. It doesn’t remove every decision, but it makes the essential path something predictable—and repeatable—so you’re not reinventing it every project. One key clarification: azd doesn’t magically “understand” your app structure out of the box. It works with configuration files in your repo or prompts you for details when they’re missing. That means your project layout and azd’s environment files work together to shape what gets deployed. In practice, this design keeps it transparent—you can always open the config to see exactly what’s being provisioned, rather than trusting something hidden behind an AI suggestion. Let’s compare the before and after. Traditionally you’d push infrastructure templates, wait, then spend half the afternoon in the Azure Portal fixing what didn’t connect correctly. Each missing connection string or misconfigured role sent you bouncing between documentation, CLI commands, and long resource JSON files. With azd, the workflow is tighter: - Provision resources as a group. - Wire up secrets and environment variables automatically. - Deploy your app code directly against that environment. That cuts most of the overhead out of the loop. Instead of spending your energy on plumbing, you’re watching the app take shape in cloud resources with less handholding. This is a perfect spot to show the tool in action. On-screen in your terminal, run through a short session: azd init. azd provision. azd deploy. Narrate as you go—first command sets up the environment, second provisions the resources, third deploys both infrastructure and app code together. Let the audience see the progress output and the final “App deployed successfully” message appear, so they can judge exactly what azd does instead of taking it on faith. That moment validates the workflow and gives them something concrete to try on their own. The difference is immediate for small teams. A startup trying to secure funding can stand up a working demo in a day instead of telling investors it’ll be ready “next week.” Larger teams see the value in onboarding too. When a new developer joins, the instructions aren’t “here’s three pages of setup steps”—it’s “clone the repo, run azd, and start coding.” That predictability lowers the barrier both for individuals and for teams with shifting contributors. Of course, there are still times you’ll adjust what azd provisioned. 
Maybe your org has naming rules, maybe you need custom networking. That’s expected. But the scaffolding and first deployment are no longer blockers—they’re the baseline you refine instead of hurdles you fight through every time. In that sense, azd speeds up getting to the “real” engineering work without skipping the required steps. The experience of seeing your application live so quickly changes how projects feel. Instead of calculating buffer time just to prepare a demo environment, you can focus on what your app actually does. The combination of Copilot scaffolding code and azd deploying it through a clean workflow removes the heavy ceremony from getting started. But deployment is only half the story. Once your app is live in the cloud, the challenges shift. Something will eventually break, whether it’s a timeout, a missing secret, or misaligned scaling rules. The real test isn’t just spinning up an environment—it’s how quickly you can understand and fix issues when they surface. That’s where the next set of tools comes into play.AI-Powered Debugging and Intelligent DiagnosticsWhen your app is finally running in Azure, the real test begins—something unexpected breaks. AI-powered debugging and intelligent diagnostics are designed to help in those exact moments. Cloud-native troubleshooting isn’t like fixing a bug on your laptop. Instead of one runtime under your control, the problem could sit anywhere across distributed services—an API call here, a database request there, a firewall blocking traffic in between. The result is often a jumble of error messages that feel unhelpful without context, leaving developers staring at logs and trying to piece together a bigger picture. The challenge is less about finding “the” error and more about tracing how small misconfigurations ripple across services. One weak link, like a mismatched authentication token or a missing environment variable, can appear as a vague timeout or a generic connection failure. Traditionally, you’d field these issues by combing through Application Insights and Azure Monitor, then manually cross-referencing traces to form a hypothesis—time-consuming, often frustrating work. This is where AI can assist by narrowing the search space. Copilot doesn’t magically solve problems, but it can interpret logs and suggest plausible diagnostic next steps. Because it uses the context of code and error messages in your editor, it surfaces guidance that feels closer to what you might try anyway—just faster. To make this meaningful, let’s walk through an example live. Here’s the scenario: your app just failed with a database connection error. On screen, we’ll show the error snippet: “SQL connection failed. Client unable to establish connection.” Normally you’d start hunting through firewall rules, checking connection strings, or questioning whether the database even deployed properly. Instead, in VS Code, highlight the log, call up Copilot, and type a prompt: “Why is this error happening when connecting to my Azure SQL Database?” Within moments, Copilot suggests that the failure may be due to firewall rules not allowing traffic from the hosting environment, and also highlights that the connection string in configuration might not be using the correct authentication type. Alongside that, it proposes a corrected connection string example. Now, apply that change in your configuration file. Walk the audience through replacing the placeholder string with the new suggestion. 
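For anyone following along who wants to see what that kind of change can look like, here's a hedged Python sketch of a before-and-after. The server, database, and driver names are placeholders, the exact ODBC keywords depend on the driver installed in your environment, and this is an illustration of moving from password authentication to managed identity rather than Copilot's literal output.

```python
import pyodbc

# Before (placeholder values): SQL authentication with a password copied into configuration.
old_conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-db>;Uid=appuser;Pwd=<secret>;Encrypt=yes;"
)

# After (placeholder values): managed identity, so no password lives in configuration.
# 'Authentication=ActiveDirectoryMsi' assumes the app runs in Azure with a managed identity
# that has been granted database access; verify the keyword against your driver's documentation.
new_conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-db>;Authentication=ActiveDirectoryMsi;Encrypt=yes;"
)

def check_connection(conn_str: str) -> bool:
    """Run a trivial query so the change can be validated in staging before rollout."""
    try:
        conn = pyodbc.connect(conn_str, timeout=10)
        conn.cursor().execute("SELECT 1")
        conn.close()
        return True
    except pyodbc.Error as err:
        print(f"Connection failed: {err}")
        return False

if __name__ == "__main__":
    check_connection(new_conn_str)
```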
Reinforce the safe practice here: “Copilot’s answer looks correct, but before we assume it’s fixed, we’ll test this in staging. You should always validate suggestions in a non-production environment before rolling them out widely.” Then redeploy or restart the app in staging to check if the connection holds. This on-screen flow shows the AI providing value—not by replacing engineering judgment, but by giving you a concrete lead within minutes instead of hours of log hunting. Paired with telemetry from Application Insights or Azure Monitor, this process gets even more useful. Those services already surface traces, metrics, and failure signals, but it’s easy to drown in the detail. By copying a snippet of trace data into a Copilot prompt, you can anchor the AI’s suggestions around your actual telemetry. Instead of scrolling through dozens of graphs, you get an interpretation: “These failures occur when requests exceed the database’s DTU allocation; check whether auto-scaling rules match expected traffic.” That doesn’t replace the observability platform—it frames the data into an investigative next step you can act on. The bigger win is in how it reframes the rhythm of debugging. Instead of losing a full afternoon parsing repetitive logs, you cycle faster between cause and hypothesis. You’re still doing the work, but with stronger directional guidance. That difference can pull a developer out of the frustration loop and restore momentum. Teams often underestimate the morale cost of debugging sessions that feel endless. With AI involved, blockers don’t linger nearly as long, and engineers spend more of their energy on meaningful problem solving. And when developers free up that energy, it shifts where the attention goes. Less time spelunking in log files means more time improving database models, refining APIs, or making user flows smoother. That’s work with visible impact, not invisible firefighting. AI-powered diagnostics won’t eliminate debugging, but they shrink its footprint. Problems still surface, no question, but they stop dominating project schedules the way they often do now. The takeaway is straightforward: Copilot’s debugging support creates faster hypothesis generation, shorter downtime, and fewer hours lost to repetitive troubleshooting. It’s not a guarantee the first suggestion will always be right, but it gives you clarity sooner, which matters when projects are pressed for time. With setup, deployment, and diagnostics all seeing efficiency gains, the natural question becomes: what happens when these cumulative improvements start to reshape the pace at which teams can actually deliver?The Business Payoff: From Slow Starts to Fast LaunchesThe business payoff comes into focus when you look at how these tools compress the early friction of a project. Teams frequently report that when they pair AI-driven scaffolding with azd-powered deployments, they see faster initial launches and earlier stakeholder demos. The real value isn’t just about moving quickly—it’s about showing progress at the stage when momentum matters most. Setup tasks have a way of consuming timelines no matter how strong the idea or team is. Greenfield efforts, modernization projects, or even pilot apps often run into the same blocker: configuring environments, reconciling dependencies, and fixing pipeline errors that only emerge after hours of trial and error. While engineers worry about provisioning and authentication, leadership sees stalled velocity. 
The absence of visible features doesn’t just frustrate developers—it delays when business value is delivered. That lag creates risk, because stakeholders measure outcomes in terms of what can be demonstrated, not in terms of background technical prep. This contrast becomes clear when you think about it in practical terms. Team A spends their sprint untangling configs and environment setup. Team B, using scaffolded infrastructure plus azd to deploy, puts an early demo in front of leadership. Stakeholders don’t need to know the details—they see one team producing forward motion and another explaining delays. The upside to shipping something earlier is obvious: feedback comes sooner, learning happens earlier, and developers are less likely to sit blocked waiting on plumbing to resolve before building features. That advantage stacks over time. By removing setup as a recurring obstacle, projects shift their center of gravity toward building value instead of fighting scaffolding. More of the team’s focus lands on the product—tightening user flows, improving APIs, or experimenting with features—rather than copying YAML or checking secrets into the right vault. When early milestones show concrete progress, leadership’s questions shift from “when will something run?” to “what can we add next?” That change in tone boosts morale as much as it accelerates delivery. It also transforms how teams work together. Without constant bottlenecks at setup, collaboration feels smoother. Developers can work in parallel because the environment is provisioned faster and more consistently. You don’t see as much time lost to blocked tasks or handoffs just to diagnose why a pipeline broke. Velocity often increases not by heroes working extra hours, but by fewer people waiting around. In this way, tooling isn’t simply removing hours from the schedule—it’s flattening the bumps that keep a group from hitting stride together. Another benefit is durability. Because the workflows generated by Copilot and azd tie into source control and DevOps pipelines, the project doesn’t rest on brittle, one-off scripts. Instead, deployments become reproducible. Every environment is created in a consistent way, configuration lives in versioned files, and new developers can join without deciphering arcane tribal knowledge. Cleaner pipelines and repeatable deployments reduce long-term maintenance overhead as well as startup pain. That reliability is part of the business case—it keeps velocity predictable instead of dependent on a few specialists. It’s important to frame this realistically. These tools don’t eliminate all complexity, and they won’t guarantee equal results for every team. But even when you account for adjustments—like modifying resource names, tightening security, or handling custom networking—the early blockers that typically delay progress are drastically softened. Some teams have shared that this shift lets them move into meaningful iteration cycles sooner. In our experience, the combination of prompt-driven scaffolding and streamlined deployment changes the pacing of early sprints enough to matter at the business level. If you’re wondering how to put this into action right away, there are three simple steps you could try on your own projects. First, prompt Copilot to generate a starter infrastructure file for an Azure service you already know you need. Second, use azd to run a single environment deploy of that scaffold—just enough to see how the flow works in your repo. 
Third, when something does break, practice pairing your telemetry output with a Copilot prompt to test how the suggestions guide you toward a fix. These aren’t abstract tips; they’re tactical ways to see the workflow for yourself. What stands out is that the payoff isn’t narrowly technical. It’s about unlocking a faster business rhythm—showing stakeholders progress earlier, gathering feedback sooner, and cutting down on developer idle time spent in setup limbo. Even small improvements here compound over the course of a project. The net result is not just projects that launch faster, but projects that grow more confidently because iteration starts earlier. And at this stage, the question isn’t whether scaffolding, deploying, and debugging can be streamlined. You’ve just seen how that works in practice. The next step is recognizing what that unlocks: shifting focus away from overhead and into building the product itself. That’s where the real story closes.ConclusionAt this point, let’s wrap with the key takeaway. The real value here isn’t about writing code faster—it’s about clearing away the drag that slows projects long before features appear. When boilerplate gets handled, progress moves into delivering something visible much sooner. Here’s the practical next step: don’t start your next Azure project from a blank config. Start it with a prompt, scaffold a small sample, then run azd in a non-production environment to see the workflow end to end. Prompt → scaffold → deploy → debug. That’s the flow. If you try it, share one surprising thing Copilot generated for you in the comments—I’d love to hear what shows up. And if this walkthrough was useful, subscribe for more hands-on demos of real-world Azure workflows. Get full access to M365 Show - Microsoft 365 Digital Workplace Daily at m365.show/subscribe
    --------  
    18:56
  • Quantum Code Isn’t Magic—It’s Debuggable
    Quantum computing feels like something only physicists in lab coats deal with, right? But what if I told you that today, from your own laptop, you can actually write code in Q# and send it to a physical quantum computer in the cloud? By the end of this session, you’ll run a simple Q# program locally and submit that same job to a cloud quantum device. Microsoft offers Azure Quantum and the Q# language, and I’ll link the official docs in the description so you have up‑to‑date commands and version details. Debugging won’t feel like magic tricks either—it’s approachable, practical, and grounded in familiar patterns. And once you see how the code is structured, you may find it looks a lot more familiar than you expect.Why Quantum Code Feels FamiliarWhen people first imagine quantum programming, they usually picture dense equations, impenetrable symbols, and pages of math that belong to physicists, not developers. Then you actually open up Q#, and the surprise hits—it doesn’t look foreign. Q# shares programming structures you already know: namespaces, operations, and types. You write functions, declare variables, and pass parameters much like you would in C# or Python. The entry point looks like code, not like physics homework. The comfort, however, hides an important difference. In classical programming, those variables hold integers, strings, or arrays. In Q#, they represent qubits—the smallest units of quantum information. That’s where familiar syntax collides with unfamiliar meaning. You may write something that feels normal on the surface, but the execution has nothing to do with the deterministic flow your past experience has trained you to expect. The easiest way to explain this difference is through a light switch. Traditional code is binary: it’s either fully on or fully off, one or zero. A qubit acts more like a dimmer switch—not locked at one end, but spanning many shades in between. Until you measure it, it lives in a probabilistic blend of outcomes. And when you apply Q# operations, you’re sliding that dimmer back and forth, not just toggling between two extremes. Each operation shifts probability, not certainty, and the way they combine can either reinforce or cancel each other out—much like the way waves interfere. Later, we’ll write a short Q# program so you can actually see this “dimmer” metaphor behave like a coin flip that refuses to fully commit until you measure it. So: syntax is readable; what changes is how you reason about state and measurement. Where classical debugging relies on printing values or tracing execution, quantum debugging faces its own twist—observing qubits collapses them, altering the very thing you’re trying to inspect. A for-loop or a conditional still works structurally, but its content may be evolving qubits in ways you can’t easily watch step by step. This is where developers start to realize the challenge isn’t memorizing a new language—it’s shifting their mental model of what “running” code actually means. That said, the barrier is lower than the hype suggests. You don’t need a physics degree or years of mathematics before you can write something functional. Q# is approachable exactly because it doesn’t bury you in new syntax. You can rely on familiar constructs—functions, operations, variables—and gradually build up the intuition for when the dimmer metaphor applies and when it breaks down. The real learning curve isn’t the grammar of the language, but the reasoning about probabilistic states, measurement, and interference. 
This framing changes how you think about errors too. They don’t come from missing punctuation or mistyped keywords. More often, they come from assumptions—for example, expecting qubits to behave deterministically when they fundamentally don’t. That shift is humbling at first, but it’s also encouraging. The tools to write quantum code are within your reach, even if the behavior behind them requires practice to understand. You can read Q# fluently in its surface form while still building intuition for the underlying mechanics. In practical terms, this means most developers won’t struggle with reading or writing their first quantum operations. The real obstacle shows up before you even get to execution—setting up the tools, simulators, and cloud connections in a way that everything communicates properly. And that setup step is where many people run into the first real friction, long before qubit probabilities enter the picture.Your Quantum Playground: Setting Up Q# and AzureSo before you can experiment with Q# itself, you need a working playground. And in practice, that means setting up your environment with the right tools so your code can actually run, both locally and in the cloud with Azure Quantum. None of the syntax or concepts matter if the tooling refuses to cooperate, so let’s walk through what that setup really looks like. The foundation is Microsoft’s Quantum Development Kit, which installs through the .NET ecosystem. The safest approach is to make sure your .NET SDK is current, then install the QDK itself. I won’t give you version numbers here since they change often—just check the official documentation linked in the description for the exact commands for your operating system. Once installed, you create a new Q# project much like any other .NET project: one command and you’ve got a recognizable file tree ready to work with. From there, the natural choice is Visual Studio Code. You’ll want the Q# extension, which adds syntax highlighting, IntelliSense, and templates so the editor actually understands what you’re writing. Without it, everything looks like raw text and you keep second-guessing your own typing. Installing the extension is straightforward, but one common snag is forgetting to restart VS Code after adding it. That simple oversight leads to lots of “why isn’t this working” moments that fix themselves the second you relaunch the editor. Linking to Azure is the other half of the playground. Running locally is important to learn concepts, but if you want to submit jobs to real quantum hardware, you’ll need an Azure subscription with a Quantum workspace already provisioned. After that, authenticate with the Azure CLI, set your subscription, and point your local project at the workspace. It feels more like configuring a web app than like writing code, but it’s standard cloud plumbing. Again, the documentation in the description covers the exact CLI commands, so you can follow from your machine without worrying that something here is out of date. To make this all easier to digest, think of it like a short spoken checklist. Three things to prepare: one, keep your .NET SDK up to date. Two, install the Quantum Development Kit and add the Q# extension in VS Code. Three, create an Azure subscription with a Quantum workspace, then authenticate in the CLI so your project knows where to send jobs. That’s the big picture you need in your head before worrying about any code. 
For most people, the problems here aren’t exotic—they’re the same kinds of trip-ups you’ve dealt with in other projects. If you see compatibility errors, updating .NET usually fixes it. If VS Code isn’t recognizing your Q# project, restart after installing the extension. If you submit a job and nothing shows up, check that your workspace is actually linked to the project. Those three quick checks solve most of the early pain points. It’s worth stressing that none of this is quantum-specific frustration. It’s the normal environment setup work you’ve done in every language stack you’ve touched, whether setting up APIs or cloud apps. And it’s exactly why the steepest slope at the start isn’t about superposition or entanglement—it’s about making sure the tools talk to one another. Once they do, you’re pressing play on your code like you would anywhere else. To address another common concern—yes, in this video I’ll actually show the exact commands during the demo portion, so you’ll see them typed out step by step. And in the description, you’ll find verified links to Microsoft’s official instructions. That way, when you try it on your own machine, you’re not stuck second‑guessing whether the commands I used are still valid. The payoff here is a workspace that feels immediately comfortable. Your toolchain isn’t exotic—it’s VS Code, .NET, and Azure, all of which you’ve likely used in other contexts. The moment it all clicks together and you get that first job running, the mystique drops away. What you thought were complicated “quantum errors” were really just the same dependency or configuration problems you’ve been solving for years. With the environment in place, the real fun begins. Now that your project is ready to run code both locally and in the cloud, the next logical step is to see what a first quantum program actually looks like.Writing Your First Quantum ProgramSo let’s get practical and talk about writing your very first quantum program in Q#. Think of this as the quantum version of “Hello World”—not text on a screen, but your first interaction with a qubit. In Q#, you don’t greet the world, you initialize and measure quantum state. And in this walkthrough, we’ll actually allocate a qubit, apply a Hadamard gate, measure it, and I’ll show you the run results on both the local simulator and quantum hardware so you can see the difference. The structure of this first Q# program looks surprisingly ordinary. You define an operation—Q#’s equivalent of a function—and from inside it, allocate a qubit. That qubit begins in a known classical state, zero. From there, you call an operation, usually the Hadamard, which places the qubit into a balanced superposition between zero and one. Finally, you measure. That last step collapses the quantum state into a definite classical bit you can return, log, or print. So the “Hello World” flow is simple: allocate, operate, measure. The code is only a few lines long, yet it represents quantum computation in its most distilled form. The measurement step is where most newcomers feel the biggest shift. In classical programming, once you print output, you know exactly what it will be. In quantum computing, a single run gives you either a zero or a one—but never both. Run the program multiple times, and you’ll see a mix of outcomes. That variability isn’t a bug; it is the feature. A single run returns one classical bit. When you repeat the program many times, the collection of results reveals the distribution of probabilities your algorithm is creating. 
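To make that tangible, here's a minimal sketch of the allocate, operate, measure flow. The embedded Q# operation is the standard Hadamard-then-measure coin flip; the Python wrapper assumes the qsharp package from the modern QDK, and its exact entry points change between releases, so check the docs linked in the description before copying the calls verbatim.

```python
from collections import Counter

import qsharp  # assumes the modern QDK's qsharp Python package; interface varies by version

# Allocate a qubit, put it in superposition with a Hadamard, measure, and reset.
qsharp.eval("""
operation FlipCoin() : Result {
    use q = Qubit();   // starts in the known classical state |0>
    H(q);              // the "dimmer switch": an equal blend of 0 and 1
    let r = M(q);      // measurement collapses it to a definite bit
    Reset(q);          // return the qubit to |0> before release
    return r;
}
""")

# One run gives one bit; the distribution only shows up across many shots.
results = qsharp.run("FlipCoin()", shots=1000)
counts = Counter(str(r) for r in results)
print(counts)  # on the simulator, expect roughly a 50/50 split between Zero and One
```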
This is the foundation for reasoning about quantum programs: you don’t judge correctness by one run but by the long-run statistics. An analogy helps here. If you think of the qubit as a coin, when you first allocate it, it always lands on heads. Measuring right away yields a zero every time. Once you apply the Hadamard operation, though, you’ve prepared a fair coin that gives you heads or tails with equal probability. Each individual flip looks unpredictable, but the pattern across many flips settles into the expected balance. And while that might feel frustrating at first, the power of quantum programming comes from your ability to “nudge” those probabilities using different gates—tilting the coin rather than forcing a deterministic number. This is also a point where your instincts as a classical developer push back. In a traditional program, each run of the same function yields the same result. Quantum doesn’t break that expectation; it reframes it. Correctness isn’t about identical outputs but about whether your sequence of operations shapes the probability distribution exactly as anticipated. As a result, your debugging mindset shifts: instead of checking whether one return matches your expectation, you look at the distribution across many runs and check if it aligns with what theory predicts. That’s why the simulator is so useful. Run your Q# program there, and you’ll see clean probabilistic results without real-world noise. When you repeat the same simple program for many iterations, you’ll notice the outcomes spread evenly, just as the math says they should. This makes the simulator your best debugging partner. A concrete tip here: whenever you write a new operation, don’t settle for one result. Run it many times on the simulator so you can validate that the distribution matches your understanding before sending the job to actual hardware. On the simulator, the only randomness comes from the math; on hardware, physical noise and interference complicate that pattern. And this brings up an important practical point. Real quantum devices, even when running this “Hello World” program, won’t always match the simulator perfectly. Hardware might show a subtle bias toward one value simply because of natural error sources. That doesn’t mean your code failed—it highlights the difference between a perfect theoretical model and the messy world of physical qubits. In the upcoming section, I’ll walk through what that means in practice so you can recognize when an odd result is noise versus when it’s a mistake in your program. Even in this tiny program, you can see how quantum work challenges old habits. Measuring isn’t like printing output—it’s an action that changes what you’re measuring. Debugging requires you to think differently, since you can’t just peek at the “state” in the middle of execution without collapsing it. These challenges come into sharp focus once you start thinking about how to find and fix mistakes in this environment. And that brings us directly to the next question every new quantum programmer asks: if you can’t observe variables the way you normally would, how do you actually debug your code?Debugging in a World Where You Can’t PeekIn classical development, debugging usually relies on inspecting state: drop a print statement, pause in a debugger, and examine variables while the program is running. Quantum development removes that safety net. You can’t peek inside a qubit mid-execution without changing it. 
The very act of measurement collapses its state into a definite zero or one. That’s why debugging here takes a different form: instead of direct inspection, you depend on simulation-based checks to gain confidence in what your algorithm is doing. This is exactly where simulators in Q# earn their importance. They aren’t just training wheels; they’re your main environment for reasoning about logic. Simulators give you a controlled version of the system where you run the same operations you would on hardware, but with extra insight. You can analyze how states are prepared, whether probability distributions look correct, and whether your logic is shaping outcomes the way you intended. You don’t read out a qubit like an integer, but by repeating the program many times you can see whether the statistics converge toward the expected pattern. That shift makes debugging less about catching one wrong output, and more about validating trends. A practical workflow is to run your algorithm hundreds or thousands of times in the simulator. If you expected a balanced distribution but the results skew heavily to one side, something in your code isn’t aligning with your intent. Think of it as unit testing, but where the test passes only when the overall distribution of results matches theory. It’s not deterministic checks line by line—it’s statistical reasoning about whether the algorithm behaves as designed. To make this more concrete, here’s a simple triage checklist you can always fall back on when debugging Q#: First, run your algorithm in the simulator with many shots and check whether the distribution lines up with expectations. Second, add assertions or diagnostics in the simulator to confirm that your qubits are being prepared and manipulated into the states you expect. Third, only move to hardware once those statistical checks pass consistently. This gives you a structured process rather than trial-and-error guesswork. Alongside statistical mismatches, there are common mistakes beginners run into often. One example is measuring a qubit too early, which kills interference patterns and ruins the outcome. If you do this, your results flatten into something that looks random when you expected constructive or destructive interference. If the demo includes it, we’ll actually show what that mistake looks like in the output so you can recognize the symptom when it happens to you. Another pitfall is forgetting to properly release qubits at the end of an operation. Q# expects clean allocation and release patterns, and while the runtime helps flag errors, check the official documentation—linked in the description—for the exact requirements. Think of it like leaving open file handles: avoid it early and it saves headaches later. Q# also includes structured tools to confirm program logic. Assertions allow you to check that qubits are in the intended state at specific points, and additional diagnostics can highlight whether probabilities match your expectations before you ever go near hardware. These tools are designed to make debugging a repeatable process rather than guesswork. The idea isn’t to replace careful coding, but to complement it: you construct checkpoints that verify each stage of your algorithm works the way you thought it did. Once those checkpoints pass consistently in simulation, you carry real confidence into hardware runs. The main mindset change is moving away from single-run certainty. In a classical program, if your print statement shows the wrong number, you trace it back and fix it. 
In quantum, a single zero or one tells you nothing, so you widen your perspective. Debugging means asking: does my program produce the right pattern when repeated many times? Does the logic manipulate probabilities the way I predict? That broader view actually makes your algorithm stronger—you’re reasoning about structure and flow, rather than chasing isolated outliers. Over time this stops feeling foreign. The simulator becomes your primary partner, not just in finding mistakes but in validating the architecture of your algorithm. Assertions, diagnostics, and statistical tests supplement your intuition until the process feels structured and systematic. And when you do step onto real hardware, you’ll know that if results drift, it’s likely due to physical noise rather than a flaw in your logic. Which sets up the next stage of the journey: once your algorithm is passing these checks locally, how do you move beyond the simulator and see it run on an actual quantum device sitting in the cloud?From Laptop to Quantum ComputerThe real difference shows up once you take the same Q# project you’ve been running locally and push it through to a quantum device in the cloud. This is the moment where quantum stops being hypothetical and becomes data you can measure from a machine elsewhere in the world. For most developers, that’s the point when “quantum programming” shifts from theory into something tangible you can actually validate. On your side, the process looks familiar. You’re still in Visual Studio Code with the same files and project structure—the only change comes when you decide where to send the job. Instead of targeting the local simulator, you direct execution to Azure Quantum. From there, your code is bundled into a job request and sent to the workspace you’ve already linked. The workspace then takes care of routing the job to the hardware provider you’ve chosen. You don’t rewrite logic or restructure your program—your algorithm stays exactly as it is. The difference is in the backend that receives it. The workflow itself is straightforward enough to describe as a short checklist. Switch your target to Azure Quantum. Submit the job. Open your workspace to check its status. Once the job is complete, download the results to review locally. If you’ve ever deployed code to a cloud resource, the rhythm will feel familiar. You’re not reinventing your process—you’re rerouting where the program runs. Expect differences in how fast things move. Local simulators finish nearly instantly, while jobs sent to actual hardware often enter a shared queue. That means results take longer and aren’t guaranteed on demand. There are also costs and usage quotas to be aware of. Rather than relying on fixed numbers, the best guidance is to check the official documentation for your specific provider—links are in the description. What’s important here is managing expectations: cloud hardware isn’t for every quick test, it’s for validation once you’re confident in your logic. Another adjustment you’ll notice is in the output itself. Simulators return distributions that match the math almost perfectly. Hardware results come back with noise. A balanced Hadamard test, for instance, won’t give you an exact half-and-half split every time. You might see a tilt in one direction or the other simply because the hardware isn’t exempt from imperfections. Rather than interpreting that as a logic bug, it’s better to treat it as measured physical data. 
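Mechanically, that submit-and-wait loop can be as small as a few CLI calls from the project folder. The target and job identifiers below are placeholders, and depending on your QDK version you might submit through the VS Code extension or a Python host instead, so once again the linked documentation is the authority on the exact syntax.

```bash
# Send the program to a provider target in your linked workspace
az quantum job submit \
    --target-id "<provider-target-id>" \
    --job-name "hello-quantum" \
    --shots 100

# Poll until the job leaves the shared queue and finishes
az quantum job show --job-id "<job-id>" --query status

# Download the measurement histogram once the status reports success
az quantum job output --job-id "<job-id>"
```

That downloaded histogram is where the noise shows up: instead of a clean 50/50 split, expect something slightly lopsided.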
The smart approach is to confirm your program’s correctness in the simulator first, then interpret hardware results as an overlay of noise on top of correct behavior. That way, you don’t waste time chasing issues in code when the difference actually reflects hardware limits. The usefulness of this stage isn’t in precision alone—it’s in realism. By submitting jobs to real hardware, you get experience with actual error rates, interference effects, and queue limitations. You see what your algorithm looks like in practice, not just what theory predicts. And you do so without re-architecting your whole project. Adjusting one configuration is enough to move from simulation into the real world, and that sense of continuity makes the process approachable. Think about a simple example like the same coin-flip routine you tried locally. Running it on the simulator gives you a perfectly even distribution across many trials. Running it on hardware is different: you’ll download results that lean slightly one way or the other. It feels less precise, but it’s more instructive. Those results remind you that your algorithm isn’t operating in isolation—it’s interacting with a physical device managed in a lab you’ll never see. The trade-off is speed and cleanliness for authenticity. Not long ago, this type of access wasn’t even on the table. The only way to run quantum programs on hardware involved tightly controlled research environments and limited availability. Today, the difference is striking: you can launch a job from your desktop and retrieve results using the same interfaces you already know from other Azure workflows. The experience brings quantum closer to everyday development practice, where experimenting isn’t reserved for laboratories but happens wherever developers are curious enough to try. Stepping onto hardware for the first time doesn’t make your local simulator obsolete. Instead, it places both tools next to each other: the simulator for debugging and validating distributions, the hardware for confirming physical behavior. Used together, they encourage you to form habits around testing, interpreting, and refining. And that dual view—ideal math balanced against noisy reality—is what prepares you to think about quantum not as a concept but as a working technology. Which brings us to the larger perspective. If you’ve come this far, you’ve seen how approachable the workflow actually is. The local toolchain gets your code running, the simulator helps debug and validate, and submitting to hardware grounds the outcome in physical reality. That progression isn’t abstract—it’s something you can work through now, as a developer at your own machine. And it sets the stage for an important realization about where quantum programming fits today, and how getting hands-on now positions you for what’s coming next.ConclusionQuantum programming isn’t abstract wizardry—it’s code you can write, run, and debug today. The syntax looks familiar, the tooling works inside editors you already use, and the real adjustment comes from how qubits behave, not how the code is written. That makes it practical and approachable, even if you’re not a physicist. Start by installing the Quantum Development Kit, run a simple job on the simulator, and once you trust the results, submit one small job to hardware to see how noise affects outcomes. If you want the exact install and run commands I used, check the description where I’ve linked the official docs and a sample project. 
And if you hit a snag, drop a comment with the CLI error text—I’ll help troubleshoot. If this walkthrough was useful, don’t forget to like and subscribe so you’ll catch future deep dives into quantum development. Get full access to M365 Show - Microsoft 365 Digital Workplace Daily at m365.show/subscribe
    --------  
    19:41
  • The Cloud Promise Is Broken
    You’ve probably heard the promise: move to the cloud and you’ll get speed, savings, and security in one neat package. But here’s the truth—many organizations don’t see all those benefits at the same time. Why? Because the cloud isn’t a destination. It’s a moving target. Services change, pricing shifts, and new features roll out faster than teams can adapt. In this podcast, I’ll explain why the setup phase so often stalls, where responsibility breaks down, and the specific targets you can set this quarter to change that. First: where teams get stuck.Why Cloud Migrations Never Really EndWhen teams finally get workloads running in the cloud, there’s often a sense of relief—like the hard part is behind them. But that perception rarely holds for long. What feels like a completed move often turns out to be more of a starting point, because cloud migrations don’t actually end. They continue to evolve the moment you think you’ve reached the finish line. This is where expectations collide with reality. Cloud marketing often emphasizes immediate wins like lower costs, easy scalability, and faster delivery. The message can make it sound like just getting workloads into Azure is the goal. But in practice, reaching that milestone is only the beginning. Instead of a stable new state, organizations usually encounter a stream of adjustments: reconfiguring services, updating budgets, and fixing issues that only appear once real workloads start running. So why does that finish line keep evaporating? Because the platform itself never stops changing. I’ve seen it happen firsthand. A company completes its migration, the project gets celebrated, and everything seems stable for a short while. Then costs begin climbing in unexpected ways. Security settings don’t align across departments. Teams start spinning up resources outside of governance. And suddenly “migration complete” has shifted into nonstop firefighting. It’s not that the migration failed—it’s that the assumption of closure was misplaced. Part of the challenge is the pace of platform change. Azure evolves frequently, introducing new services, retiring old ones, and updating compliance tools. Those changes can absolutely be an advantage if your teams adapt quickly, but they also guarantee that today’s design can look outdated tomorrow. Every release reopens questions about architecture, cost, and whether your compliance posture is still solid. The bigger issue isn’t Azure itself—it’s mindset. Treating migration as a project with an end date creates false expectations. Projects suggest closure. Cloud platforms don’t really work that way. They behave more like living ecosystems, constantly mutating around whatever you’ve deployed inside them. If all the planning energy goes into “getting to done,” the reality of ongoing change turns into disruption instead of continuous progress. And when organizations treat migration as finished, the default response to problems becomes reactive. Think about costs. Overspending usually gets noticed when the monthly bill shows a surprise spike. Leaders respond by freezing spending and restricting activity, which slows down innovation. Security works the same way—gaps get discovered only during an audit, and fixes become rushed patch jobs under pressure. This reactive loop doesn’t just drain resources—it turns the cloud into an ongoing series of headaches instead of a platform for growth. So the critical shift is in how progress gets measured. 
If you accept that migration never really ends, the question changes from “are we done?” to “how quickly can we adapt?” Success stops being about crossing a finish line and becomes about resilience—making adjustments confidently, learning from monitoring data, and folding updates into normal operations instead of treating them like interruptions. That mindset shift changes how the whole platform feels. Scaling a service isn’t an emergency; it’s an expected rhythm. Cost corrections aren’t punishments; they’re optimization. Compliance updates stop feeling like burdens and become routine. In other words, the cloud doesn’t stop moving—but with the right approach, you move with it instead of against it. Here’s the takeaway: the idea that “done” doesn’t exist isn’t bad news. It’s the foundation for continuous improvement. The teams that get the most out of Azure aren’t the ones who declare victory when workloads land; they’re the ones who embed ongoing adjustments into their posture from the start. And that leads directly to the next challenge. If the cloud never finishes, how do you make use of the information it constantly generates? All that monitoring data, all those dashboards and alerts—what do you actually do with them?The Data Trap: When Collection Becomes BusyworkAnd that brings us to a different kind of problem: the trap of collecting data just for the sake of it. Dashboards often look impressive, loaded with metrics for performance, compliance, and costs. But the critical question isn’t how much data you gather—it’s whether anyone actually does something with it. Collecting metrics might satisfy a checklist, yet unless teams connect those numbers to real decisions, they’re simply maintaining an expensive habit. Guides on cloud adoption almost always recommend gathering everything you can—VM utilization, cross-region latency, security warnings, compliance gaps, and cost dashboards. Following that advice feels safe. Nobody questions the value of “measuring everything.” But once those pipelines fill with numbers, the cracks appear. Reports are produced, circulated, sometimes even discussed—and then nothing changes in the environment they describe. Frequently, teams generate polished weekly or monthly summaries filled with charts and percentages that appear to give insight. A finance lead acknowledges them, an operations manager nods, and then attention shifts to the next meeting. The cycle repeats, but workloads remain inefficient, compliance risks stay unresolved, and costs continue as before. The volume of data grows while impact lags behind. This creates an illusion of progress. A steady stream of dashboards can convince leadership that risks are contained and spending is under control—simply because activity looks like oversight. But monitoring by itself doesn’t equal improvement. Without clear ownership over interpreting the signals and making changes, the information drifts into background noise. Worse, leadership may assume interventions are already happening, when in reality, no action follows. Over time, the fatigue sets in. People stop digging into reports because they know those efforts rarely lead to meaningful outcomes. Dashboards turn into maintenance overhead rather than a tool for improvement. In that environment, opportunities for optimization go unnoticed. Teams may continue spinning up resources or ignoring configuration drift, while surface-level reporting gives the appearance of stability. 
Think of it like a fitness tracker that logs every step, heartbeat, and sleep cycle. The data is there, but if it doesn’t prompt a change in behavior, nothing improves. The same holds for cloud metrics: tracking alone isn’t the point—using what’s tracked to guide decisions is what matters. If you’re already monitoring, the key step is to connect at least one metric directly to a specific action. For example, choose a single measure this week and use it as the trigger for a clear adjustment. Here’s a practical pattern: if your Azure cost dashboard shows a virtual machine running at low utilization every night, schedule automation to shut it down outside business hours. Track the difference in spend over the course of a month. That move transforms passive monitoring into an actual savings mechanism. And importantly, it’s small enough to prove impact without waiting for a big initiative. That’s the reality cloud teams need to accept: the value of monitoring isn’t in the report itself, it’s in the decisions and outcomes it enables. The equation is simple—monitoring plus authority plus follow-through equals improvement. Without that full chain, reporting turns into background noise that consumes effort instead of creating agility. It’s not visibility that matters, but whether visibility leads to action. So the call to action is straightforward: if you’re producing dashboards today, tie one item to one decision this week. Prove value in motion instead of waiting for a sweeping plan. From there, momentum builds—because each quick win justifies investing time in the next. That’s how numbers shift from serving as reminders of missed opportunities to becoming levers for ongoing improvement. But here’s where another friction point emerges. Even in environments where data is abundant and the will to act exists, teams often hit walls. Reports highlight risks, costs, and gaps—but the people asked to fix them don’t always control the budgets, tools, or authority needed to act. And without that alignment, improvement slows to a halt. Which raises the real question: when the data points to a problem, who actually has the power to change it?The Responsibility MirageThat gap between visibility and action is what creates what I call the Responsibility Mirage. Just because a team is officially tagged as “owning” an area doesn’t mean they can actually influence outcomes. On paper, everything looks tidy—roles are assigned, dashboards are running, and reports are delivered. In practice, that ownership often breaks down the moment problems demand resources, budget, or access controls. Here’s how it typically plays out. Leadership declares, “Security belongs to the security team.” Sounds logical enough. But then a compliance alert pops up: a workload isn’t encrypted properly. The security group can see the issue, but they don’t control the budget to enable premium features, and they don’t always have the technical access to apply changes themselves. What happens? They make a slide deck, log the risk, and escalate it upward. The result: documented awareness, but no meaningful action. This is how accountability dead zones form. One team reports the problem but can’t fix it, while the team able to fix it doesn’t feel direct responsibility. The cycle continues, month after month, until things eventually escalate. That pattern can lead to audits, urgent remediation projects, or costly interruptions—but none of it is caused by a lack of data. It’s caused by misaligned authority. 
Handing out titles without enabling execution is like giving someone car keys but never teaching them to drive. That gesture might look like empowerment, but it’s setting them up to fail. The fix isn’t complicated: whenever you assign responsibility, pair it with three things—authority to implement changes, budget to cover them, and a clear service-level expectation on how quickly those changes should happen. In short, design role charters where responsibility equals capability. There’s also an easy way to check for these gaps before they cause trouble. For every area of responsibility, ask three simple questions out loud: Can this team approve the changes that data highlights? Do they have the budget to act promptly? Do they have the technical access to make the changes? If the answer is “no” to any of those, you’ve identified an accountability dead zone. When those gaps persist, issues pile up quietly in the background. Compliance alerts keep recurring because the teams that see them can’t intervene. Cost overruns grow because the people responsible for monitoring don’t have the budget flexibility to optimize. Slowly, what could have been routine fixes turn into larger problems that require executive attention. A minor policy misconfiguration drags on for weeks until an audit forces urgent remediation. A cost trend gets ignored until budget reviews flag it as unsustainable. These outcomes don’t happen because teams are negligent—they happen because responsibility was distributed without matching authority. As that culture takes hold, teams start lowering their expectations. It becomes normal for risks to sit unresolved. It feels routine to surface the same problems in every monthly report. Nobody expects true resolution, just more tracking and logging. That normalization is what traps organizations into cycles of stagnation. Dashboards keep getting updated, reports keep circulating, and yet the environment doesn’t improve in any noticeable way. The real turning point is alignment. When the same team that identifies an issue also has the authority, budget, and mandate to resolve it, continuous improvement becomes possible. Imagine cost optimization where financial accountability includes both spending authority and technical levers like workload rightsizing. Or compliance ownership where the same group that sees policy gaps can enforce changes directly instead of waiting for months of approvals. In those scenarios, problems don’t linger—they get surfaced and corrected as a single process. That alignment breaks the repetition cycle. Problems stop recycling through reports and instead move toward closure. And once teams start experiencing that shift, they build the confidence to tackle improvements proactively rather than reactively. The cloud environment stops being defined by recurring frustrations and begins evolving as intended—through steady, continuous refinement. But alignment alone isn’t the end of the story. Even perfectly structured responsibilities can hit bottlenecks when budgets dry up at crucial moments. Teams may be ready to act, empowered to make changes, and equipped with authority, only to discover the funding to back those changes isn’t there. And when that happens, progress stalls for an entirely different reason.Budget Constraints: The Silent SaboteurEven when teams have clear roles, authority, and processes, there’s another force that undercuts progress: the budget. This is the silent saboteur of continuous improvement. 
On paper, everything looks ready—staff are trained, dashboards run smoothly, responsibilities line up. Then the funding buffer that’s supposed to sustain the next stage evaporates. In many organizations, this doesn’t come from leadership ignoring value. It comes from how the budget is framed at the start of a cloud project. Migration expenses get scoped, approved, and fixed with clear end dates. Moving servers, lifting applications, retiring data centers—that stack of numbers becomes the financial story. What comes after, the ongoing work where optimization and real savings emerge, is treated as optional. And once it’s forced to compete with day-to-day operational budgets, money rarely makes it to the improvement pile. That’s where the slowdown begins. Migration is often seen as the heavy lift. The moment workloads are online, leaders expect spending to stabilize or even slide down. But the cloud doesn’t freeze just because the migration phase ends. Costs continue shifting. Optimization isn’t a one-time box to check—it’s a cycle that starts immediately and continues permanently. If budget planning doesn’t acknowledge that reality, teams watch their bills creep upward, while the very tools and processes designed to curb waste are cut first. What looks like efficiency in trimming those line items instead guarantees higher spend over time. Teams feel this pressure directly. Engineers spot inefficiencies all the time: idle resources running overnight, storage volumes provisioned far beyond what’s needed, virtual machines operating full-time when they’re only required for part of the day. The fixes are straightforward—automation, smarter monitoring, scheduled workload shutdowns—but they require modest investments that suddenly don’t have budget coverage. Leadership expects optimization “later,” in a mythical second phase that rarely gets funded. In the meantime, waste accumulates, and with no capacity to act, skilled engineers become passive observers. I’ve seen this pattern in organizations that migrated workloads cleanly, retiring data centers and hitting performance targets. The technical success was real—users experienced minimal disruption, systems stayed available. Yet once the initial celebration passed, funding for optimization tools was classified as an unnecessary luxury. With no permanent line item for improvement, costs increased steadily. A year later, the same organization was scrambling with urgent reviews, engaging consultants, and patching gaps under pressure. The technical migration wasn’t the problem; the lack of post-migration funding discipline was. Ironically, these decisions often come from the pursuit of savings. Leaders believe trimming optimization budgets protects the bottom line, but the opposite happens. The promise of cost efficiency backfires. The environment drifts toward waste, and by the time intervention arrives, remediation is far more expensive. It’s like buying advanced hardware but refusing to pay for updates. The system still runs, but each missed update compounds the limitations. Over time, you fall behind—not because of the hardware itself, but because of the decision to starve it of upkeep. Cloud expenses also stay less visible than they should. Executives notice when bills spike or when an audit forces a fix, but it’s harder to notice the invisible savings that small, consistent optimizations achieve. Without highlighting those avoided costs, teams lack leverage to justify ongoing budgets. 
The result is a cycle where leadership waits for visible pain before releasing funds, even though small, steady investments would prevent the pain from showing up at all. Standing still in funding isn’t actually holding steady—it’s falling behind. The practical lesson here is simple: treat optimization budgets as permanent, not optional. Just as you wouldn’t classify electricity or software licensing as temporary, ongoing improvement needs a recurring financial line item. A workable pattern to propose is this: commit to a recurring cloud optimization budget that is reviewed quarterly, tied to specific goals, and separated from one-time migration costs. This shifts optimization from a “maybe someday” item into a structural expectation. And within that budget, even small interventions can pay off quickly. Something as simple as automating start and stop schedules for development environments that sit idle outside business hours can yield immediate savings. These aren’t complex projects. They’re proof points that budget directed at optimization translates directly into value. By institutionalizing these types of low-cost actions, teams build credibility that strengthens their case for larger optimizations down the road. Budget decides whether teams are stuck watching problems grow or empowered to resolve them before they escalate. If improvement is treated as an expense to fight for every year, progress will always lag behind. When it’s treated as a permanent requirement of cloud operations, momentum builds. And that’s where the conversation shifts from cost models to mindset. Budget thinking is inseparable from posture—because the way you fund cloud operations reflects whether your organization is prepared to react or ready to improve continuously.The Posture That Creates Continuous ImprovementThat brings us to the core idea: the posture that creates continuous improvement. By posture, I don’t mean a new tool, or a reporting dashboard, or a line drawn on an org chart. I mean the stance an organization takes toward ongoing change in the cloud. It’s about how you position the entire operation—leadership, finance, and engineering—to treat cloud evolution as the default, not the exception. Most environments still run in reactive mode. A cost spike appears, and the reaction is to freeze spending. A compliance gap is discovered during an audit, and remediation is rushed. A performance issue cripples productivity, and operations scrambles with little context. In all these cases, the problem gets handled, but the pattern doesn’t change. The same incidents resurface in different forms, because the underlying stance hasn’t shifted. This is what posture really determines: whether you keep treating problems as interruptions, or redesign the system so change feels expected and manageable. I worked with one organization that flipped this pattern by changing posture entirely. Their monitoring dashboards weren’t just for leadership reports. Every signal on cost, performance, or security was tied directly to action. Take cost inefficiency—it wasn’t logged for later analysis. Instead, the team had already set aside a recurring pool of funds and scheduled space in the roadmap to address it within one to two weeks. The process wasn’t about waiting for budget approval or forming a new project. It was about delivering rapid, predictable optimizations on a fixed cadence. Security alerts followed the same rhythm: each one triggered a structured remediation path that was already resourced. 
The difference wasn’t better technology—it was posture, using metrics as triggers for action instead of as static indicators. So how do you build this kind of posture in practice? There are a few patterns you can adopt right away. Make measurement lead to action—tie each signal to a specific owner and a concrete adjustment. Co-locate budget and authority—make sure the team spotting an issue can also fund and execute its fix. Pre-fund remediation—set aside a small, recurring slice of time and budget to act on issues as soon as they crop up. And plan continuous adoption cycles—treat new cloud services and optimization steps as permanent roadmap items, not optional extras. These aren’t silver bullets, but as habits, they translate visibility into movement instead of noise. To validate whether your posture is working, focus on process-oriented goals instead of chasing hard numbers. One useful aspiration is to shorten the time between detection and remediation. If it used to take months or quarters to close issues, aim for days or weeks. The metric isn’t about reporting a percentage—it’s about confirming a posture shift. When problems move to resolution quickly, without constant escalations, that’s proof your organization has changed how it operates. Now, here’s the proactive versus reactive distinction boiled down. A reactive stance assumes stability should be the norm and only prepares to respond when something breaks. A proactive stance assumes the cloud is always shifting. So it deliberately builds recurring time, budget, and accountability to act on that movement. If your organization embraces that mindset, monitoring becomes forward-looking, and reports stop sitting idle because they feed into systems already designed to execute. To make it concrete: today, pick one monitoring signal, assign a team with both budget and authority, and schedule a short optimization sprint within the next two weeks. That’s how posture turns into immediate, visible improvement. The real strength of posture is that once it changes, the other challenges follow. Data stops piling up in unused reports, because actions are already baked in. Responsibility aligns with authority and budget, closing those accountability dead zones. Ongoing optimization is funded as a given, not something that constantly needs to be re-justified. One change in stance helps all the other moving parts line up. And the shift redefines how teams experience cloud operations. Instead of defense and damage control, they lean into cycles of improvement. Instead of being cornered by audits or budget crises, they meet them with plans already in place. Over time, that steadiness builds confidence—confidence to explore new cloud services, experiment with capabilities, and lead change rather than react to it. What started as a migration project evolves into a discipline that generates lasting value for the business. The point is simple: posture is the leverage point. When you design for change as permanent, everything else begins to align. And that’s what turns cloud from a source of recurring frustration into an engine that builds agility and savings over time.ConclusionThe real shift comes from treating posture as your framework for everything that follows. Think of it as three essentials: make measurement lead to action, align budget with authority, and turn monitoring into change that actually happens. If those three habits guide your cloud operations, you move past reporting problems and start closing them. 
So here’s the challenge—don’t just collect dashboards. Pick one signal, assign a team with the power and budget to act, and close the loop this month. I’d love to hear from you: what’s the one monitoring alert you wish always triggered action in your org? Drop it in the comments. And if this helped sharpen how you think about cloud operations, give it a like and subscribe for more guidance like this. Adopt a posture that treats change as permanent, and continuous improvement as funded, expected work. That simple shift is how momentum starts. Get full access to M365 Show - Microsoft 365 Digital Workplace Daily at m365.show/subscribe
    --------  
    20:56
  • Stop Using Entity Framework Like This
    If you’re using Entity Framework only to mirror your database tables into DTOs, you’re missing most of what it can actually do. That’s like buying an electric car and never driving it—just plugging your phone into the charger. No wonder so many developers end up frustrated, or decide EF is too heavy and switch to a micro-ORM. Here’s the thing: EF works best when you use it to persist meaningful objects instead of treating it as a table-to-class generator. In this podcast, I’ll show you three things: a quick before-and-after refactor, the EF features you should focus on—like navigation properties, owned types, and fluent API—and clear signs that your code smells like a DTO factory. And when we unpack why so many projects fall into this pattern, you’ll see why EF often gets blamed for problems it didn’t actually cause.The Illusion of SimplicityThis is where the illusion of simplicity comes in. At first glance, scaffolding database tables straight into entity classes feels like the fastest way forward. You create a table, EF generates a matching class, and suddenly your `Customer` table looks like a neat `Customer` object in C#. One row equals one object—it feels predictable, even elegant. In many projects I’ve seen, that shortcut is adopted because it looks like the most “practical” way to get started. But here’s the catch: those classes end up acting as little more than DTOs. They hold properties, maybe a navigation property or two, but no meaningful behavior. Things like calculating an order total, validating a business rule, or checking a customer’s eligibility for a discount all get pushed out to controllers, services, or one-off helper utilities. Later I’ll show you how to spot this quickly in your own code—pause and check whether your entities have any methods beyond property getters. If the answer is no, that’s a red flag. The result is a codebase made up of table-shaped classes with no intelligence, while the real business logic gets scattered across layers that were never designed to carry it. I’ve seen teams end up with dozens, even hundreds, of hollow entities shuttled around as storage shells. Over time, it doesn’t feel simple anymore. You add a business rule, and now you’re diffing through service classes and controllers, hoping you don’t break an existing workflow. Queries return data stuffed with unnecessary columns, because the “model” is locked into mirroring the database instead of expressing intent. At that point EF feels bloated, as if you’re dragging along a heavy framework just to do the job a micro-ORM could do in fewer lines of code. And that’s where frustration takes hold—because EF never set out to be just a glorified mapper. Reducing it to that role is like carrying a Swiss Army knife everywhere and only using the toothpick: you bear the weight of the whole tool without ever using what makes it powerful. The mini takeaway is this: the pain doesn’t come from EF being too complex, it comes from using it in a way it wasn’t designed for. Treated as a table copier, EF actively clutters the architecture and creates a false sense of simplicity that later unravels. Treated as a persistence layer for your domain model, EF’s features—like navigation properties, owned types, and the fluent API—start to click into place and actually reduce effort in the long run. But once this illusion sets in, many teams start looking elsewhere for relief. The common story goes: "EF is too heavy. Let’s use something lighter." 
And on paper, the alternative looks straightforward, even appealing.The Micro-ORM MirageA common reaction when EF starts to feel heavy is to reach for a micro-ORM. From experience, this option can feel faster and a lot more transparent for simple querying. Micro-ORMs are often pitched as lean tools: lightweight, minimal overhead, and giving you SQL directly under your control. After dealing with EF’s configuration layers or the way it sometimes returns more columns than you wanted, the promise of small and efficient is hard to ignore. At first glance, the logic seems sound: why use a full framework when you just want quick data access? That appeal fits with how many developers start out. Long before EF, we learned to write straight SQL. Writing a SELECT statement feels intuitive. Plugging that same SQL string into a micro-ORM and binding the result to a plain object feels natural, almost comfortable. The feedback loop is fast—you see the rows, you map them, and nothing unexpected is happening behind the scenes. Performance numbers in basic tests back up the feeling. Queries run quickly, the generated code looks straightforward, and compared to EF’s expression trees and navigation handling, micro-ORMs feel refreshingly direct. It’s no surprise many teams walk away thinking EF is overcomplicated. But the simplicity carries hidden costs that don’t appear right away. EF didn’t accumulate features by mistake. It addresses a set of recurring problems that larger applications inevitably face: managing relationships between entities, handling concurrency issues, keeping schema changes in sync, and tracking object state across a unit of work. Each of these gaps shows up sooner than expected once you move past basic CRUD. With a micro-ORM, you often end up writing your own change tracking, your own mapping conventions, or a collection of repositories filled with boilerplate. In practice, the time saved upfront starts leaking away later when the system evolves. One clear example is working with related entities. In EF, if your domain objects are modeled correctly, saving a parent object with modified children can be handled automatically within a single transaction. With a micro-ORM, you’re usually left orchestrating those inserts, updates, and deletes manually. The same is true with concurrency. EF has built-in mechanisms for detecting and handling conflicting updates. With a micro-ORM, that logic isn’t there unless you write it yourself. Individually, these problems may look like small coding tasks, but across a real-world project, they add up quickly. The perception that EF is inherently harder often comes from using it in a stripped-down way. If your EF entities are just table mirrors, then yes—constructing queries feels unnatural, and LINQ looks verbose compared to a raw SQL string. But the real issue isn’t the tool; it’s that EF is running in table-mapper mode instead of object-persistence mode. In other words, the complexity isn’t EF’s fault, it’s a byproduct of how it’s being applied. Neglect the domain model and EF feels clunky. Shape entities around business behaviors, and suddenly its features stop looking like bloat and start looking like time savers. Here’s a practical rule of thumb from real-world projects: Consider a micro-ORM when you have narrow, read-heavy endpoints and you want fine-grained control of SQL. Otherwise, the maintenance costs of hand-rolled mapping and relationship management usually surface down the line. 
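To put that parent-and-children example into code, here is a small sketch. The context, model, and connection string are hypothetical and kept deliberately bare; the point is the single SaveChanges call at the end, which a micro-ORM would leave you to decompose into hand-written, correctly ordered SQL.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Hypothetical minimal model and context, just enough to show the behavior.
public class Order
{
    public int Id { get; set; }
    public List<OrderLine> Lines { get; set; } = new();
}

public class OrderLine
{
    public int Id { get; set; }
    public string Product { get; set; } = "";
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    // Assumes the SQLite provider package; swap in whatever provider you use.
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlite("Data Source=shop.db");
}

public static class GraphSaveExample
{
    public static async Task UpdateOrderAsync(int orderId)
    {
        using var db = new ShopContext();

        // Load the parent together with its children.
        var order = await db.Orders
            .Include(o => o.Lines)
            .SingleAsync(o => o.Id == orderId);

        // Mutate the graph: bump one child, add another.
        order.Lines.First().Quantity += 1;
        order.Lines.Add(new OrderLine { Product = "USB-C cable", UnitPrice = 9.90m, Quantity = 1 });

        // One call, one transaction: the change tracker works out the
        // required INSERTs and UPDATEs across parent and children.
        await db.SaveChangesAsync();
    }
}
```

Concurrency tokens and schema migrations follow the same pattern: features you configure in EF rather than rebuild by hand.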
Used deliberately, micro-ORMs serve those specialized needs well. Used as a default in complex domains, they almost guarantee you’ll spend effort replicating what EF already solved. Think of it this way: choosing a micro-ORM over EF isn’t wrong, it’s just a choice optimized for specific scenarios. But expect trade-offs. It’s like having only a toaster in the kitchen—perfect when all you ever need is toast, but quickly limiting when someone asks for more. The key point is that micro-ORMs and EF serve different purposes. Micro-ORMs focus on direct query execution. EF, when used properly, anchors itself around object persistence and domain logic. Treating them as interchangeable options leads to frustration because each was built with a different philosophy in mind. And that brings us back to the bigger issue. When developers say they’re fed up with EF, what they often dislike is the way it’s being misused. They see noise and friction, but that noise is created by reducing EF to a table-copying tool. The question is—what does that misuse actually look like in code? Let’s walk through a very common pattern that illustrates exactly how EF gets turned into a DTO factory, and why that creates so many problems later.When EF Becomes a DTO FactoryWhen EF gets reduced to acting like a DTO factory, the problems start to show quickly. Imagine a simple setup with tables for Customers, Orders, and Products. The team scaffolds those into EF entities, names them `Customer`, `Order`, and `Product`, and immediately begins using those classes as if they represent the business. At first, it feels neat and tidy—you query an order, you get an `Order` object. But after a few weeks, those classes are nothing more than property bags. The real rules—like shipping calculations, discounts, or product availability—end up scattered elsewhere in services and controllers. The entity objects remain hollow shells. At this point, it helps to recognize some common symptoms of this “DTO factory” pattern. Keep an ear out for these red flags: your entities only contain primitive properties and no actual methods; your business rules get pulled into services or controllers instead of being expressed in the model; the same logic gets re‑implemented in different places across the codebase; and debugging requires hopping across multiple files to trace how a single feature really works. If any of these signs match your project, pause and note one concrete example—we’ll refer back to it in the demo later. The impact of these patterns is pretty clear when you look at how teams end up working. Business logic that should belong to the entity ends up fragmented. Shipping rules, discount checks, and availability rules might each live in a different service or helper. These fragmented rules look manageable when the project is small, but as the system grows, nobody has a single place to look when they try to understand how it works. The `Customer` and `Order` classes tell you nothing about the business relationships they’re supposed to capture because they’ve been reduced to storage structures. From here, maintainability starts to slide. A bug comes in about shipping calculations. You naturally check the `Customer` class, only to discover it has no behavior at all. You then chase references through billing helpers, shipping calculation services, and controller code. Fixes require interpreting an invisible web of dependencies. 
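In code, that hunt usually starts from something like the sketch below. The names and the rule are invented for illustration, but the shape will be familiar: a hollow entity, and the logic sitting in a service that nothing on the entity points you toward.

```csharp
// Hypothetical "DTO factory" shape: the entity is a property bag with no behavior...
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public bool IsPremium { get; set; }
}

// ...while the business rule lives somewhere the entity never mentions,
// free to be quietly re-implemented by the next service that needs it.
public class ShippingFeeService
{
    public decimal CalculateFee(Customer customer, decimal orderTotal)
    {
        return customer.IsPremium || orderTotal >= 100m ? 0m : 5.95m;
    }
}
```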
Over time, slight differences creep in—two developers might implement the same discount rule in two different ways without realizing it. Those inconsistencies are almost guaranteed when logic isn’t centralized. Testing suffers too; instead of unit testing clear domain behaviors, you have to mock out service networks just to verify rules that should have lived right inside the entity. This structure also fuels the perception that EF itself is at fault. Teams often describe EF as “magical” or unpredictable, wondering why SaveChanges updated fields they thought were untouched, or why related entities loaded differently than expected. In practice, this unpredictability comes from using EF to track hollow objects. When entities are nothing but DTOs, their absence of intent makes EF’s behavior feel arbitrary. It isn’t EF misbehaving, it’s EF being asked to persist structures that never carried the business meaning they needed to. The broader consequence is a codebase stuck in procedural mode. Instead of entities that carry their responsibilities, you get layers of procedural scripts hidden in services that impersonate a domain model. EF merely pushes and pulls these data bags to the database, but offers no leverage because the model itself doesn’t describe the actual domain. It’s not that EF failed—it’s that the model was never allowed to succeed. The good news is that this pattern is not permanent. Refactoring away from EF-as-DTO means rethinking what goes into your entities. Instead of spreading behaviors across multiple services and controllers, you start to treat those objects as the true home for domain rules. The shift is concrete: order totals, eligibility checks, and shipping calculations live alongside the data they depend on. This change consolidates behavior into the model, making it discoverable, testable, and consistent. That naturally raises the big question: how do we move from a library of hollow DTOs to real objects that express business rules, without giving up EF in the process?Transforming into Proper OOP with EFTransforming EF into an object-oriented tool starts by flipping the perspective. Instead of letting a database schema dictate the shape of your code, you treat your objects as the real center of gravity and let EF handle the persistence underneath. That doesn’t mean adding layers of ceremony or reinventing architectures. It simply means designing your entities to describe what the business actually does, while EF works in the background to translate that design into rows and columns. For clarity, here’s the flow I’ll walk through in the demo: first, you’ll see a DTO‑style `Order` entity that only carries primitive properties. Then I’ll show you how the same `Order` looks once behavior is moved inside the object. Finally, we’ll look at how EF’s fluent API can persist that richer object without cluttering the domain class itself. Along the way, I’ll highlight three EF features that make this work: navigation properties, owned or value types, and fluent API configurations. Those are the practical tools that let you separate business intent from storage details. Let’s make it concrete. In the hollow DTO model, an `Order` might have just an `Id`, a `CustomerId`, and a list of line items. All the real thinking—like the total price of the order—is pushed out into a service or utility. But in an object‑oriented approach, the `Order` includes a method like “calculate total,” which sums up the included line items and applies any business rules. 
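Sketched out, that richer `Order` might look like the following. The names and the specific rules are placeholders rather than the demo's exact code, and the mapping class at the end is just one way to keep persistence details out of the entity; you would register it in your DbContext with ApplyConfiguration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public class Order
{
    // EF Core's backing-field convention lets it rehydrate this list
    // even though the public surface is read-only.
    private readonly List<OrderLine> _lines = new();

    public int Id { get; private set; }
    public int CustomerId { get; private set; }
    public IReadOnlyCollection<OrderLine> Lines => _lines;

    public void AddLine(string productName, decimal unitPrice, int quantity)
    {
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException(nameof(quantity));
        _lines.Add(new OrderLine(productName, unitPrice, quantity));
    }

    // The rule lives next to the data it depends on, not in a helper service.
    public decimal CalculateTotal() =>
        _lines.Sum(line => line.UnitPrice * line.Quantity);

    public bool QualifiesForFreeShipping(decimal threshold) =>
        CalculateTotal() >= threshold;
}

public record OrderLine(string ProductName, decimal UnitPrice, int Quantity);

// Storage details stay in a mapping class, out of the domain model.
public class OrderConfiguration : IEntityTypeConfiguration<Order>
{
    public void Configure(EntityTypeBuilder<Order> builder)
    {
        builder.HasKey(o => o.Id);

        // Persist the line items as an owned collection and pin the
        // database precision without touching the Order class itself.
        builder.OwnsMany(o => o.Lines, line =>
        {
            line.Property(l => l.UnitPrice).HasPrecision(18, 2);
        });
    }
}
```

Notice that nothing in the `Order` class mentions the database: keys, ownership, and precision all live in the configuration, which is exactly the separation the fluent API is there to give you.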
Placing that method on the object matters: you remove duplication, you keep the calculation close to the data it depends on, and future developers can discover the logic where they expect it. Instead of guessing which service hides a calculation, they can look at the order itself. Many developers hesitate here, worrying that richer domain objects will be harder to persist. That’s an understandable reaction if you’ve only seen EF used as a table‑to‑class mirroring tool. But persistence complexity is exactly what EF’s modern features are designed to absorb. Navigation properties handle relationships naturally. Owned types let you wrap common concepts like an Address or an Email into value objects without breaking persistence. And when you need precise control, the fluent API lets you define database‑specific rules—like decimal precision—without polluting your domain classes. The complexity doesn’t vanish, but it gets pushed into a clear boundary where EF can manage it directly. The fluent API in particular acts as a clean translator. Your `Order` class can focus entirely on the business—rules for adding products, enforcing a warehouse constraint, exposing a property for free shipping eligibility—while the mapping configuration files quietly describe how those rules translate to the database. This keeps your business model tidy and makes persistence more predictable, because all the storage rules sit in one place instead of leaking across entity code. If we scale the example up, the difference grows more obvious. Say an order has multiple line items, each tied to a product with its own constraints. In a DTO approach, you’d fetch the order and then pull in extra services to stitch everything together before applying rules. In a richer model, that work collapses into the entity itself. You can ask the order for its total, or check if it qualifies for free shipping, and the rules are applied consistently every time. EF persists the relationships behind the scenes, but you stay anchored in business logic rather than plumbing. The benefits cascade outward. Logical duplication fades because rules live in one place. Tests become simpler—no more wiring up half a dozen services to verify that discounts apply correctly. Instead, you test an order directly. Debugging also improves: business rules are discoverable inside the entity where they belong, not scattered across controllers and helpers. EF continues doing what it does best—tracking changes and generating SQL—but now it works in service of a model that actually represents your business. Here’s a small challenge you can try after watching: open one of your existing entities and ask yourself, “Could this responsibility live inside the object?” If the answer is yes, move one small piece of logic—like a calculation or a rule—into the entity and use EF mapping to persist it. That experiment alone can show the difference in clarity. Once you’ve seen how to give entities real behavior, the next natural question is why the shift matters over time. Rewriting classes isn’t free, so let’s look at the longer‑term impact of doing EF in a way that aligns with object‑oriented design.The Long-Term Value of Doing EF RightSo what do you actually gain when you stop treating EF as a DTO copier and start using it to back real objects? The long-term value comes down to three things: cleaner testing, less duplication to maintain, and far clearer code for the next developer who joins the project. 
If we scale the example up, the difference grows more obvious. Say an order has multiple line items, each tied to a product with its own constraints. In a DTO approach, you’d fetch the order and then pull in extra services to stitch everything together before applying rules. In a richer model, that work collapses into the entity itself. You can ask the order for its total, or check if it qualifies for free shipping, and the rules are applied consistently every time. EF persists the relationships behind the scenes, but you stay anchored in business logic rather than plumbing.

The benefits cascade outward. Duplicated logic fades because rules live in one place. Tests become simpler—no more wiring up half a dozen services to verify that discounts apply correctly. Instead, you test an order directly. Debugging also improves: business rules are discoverable inside the entity where they belong, not scattered across controllers and helpers. EF continues doing what it does best—tracking changes and generating SQL—but now it works in service of a model that actually represents your business.

Here’s a small challenge you can try after watching: open one of your existing entities and ask yourself, “Could this responsibility live inside the object?” If the answer is yes, move one small piece of logic—like a calculation or a rule—into the entity and use EF mapping to persist it. That experiment alone can show the difference in clarity.

Once you’ve seen how to give entities real behavior, the next natural question is why the shift matters over time. Rewriting classes isn’t free, so let’s look at the longer‑term impact of doing EF in a way that aligns with object‑oriented design.

The Long-Term Value of Doing EF Right

So what do you actually gain when you stop treating EF as a DTO copier and start using it to back real objects? The long-term value comes down to three things: cleaner testing, less duplication to maintain, and far clearer code for the next developer who joins the project. Those three benefits may not feel dramatic in the short term, but over months and years they shape whether a codebase stays steady or drifts into constant rework.

The first big gain is easier testing. When objects know their own rules, you can test them directly without scaffolding services or mocking dependencies that shouldn’t even exist. An `Order` that calculates its own total can be exercised in isolation, giving you consistent results in small, fast-running tests. Updates or new behaviors are easier to verify because the logic lives exactly where the test points. As projects evolve, this pays off repeatedly—small changes are less risky since testing effort doesn’t balloon with every rule adjustment.

The second benefit is less duplication and less scattered maintenance. In DTO-style systems, one business rule often gets repeated across multiple service methods and controllers. Change a discount formula in one place but forget another, and you’ve created a subtle bug. Centralizing logic inside the object removes that duplication at the source. Here’s a simple check you can try in your own project: when a business rule changes, count how many code files you edit. If the answer is more than one, you’ve likely fallen into duplication. That’s a measurable way to see if technical debt is creeping in.

The third benefit is clarity for onboarding and debugging. When EF is only storing DTO shells, new team members have to hunt through services to discover where rules are hidden. That slows them down. By contrast, when behavior sits in the object itself, the path is obvious. Debugging also shifts from hours of tracing service code to dropping one breakpoint inside the object method that enforces the rule. Before, you crossed multiple files to follow the logic. After, you look in one class and see the rule expressed cleanly. That contrast alone saves an enormous amount of wasted time for any team.

Performance is also tied to how you shape your models. With table clones, EF often drags back entire rows or related entities that you don’t even use. That costs memory and query time, particularly as data grows. But when the model reflects intent, you can project exactly what belongs in scope. Owned types let you model concepts like addresses without clutter, while selective includes load just what’s needed for the behavior at hand. The effect isn’t about micro-benchmarks; it’s the intuition that better-shaped models naturally lead to leaner queries.
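As a small illustration of that idea, here is a hedged sketch of a projection query. `AppDbContext`, `OrderSummary`, and `GetSummariesAsync` are hypothetical names layered on the earlier `Order` sketch, not code from the episode.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Hypothetical context for the sketch; a real one would configure a provider
// and apply the configurations shown earlier.
public class AppDbContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
}

// Read model shaped for the caller, not for the table.
public record OrderSummary(int OrderId, decimal Total);

public static class OrderQueries
{
    // Project only the fields the caller needs instead of loading full entity graphs.
    public static Task<List<OrderSummary>> GetSummariesAsync(AppDbContext db, int customerId) =>
        db.Orders
          .Where(o => o.CustomerId == customerId)
          .Select(o => new OrderSummary(o.Id, o.Lines.Sum(l => l.UnitPrice * l.Quantity)))
          .ToListAsync();
}
```

Because the shape of the result is declared up front, EF Core can translate the whole expression into SQL and return only the projected columns instead of full `Order` rows and their line items.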
None of this guarantees a perfect outcome. But in many long-lived projects, I’ve seen that teams who invest early in placing behavior inside models avoid the slow creep of duplicated rules and fragile service layers. Their tests stay lighter, their change costs stay lower, and onboarding looks more like reading straightforward domain objects than navigating a maze of procedural code. Teams that skipped that step often end up with technical debt that costs more to untangle than the up-front modeling would have. The pattern shows up again and again.

All of this feeds into the bigger picture: proper use of EF doesn’t just clean up the present, it improves how a project survives the future. Rich objects, backed by EF’s persistence features, create models that developers can trust, extend, and understand. That confidence saves teams from the churn of accidental complexity and restores EF to the role it was meant to play. And this leads to the final point. The problem was never that EF itself was too large or too slow—it’s that we often shrink it into something it was never supposed to be.

Conclusion

So here’s where everything comes together. EF works best when you use it to persist meaningful domain objects rather than empty DTO shells. If you reduce it to a table copier, you lose the advantages that make it worth using in the first place. Keep three takeaways in mind: stop relying on EF as a table-to-class generator, put behavior back into your entities, and let EF’s mappings take care of persistence details.

Here’s a small challenge—pick one entity in your project and comment below: “DTO” or “Model,” along with why. And if this kind of practical EF and .NET guidance helps, subscribe for more focused patterns and real-world practices.

Get full access to M365 Show - Microsoft 365 Digital Workplace Daily at m365.show/subscribe

About M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

The M365 Show – Microsoft 365, Azure, Power Platform & Cloud Innovation. Stay ahead in the world of Microsoft 365, Azure, and the Microsoft Cloud. The M365 Show brings you expert insights, real-world use cases, and the latest updates across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, AI, and more. Hosted by industry experts, each episode features actionable tips, best practices, and interviews with Microsoft MVPs, product leaders, and technology innovators. Whether you’re an IT pro, business leader, developer, or data enthusiast, you’ll discover the strategies, trends, and tools you need to boost productivity, secure your environment, and drive digital transformation. Your go-to Microsoft 365 podcast for cloud collaboration, data analytics, and workplace innovation. Tune in, level up, and make the most of everything Microsoft has to offer. Visit M365.show.