
500B Spent on AI in 2025. Can You Spot the Value?
11.12.2025 | 32 min.
Tracy Lee and A.D. Slaton sit down on The Context Window to unpack a wild week in AI, starting with the eye-popping 500 billion dollars spent on AI infrastructure in 2025 and why the Cognizant CEO still says enterprise value is missing. They dig into reports of ChatGPT going “code red” in response to Gemini 3, what that means for OpenAI, and what it means for everyday builders trying to ship real products. Along the way they touch on ByteDance, call out LiveKit as a key piece of infrastructure for voice, video, and physical AI agents, and flag IBM’s move to acquire Confluent as another signal of where data and AI are heading.

What you will learn:
- Why 500B spent on AI infrastructure has not translated into clear enterprise value yet
- What the Cognizant CEO’s comments really signal for teams building AI products
- How Gemini 3’s launch is shaking up the landscape for ChatGPT and OpenAI
- What a “Code Red” moment actually means for developers and companies relying on these platforms
- How LiveKit powers voice, video, and physical AI agents and where it fits in the stack
- Why IBM acquiring Confluent matters for data, streaming, and real-time AI systems
- How to stay grounded and make practical decisions when AI news makes reality feel unstable

0:00 Intro
0:53 Are we overspending on AI infrastructure, and where’s the enterprise value?
2:54 Adoption gap, enablement work, and why 100% AI-generated code is still rare
6:11 High-touch AI training, workshops, and scaling AI practices across teams
8:58 Grok 4.22, AI trading experiments, and quant-style tools for everyone
13:51 OpenAI “Code Red,” rising competition, and what changes for Agile with agents
20:37 ByteDance agentic phone, AR glasses, and AI moving into the physical world
23:20 LiveKit, voice cloning, AI podcasts, and the problem of AI slop
27:00 Thinking machines, social media’s role in AI, and closing reflections

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
A.D. Slaton on LinkedIn: https://www.linkedin.com/in/adslaton/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social

Sponsored by This Dot: https://ai.thisdot.co

How Varlock Fixes .env Vulnerabilities and Secures Your Secrets
10.12.2025 | 40 min.
Environment variables and secrets are usually a mess: out-of-sync .env files, scattered API keys, painful onboarding, and brittle CI configs. In this episode of the Modern Web Podcast, Rob Ocel talks with Varlock co-creators Phil Miller and Theo Ephraim about how Varlock turns .env files into a real schema with types, validation, and documentation, pulls secrets from tools like 1Password and other backends, and centralizes configuration across environments and services. They also dig into protecting secrets in an AI-heavy world by redacting them from logs and responses, preventing accidental leaks from agents, and pushing toward an open env-spec standard so configuration becomes predictable, portable, and actually pleasant to work with.

What you will learn:
- Why traditional .env files and copy-paste workflows break down as teams, services, and environments grow
- How Varlock turns environment variables into a schema with types, validation, documentation, and generated TypeScript
- How to pull secrets from tools like 1Password and other backends without leaving them in plain text or scattering them across dashboards
- How to manage multiple environments such as development, staging, and production from a single, declarative configuration source
- How Varlock helps protect secrets in AI and MCP workflows by redacting them from logs and responses and blocking accidental leaks
- What the env-spec standard is and how a common schema format can make configuration more portable across tools, templates, and platforms

Theo Ephraim on LinkedIn: https://www.linkedin.com/in/theo-ephraim/
Phil Miller on LinkedIn: https://www.linkedin.com/in/themillman/
Rob Ocel on LinkedIn: https://www.linkedin.com/in/robocel/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social

Sponsored by This Dot Labs: https://ai.thisdot.co/
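The "env vars as a typed schema" idea the episode describes can be sketched in plain TypeScript. This is an illustrative toy only, not Varlock's actual API or the env-spec syntax: the `EnvSpec` type and the `loadEnv` and `redacted` helpers are made up for the example.

```typescript
// Toy schema: each variable gets a type, a required flag, docs, and a
// "sensitive" marker so it can be redacted from logs (a theme of the episode).
type EnvSpec = {
  [name: string]: {
    type: "string" | "number" | "boolean";
    required?: boolean;
    sensitive?: boolean;
    doc?: string;
  };
};

const spec: EnvSpec = {
  PORT: { type: "number", required: true, doc: "HTTP port" },
  API_KEY: { type: "string", required: true, sensitive: true, doc: "Upstream API key" },
  DEBUG: { type: "boolean", doc: "Verbose logging" },
};

// Validate and coerce raw string values against the schema.
function loadEnv(spec: EnvSpec, source: Record<string, string | undefined>) {
  const out: Record<string, string | number | boolean> = {};
  for (const [name, rule] of Object.entries(spec)) {
    const raw = source[name];
    if (raw === undefined) {
      if (rule.required) throw new Error(`Missing required env var: ${name}`);
      continue;
    }
    if (rule.type === "number") {
      const n = Number(raw);
      if (Number.isNaN(n)) throw new Error(`${name} must be a number`);
      out[name] = n;
    } else if (rule.type === "boolean") {
      out[name] = raw === "true" || raw === "1";
    } else {
      out[name] = raw;
    }
  }
  return out;
}

// Safe-to-log view: sensitive values are masked.
function redacted(spec: EnvSpec, env: Record<string, unknown>) {
  return Object.fromEntries(
    Object.entries(env).map(([k, v]) => [k, spec[k]?.sensitive ? "<redacted>" : v])
  );
}

const env = loadEnv(spec, { PORT: "3000", API_KEY: "sk-123", DEBUG: "true" });
console.log(env.PORT);            // a real number (3000), not the string "3000"
console.log(redacted(spec, env)); // API_KEY shows as "<redacted>"
```

The point of the pattern is that validation fails loudly at startup (a missing `API_KEY` throws immediately) instead of surfacing as a confusing runtime bug deep in the app.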

The One Mindset That Will 10x Your Dev Career (and Keep You Ahead of AI)
21.10.2025 | 32 min.
Rob Ocel and Danny Thompson go deep on intentionality, the developer “superpower” that can speed up your growth, sharpen your judgment, and keep you from getting automated away in the AI era. Rob unpacks a simple loop (state intent → act → measure → review) with real stories, including the ticket he challenged on day one that saved a team six figures, and the “it seems to work” anti-pattern that shipped a mystery bug. Together they show how being deliberate before you write a line of code changes everything: scoping tickets, estimating work, documenting decisions, reviewing PRs, and speaking up, even as a junior.

What you’ll learn:
- The intentionality loop: how to set a hypothesis, capture outcomes, and improve fast
- The exact moment to ask “Should we even do this ticket?” and how to push back safely
- Why code is the last step: design notes, edge cases, and review context first
- Estimation that actually works: start naive, compare to actuals, iterate to ±10%
- How to avoid DRY misuse, “tragedy of the commons” code reviews, and stealth tech debt
- Where to keep your working notes (GitHub, Notion, SharePoint) so reviewers can follow your logic
- How juniors can question assumptions without blocking the room or their career

Rob Ocel on LinkedIn: https://www.linkedin.com/in/robocel/
Danny Thompson on LinkedIn: https://www.linkedin.com/in/dthompsondev/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social

Sponsored by This Dot Labs: https://ai.thisdot.co/

The Cloud Built AI. Can It Survive What AI Needs Next?
14.10.2025 | 33 min.
On this episode of the Modern Web Podcast, hosts Rob Ocel and Danny Thompson welcome Miles Ward, CTO of SADA, for an in-depth conversation about the intersection of cloud computing and AI. Miles shares his career journey from early days at AWS and Google Cloud to leading SADA through its acquisition by Insight, offering a rare perspective on the evolution of solutions architecture and cloud adoption at scale.

The discussion covers the realities of cloud “repatriation,” why GPUs have shifted some workloads back on-prem or to niche “neo-cloud” providers, and how cloud infrastructure remains the backbone of most AI initiatives. Miles breaks down practical concerns for organizations, from token pricing and GPU costs to scaling AI features without blowing budgets. He also highlights how AI adoption exposes weak organizational habits, why good data and strong processes matter more than hype, and how developers should view AI as intelligence augmentation rather than replacement.

Key Takeaways:
- Miles Ward, former early AWS Solutions Architect, founder of the SA practice at Google Cloud, and now CTO at SADA (acquired by Insight), brings a deep history in scaling infrastructure and AI workloads.
- Cloud repatriation is rare. The main exception is GPUs, where companies may rent from “neo-clouds” like CoreWeave, Crusoe, or Lambda, or occasionally use on-prem for cost and latency reasons, though data-center power constraints make this difficult.
- Cloud remains essential for AI. Successful initiatives depend on cloud primitives like data, orchestration, security, and DevOps. Google’s integrated stack (custom hardware, platforms, and models) streamlines development. The best practice is to build in cloud first, then optimize or shift GPU inference later if needed.
- Costs and readiness are critical. Organizations should measure AI by business outcomes rather than lines of code. Token spending needs calculators, guardrails, and model routing strategies. On-prem comes with hidden costs such as power, networking, and staffing. The real bottleneck for most companies is poor data and weak processes, not model quality.

Miles Ward on LinkedIn: https://www.linkedin.com/in/rishabkumar7/
Rob Ocel on LinkedIn: https://www.linkedin.com/in/robocel/
Danny Thompson on LinkedIn: https://www.linkedin.com/in/dthompsondev/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social

Sponsored by This Dot Labs: https://ai.thisdot.co/
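The “token spending needs calculators and routing strategies” point can be made concrete with a small sketch. The model names and per-million-token prices below are made-up placeholders, not real vendor rates; the `estimateCostUSD` and `pickModel` helpers are hypothetical, shown only to illustrate budgeting and routing.

```typescript
// Hypothetical per-million-token prices -- placeholders, not real rates.
const PRICE_PER_MTOK: Record<string, { input: number; output: number }> = {
  "big-model": { input: 10, output: 30 },
  "small-model": { input: 0.5, output: 1.5 },
};

// Cost of one request, given token counts for prompt and completion.
function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICE_PER_MTOK[model];
  if (!p) throw new Error(`Unknown model: ${model}`);
  return (inputTokens / 1e6) * p.input + (outputTokens / 1e6) * p.output;
}

// Naive routing guardrail: once 80% of the budget is spent,
// fall back to the cheaper model instead of overshooting.
function pickModel(spentSoFarUSD: number, budgetUSD: number): string {
  return spentSoFarUSD < budgetUSD * 0.8 ? "big-model" : "small-model";
}

// 2k input / 500 output tokens per request on the big model:
const perRequest = estimateCostUSD("big-model", 2000, 500);
console.log(perRequest);             // ≈ $0.035 per request
console.log(perRequest * 1_000_000); // ≈ $35,000 at a million requests
```

The arithmetic is the point: a few cents per request looks negligible until it is multiplied by production traffic, which is why per-feature calculators and routing fallbacks matter before launch, not after the bill arrives.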

How NPM Auto-Updates & Post-Install Scripts Could Hijack Your Org
07.10.2025 | 36 min.
In this Modern Web Podcast, Rob Ocel and Danny Thompson break down the recent string of NPM supply chain attacks that have shaken the JavaScript ecosystem. They cover the NX compromise, the phishing campaign that hit libraries like Chalk, and the Shai-Hulud exploit, showing how small changes in dependencies can have massive effects. Along the way, they share practical defenses like using package-lock.json and npm ci, avoiding phishing links, reviewing third-party code, applying least privilege, staging deployments, and maintaining incident response plans. They also highlight vendor interventions such as Vercel blocking malicious deployments and stress why companies must support open source maintainers if the ecosystem is to remain secure.

Key Points from this Episode:
- Lock down installs. Pin versions, commit package-lock.json, use npm ci in CI, and disable scripts in CI (npm config set ignore-scripts true) to neutralize post-install attacks.
- Harden people & permissions. Phishing hygiene (never click through emails), 2FA/hardware keys, least privilege by default, and separate, purpose-scoped publishing accounts.
- Stage & detect early. Canary/staged deploys, feature flags, and tight observability to catch dependency drift, suspicious network egress, or monkey-patched APIs fast.
- Practice incident response. Two-hour containment target: revoke/rotate tokens, reimage affected machines, roll back artifacts, notify vendors, and run a post-mortem playbook.

Rob Ocel on LinkedIn: https://www.linkedin.com/in/robocel/
Danny Thompson on LinkedIn: https://www.linkedin.com/in/dthompsondev/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social

Sponsored by This Dot Labs: https://ai.thisdot.co/
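The “lock down installs” advice can be sketched as a few shell steps. This is a minimal example, assuming a project that already commits its package-lock.json; `ignore-scripts` and `npm ci` are real npm features, but tune the exact setup to your CI system.

```shell
# Project-level .npmrc: never run lifecycle (e.g. post-install) scripts
# automatically. This neutralizes the post-install attack vector discussed
# in the episode; run needed build scripts explicitly and deliberately.
cat > .npmrc <<'EOF'
ignore-scripts=true
EOF

# In CI, install exactly what the lockfile pins. Unlike `npm install`,
# `npm ci` fails if package.json and package-lock.json disagree instead
# of silently rewriting the dependency tree. (Commented out here because
# it needs a real project with a lockfile.)
# npm ci

# Check for known-compromised versions before deploying.
# npm audit --audit-level=high
```

The `.npmrc` lives in the repo, so every contributor and every CI run gets the same hardened behavior without relying on each person remembering a flag.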



Modern Web