PROFESSIONAL TRAINING

Build Better Software Faster!
With AI You Actually Understand!

Practical AI, Scrum, agile delivery, and software development training for professionals who want usable skills, not hype.

Taught by Rod Claar: Certified Scrum Trainer, software development educator, and AI practitioner.

Why This Platform Exists

AI is changing how software gets built—but most education falls into two traps: treating AI like magic, or treating software like theory.

This site is built to bridge that gap. Here, AI is a powerful assistant, not a substitute for thinking. Software development is taught as a craft, not a checklist. Every lesson is grounded in real projects, real teams, and real tradeoffs—so you learn what works in practice, and why.

Who This Is For

If you write, test, or review code…

You want to use AI without sacrificing quality, apply design patterns intentionally, understand algorithms in practical terms, and stay relevant without chasing every new tool.

You'll learn AI-accelerated engineering you can trust.

If you guide teams, products, or architecture…

You want to turn conversations into clear requirements, improve delivery without creating chaos, make better technical decisions, and keep humans firmly in control.

You'll learn AI-enabled leadership with clarity and confidence.

If you're building—or rebuilding—your career…

You want fundamentals that don't expire, learning paths that reduce overwhelm, and real examples that build confidence.

You'll learn the foundations that make everything else easier.

What You'll Learn Here

AI for Software Professionals

Practical workflows, human-in-the-loop development, and responsible use in real systems.

Software Design Patterns

Why patterns exist, when they help, when they hurt—and how AI changes the tradeoffs.

Software Project & Product Management Using Scrum and Agile Practices

Requirements, planning, risk reduction, and delivery—enhanced by AI, not replaced by it.

Modern Development Practices

Testing, refactoring, architecture, and collaboration that improve outcomes.

Learn the Way That Fits You

Choose what fits your schedule and depth:

Free YouTube Lessons — practical, structured, and searchable

On-Demand Courses — deep dives you can take at your own pace

Live Workshops — interactive training with real-time Q&A

Subscriptions — ongoing learning, updates, and live sessions

Start free. Go deeper when you're ready.

Not Sure Where to Start?

Pick a Learning Path

Certified ScrumMaster - A Practical Preparation Path

Start This Path

Certified Scrum Product Owner - From Vision to Value

Start This Path

AI for Scrum Teams - Practical, Responsible Use

Start This Path

AI for Experienced Developers

A guided path to use AI confidently without compromising design, testing, or maintainability.

Start This Path

From Developer to Technical Leader

A practical route from implementation to architecture, decisions, and delivery outcomes.

Start This Path

Software Foundations in the Age of AI

A clear, calm path through fundamentals—so you're not dependent on hype or luck.

Start This Path

How This Is Taught

Clear explanations without jargon

Real systems, not toy examples

Tradeoffs explained, not hidden

AI used transparently

AI prompts displayed and available

No bias for tools or models

All questions answered

Respect for professional judgment

Start Where You Are

You don't need to be an expert.

You don't need to chase every trend.

You just need a clear place to start.


11 May 2026

What Changed in Software Development This Week Because of AI for May 12, 2026


Author: Rod Claar  /  Categories: Free Articles

A lot happened this week. Anthropic gave AI agents the ability to learn from their own mistakes. Microsoft published the largest study of human-AI work patterns to date. OpenAI told the world exactly how it keeps its own coding agent inside safe boundaries. And two major platform announcements landed on the same day — May 11 — that will matter to every software team building on cloud infrastructure.

Below are five stories based on original announcements from the past seven days. No speculation. Just the facts and what they mean for Agile teams.

Anthropic Teaches Its AI Agents to “Dream”


Source: Anthropic — New in Claude Managed Agents · May 6, 2026

On May 6, at its Code with Claude developer conference, Anthropic launched a new feature called dreaming for Claude Managed Agents. The name comes from how it works: like the human brain during sleep, a scheduled background process reviews past agent sessions, finds patterns, and updates the agent’s memory before the next session begins.

This is not science fiction. Dreaming reads prior session transcripts alongside an existing memory store, merges duplicate information, removes stale entries, and surfaces recurring patterns — including repeated mistakes and workflow shortcuts that a single session would never notice on its own. Anthropic calls this preventing “memory rot,” the slow buildup of cluttered or conflicting notes that degrades long-running agents over time.

Developers control how much trust they give the process. Dreaming can update memory automatically, or it can queue changes for a human to review before anything is saved. The original session data is never changed during a dream run, so teams can reject updates they don’t like.
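Anthropic has not published dreaming's internals, but the behavior described above — merge duplicate notes, drop stale ones, and either apply the result or queue it for human review — can be sketched in a few lines. Everything here (the MemoryEntry shape, the staleness cutoff, the session counter) is an assumption for illustration, not Anthropic's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    key: str          # topic the note is about
    note: str         # the remembered content
    last_seen: int    # session index when the note was last confirmed

def dream(memory, current_session, max_age=10, auto_apply=False):
    """Consolidate agent memory the way dreaming is described:
    merge duplicate keys (keeping the freshest note), drop entries
    older than max_age sessions, and either apply the result or
    return it flagged for human review."""
    merged = {}
    for entry in memory:
        kept = merged.get(entry.key)
        if kept is None or entry.last_seen > kept.last_seen:
            merged[entry.key] = entry
    fresh = [e for e in merged.values()
             if current_session - e.last_seen <= max_age]
    proposal = sorted(fresh, key=lambda e: e.key)
    return proposal if auto_apply else ("pending_review", proposal)
```

The `auto_apply` flag mirrors the trust knob Anthropic describes: by default nothing is written until a human accepts the proposal, and the original entries are never mutated.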

Two other features graduated to public beta the same day: outcomes, which lets a separate grader agent score an agent’s output against a written rubric before the agent tries again; and multiagent orchestration, which lets a lead agent break a job into parts and fan those parts out to specialist subagents running in parallel. Netflix is already using the parallel version to process logs from hundreds of build pipelines at once.
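The outcomes feature is, structurally, a grade-then-retry loop: one agent produces a draft, a separate grader scores it against a rubric, and the worker retries with the feedback until the score clears a threshold. A minimal sketch, where `worker` and `grader` are hypothetical stand-ins for the two agents:

```python
def outcomes_loop(worker, grader, task, threshold=0.8, max_attempts=3):
    """One agent does the work; a separate grader scores it against a
    rubric and returns (score, feedback); the worker retries with the
    feedback until the score clears the threshold or attempts run out."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        draft = worker(task, feedback)
        score, feedback = grader(draft)
        if score >= threshold:
            return draft, score, attempt
    return draft, score, max_attempts
```

The rubric lives entirely inside `grader`, which is what makes this pattern a natural home for written acceptance criteria.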

6× Task completion rate increase at Harvey

Harvey, a legal AI company, reported a 6× rise in task completion rates after implementing Claude’s new dreaming feature. Wisedocs, a medical document review company, cut document review time by 50% using the outcomes feature. (Source: VentureBeat, May 2026)

At the conference, Anthropic CEO Dario Amodei said Claude Platform API volume has grown 80× year over year — far outpacing the company’s internal 10× projection. The average Claude Code developer now spends 20 hours per week using the tool.

Scrum Team Signal

For Sprint Planning: Agents that learn across sessions reduce the amount of context you have to re-establish each time you pick up a long-running task. That changes how teams should think about agent-assisted work items. A task that required heavy setup last Sprint may need almost none this Sprint if an agent has carried the memory forward.

For the Definition of Done: The outcomes feature introduces a built-in review loop — one agent does the work, another grades it against a rubric. That is a machine version of acceptance criteria. Scrum teams writing clear acceptance criteria now have a direct way to hand that rubric to an AI reviewer, not just a human one.

For Retrospectives: Dreaming surfaces patterns across sessions the way good retrospectives surface patterns across Sprints. If your team runs agents repeatedly, review what the dreaming process logs. Those patterns are real data about where AI-assisted work is succeeding and where it keeps getting stuck.

Microsoft Surveyed 20,000 Workers. Here Is What They Found.


Source: Microsoft Official Blog — 2026 Work Trend Index · May 5, 2026

Microsoft released its 2026 Work Trend Index Annual Report on May 5. The company analyzed trillions of anonymized Microsoft 365 productivity signals, surveyed 20,000 AI-using workers across 10 countries, and reviewed more than 100,000 Microsoft 365 Copilot conversations. The finding that runs through all of it: the constraint is no longer what people can do. It is how work is structured around them.

The report describes four patterns of human-agent collaboration that software engineering teams moved through first — and that every function is now starting to follow:

Author: The worker produces the work and calls on AI for help as needed — a line of code, a quick analysis, a first draft.

Editor: The worker sets the intent; AI creates the first draft for review and approval.

Director: The worker creates a spec and hands off entire tasks for the agent to run in the background.

Orchestrator: The worker designs a system where multiple agents run in parallel, surfacing only exceptions for human review.
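In code terms, the Orchestrator pattern is a fan-out with an exception filter: run many agent calls in parallel, auto-accept the routine results, and surface only the exceptions for human review. A small sketch using Python's standard thread pool; the `agent` callable and the result shape are assumptions for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def orchestrate(agent, tasks, is_exception):
    """Run one agent call per task in parallel, then split the results
    into auto-accepted work and exceptions a human needs to review."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(agent, tasks))  # preserves task order
    auto_accepted = [r for r in results if not is_exception(r)]
    for_review = [r for r in results if is_exception(r)]
    return auto_accepted, for_review
```

The design choice worth noticing is that the human's job collapses into defining `is_exception` — exactly the "surfacing only exceptions" behavior the report describes.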

Microsoft notes that software development reached the Orchestrator pattern first because code has tight feedback loops — tests pass or fail, diffs are reviewable. Other functions are compressing the same progression into months instead of years.

Active agents on Microsoft 365 grew 15× year over year, rising to 18× in large enterprises. Among AI users, 58% say they are producing work they could not have done a year ago. That number rises to 80% among “Frontier Professionals,” the most advanced AI users in the study.

15× Year-over-year increase in active agents on Microsoft 365

Microsoft’s 2026 Work Trend Index found active agent usage grew 15× year over year, rising to 18× in large enterprises. 49% of all Copilot conversations involved cognitive work: analyzing information, solving problems, and thinking creatively. (Source: Microsoft 2026 Work Trend Index, May 5, 2026)

The report also identified a “Transformation Paradox.” While 65% of employees fear falling behind without AI, 45% say it feels safer to focus on current goals than to redesign how work gets done. Only 13% feel rewarded for reinventing work when immediate results are uneven. Organizations that rate highest on AI culture score more than 2× the impact of individual mindset and behavior combined.

Scrum Team Signal

The four patterns map directly onto the Scrum Team’s relationship with AI agents. Most teams are currently in Author mode — using AI for autocomplete and individual help. The high-value shift is to Director and Orchestrator: giving agents a Sprint Backlog item as a spec, then reviewing the output rather than writing every line. That requires well-written acceptance criteria and a strong Definition of Done — both things Scrum already demands.

The Transformation Paradox is a Sprint Retrospective topic. If your team is using AI tools but nobody is changing how the Sprint is structured around them, the Microsoft data says you are leaving most of the value on the table. The barrier is organizational, not technical. That makes it a conversation for the Scrum Master and the Product Owner, not just the developers.

OpenAI Published Its Safety Blueprint for Coding Agents in Production


Source: OpenAI — Running Codex Safely · May 8, 2026

On May 8, OpenAI published a detailed look at how it runs Codex — its own AI coding agent — inside its internal engineering workflows. This is the first time a major AI lab has shared its full operational playbook for governing a coding agent in a live production environment.

The document is worth reading because it treats coding agents not as a model quality problem, but as an infrastructure and governance problem. OpenAI’s stated principle is simple: keep the agent productive inside a bounded environment, make low-risk everyday actions frictionless, and stop higher-risk actions for review.

Sandboxing sets what Codex can read, write, and access. Modes range from read-only (inspect but never modify) to workspace-write (read and write within a defined local folder) to a more permissive mode that removes most restrictions and requires explicit approval. OpenAI does not allow open-ended outbound network access. Managed network policies allow expected destinations, block destinations Codex should not reach, and require a human approval step for any unfamiliar domain.
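The network policy described above is a three-way decision: known-good destinations pass, known-bad destinations are refused, and anything unfamiliar stops for a human. A minimal sketch of that logic (the domain names in the usage are made up; this is the shape of the policy, not OpenAI's code):

```python
def network_decision(domain, allowlist, blocklist):
    """Three-way network policy of the kind described for Codex:
    expected destinations are allowed, known-bad destinations are
    blocked, and any unfamiliar domain requires human approval."""
    if domain in blocklist:
        return "block"
    if domain in allowlist:
        return "allow"
    return "require_human_approval"
```

Note the ordering: the blocklist is checked first, so a domain mistakenly present in both lists fails closed rather than open.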

Auto-review mode handles routine approvals automatically. A subagent reviews the planned action and recent context, then approves low-risk steps without interrupting the developer. Higher-risk or unexpected actions stop for a human decision.

Telemetry gives security teams a view into why the agent did something, not just what it did. Codex exports logs through OpenTelemetry — covering user prompts, tool approval decisions, execution results, MCP server usage, and network events. Enterprise and Edu customers can pull those logs through the OpenAI Compliance Platform for audits.

4M Developers using Codex every week as of mid-May 2026

OpenAI reported more than 4 million developers using Codex weekly in May 2026, up from 3 million just two weeks earlier. Enterprise customers include Virgin Atlantic (test coverage), Ramp (code review), Notion (feature development), Cisco (repository reasoning), and Rakuten (incident response). (Source: OpenAI, May 2026)

OpenAI also uses an AI-powered security triage agent to monitor Codex logs. The agent flags suspicious patterns and surfaces anomalies for human review — an example of AI being used to govern AI in production workflows.

Scrum Team Signal

This blueprint is a governance checklist for any Scrum team deploying AI coding agents. Before you ship AI-generated code to production, ask: Does your agent run in a sandboxed environment? Does your team have logs showing what the agent did and why? Do you have a clear approval policy for high-risk actions like writing to shared repositories or deploying to staging?

For the Product Owner: Security requirements for AI agents belong in the Backlog as formal acceptance criteria, not as an afterthought. The OpenAI model treats security as infrastructure, built in from the start. That is the right frame for any team’s Definition of Ready when an AI agent is part of the workflow.

For Scrum Masters: OpenAI’s auto-review mode mirrors the idea of tiered approval in Scrum. Not every change needs a full review. Defining low-risk vs. high-risk actions for your agent is the same kind of work as defining what needs the full team’s attention versus what can move forward without a meeting.

OpenAI Created a Company Dedicated to Enterprise AI Deployment


Source: OpenAI News · May 11, 2026

On May 11, OpenAI launched the OpenAI Deployment Company, a new entity focused entirely on helping businesses move from AI pilots to production. The same day, it announced Codex Labs — a hands-on service that brings OpenAI engineers directly into organizations to run workshops and working sessions with enterprise engineering teams.

OpenAI also confirmed global implementation partnerships with seven major systems integrators: Accenture, Capgemini, CGI, Cognizant, Infosys, PwC, and Tata Consultancy Services (TCS). These firms are already using Codex internally, and they will help enterprise customers move from pilots to production-ready deployments across the full software development lifecycle.

Capgemini described the current state of the shift plainly: “Our professionals are using Codex to move from static requirements to working solutions in hours, not weeks. It’s enabling rapid prototyping, real-time workflow redesign, and faster iteration across the development lifecycle.”

Enterprise adoption is already producing concrete results across industries. Virgin Atlantic is using Codex to increase test coverage and reduce technical debt. Ramp is using it to accelerate code review cycles. Notion is using it to ship new features faster. Cisco is using it to understand and reason across large, interconnected codebases. Rakuten is using it for incident response.

7 Global systems integrators now deploying Codex to enterprise customers

Accenture, Capgemini, CGI, Cognizant, Infosys, PwC, and Tata Consultancy Services have all formally partnered with OpenAI to deploy Codex inside enterprise software development teams. These firms are using Codex internally as well as delivering it to clients. (Source: OpenAI, May 11, 2026)

The combination of Codex Labs and the seven GSI partnerships signals something important: OpenAI is no longer treating enterprise adoption as an API problem. It is treating it as a change management problem — one that requires in-person expertise, workflow redesign, and organizational support to solve.

Scrum Team Signal

This announcement matters to Scrum teams because the bottleneck is shifting from “do we have the tool?” to “are we using the tool in a way that changes how work flows?” OpenAI has concluded that most enterprises cannot make that shift without structured help. If your organization has paid for AI coding tools and adoption is still low, that is a Sprint Retrospective topic, not a technology topic.

For Product Owners: If your organization engages one of these GSI partners for Codex deployment, the resulting workflow changes will touch Sprint cadence, backlog structure, and how acceptance criteria are written. Get in front of those conversations early. A new toolchain deployed without Agile context often leads to exactly the kind of “feature factory” that Scrum was designed to prevent.

Look at the real-world examples: test coverage, code review, feature shipping, repository reasoning, and incident response. Those are five categories of Scrum team work that enterprise customers are already automating with Codex. Evaluate which of those five categories your team could accelerate first, and write a user story for it.

Anthropic Brings the Full Claude Platform to AWS — Same Day, Every Feature

Source: Anthropic — Claude Platform on AWS · May 11, 2026

Also on May 11, Anthropic made the Claude Platform on AWS generally available. For the first time, AWS customers can access the full Claude API feature set — with every new feature and beta shipping the same day it reaches the main Claude Platform. Previously, AWS customers accessing Claude through Amazon Bedrock received features on a delayed schedule.

The Claude Platform on AWS includes: Managed Agents (the same cloud-hosted agent runtime that shipped dreaming on May 6), code execution (run Python directly in the API), web search and web fetch (real-time data access), the Skills system (reusable best-practice templates), the Advisor strategy (which lets a smarter model plan before a faster model executes), prompt caching, the Files API, batch processing, and the MCP connector for linking Claude to external tools without writing client code.

The key difference between this and Claude on Bedrock comes down to data handling. The Claude Platform on AWS is operated by Anthropic directly, with data processed outside the AWS boundary. Claude on Bedrock keeps AWS as the data processor and operates within the AWS boundary. Teams with strict regional data residency requirements will stay on Bedrock. Teams that want immediate access to every new Claude feature will use the Claude Platform on AWS.

80× Anthropic’s actual API growth vs. internal plan

Anthropic CEO Dario Amodei disclosed at the Code with Claude conference that the company projected 10× annual growth in API usage. Actual annualized growth came in at 80×. The company has signed compute agreements with SpaceX (300+ megawatts via the Colossus 1 data center), Amazon (up to 5 GW), and Google and Broadcom (5 GW beginning 2027) to handle the load. (Source: VentureBeat / Anthropic, May 2026)

On the same day, Anthropic also shipped Agent view in Claude Code, a redesigned interface that shows each subagent’s steps side by side in real time. Developers running multi-agent workflows can now see exactly what each agent is doing without switching between terminal windows.

Scrum Team Signal

For teams already on AWS infrastructure: You can now access every Claude capability — including the dreaming-based self-improving agents from Story 1 — on the same infrastructure your team already uses for compute, storage, and deployment. That removes a major reason teams have delayed AI adoption: the need to manage a separate vendor relationship and security review.

For the Sprint level: The Advisor strategy is particularly useful for Scrum teams. The architecture uses a smarter model to write a short plan (typically 400–700 tokens), then hands execution to a faster, cheaper model. That is the AI equivalent of a brief planning session before a coding session — the same pattern Scrum has always recommended. You can now build that pattern into your API workflows explicitly.
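The Advisor strategy is a two-stage call pattern: a stronger model writes a short plan, then a cheaper model executes against it. A sketch of the shape, assuming nothing about the real Claude API — `call_model` and the model names are hypothetical stand-ins you would replace with actual client calls:

```python
def advisor_run(call_model, task,
                planner="strong-model", executor="fast-model"):
    """Advisor-style two-stage call: a smarter model writes a short
    plan, then a faster, cheaper model executes against that plan."""
    plan = call_model(
        planner,
        f"Write a short step-by-step plan for this task: {task}")
    result = call_model(
        executor,
        f"Follow this plan exactly:\n{plan}\n\nTask: {task}")
    return plan, result
```

The economics are the point: the expensive model sees only the short planning prompt, while the long execution work runs on the cheap one.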

Watch the Agent view feature. If your team is running multi-agent workflows in Claude Code, the new side-by-side view makes it possible to do a Sprint Review of agent work — showing stakeholders what each agent actually did, step by step, rather than just presenting the final output. That transparency is essential for building trust in AI-assisted development inside any Agile team.

Coming Next Week

Google I/O 2026 opens May 19 — expect Gemini 4, dedicated agentic coding sessions, and new developer tools aimed directly at the Claude Code and Codex market. We will cover what matters for Scrum teams.



Rod Claar

Scrum trainer, AI educator, and software development consultant with more than two decades of experience teaching Scrum, Agile, TDD, and software design patterns. Currently focused on AI’s practical impact on software teams. Publisher of AgileAIDev.com.



Get the Practical AI Playbook

Short lessons, templates, and new training announcements—no noise.


Join the Newsletter 

Find What You Need

Search videos, articles, and courses by topic.

Browse by Topic

Categories

Explore AI, design patterns, algorithms, and delivery.

Live Training Calendar and Events

Upcoming events · Events RSS · iCalendar export

Contact Me

After decades of building software and teaching professionals, I’ve learned that tools change—but clear thinking doesn’t. This site is here to help you use AI thoughtfully, and build software you can stand behind.  - Rod Claar