
Path Steps

Follow these steps in order. Each one links to an EasyDNNnews article/video and gives you a quick, practical takeaway.

You’ll learn how to frame AI as a teammate that supports Scrum events and backlog work without replacing judgment or collaboration.
Do this exercise: Write a 3-sentence “AI usage policy” for your team (what you will use AI for, what you won’t, and what must be reviewed by a human).
You’ll learn repeatable prompt patterns to generate stories with clearer intent, constraints, and acceptance criteria.
Do this exercise: Take one messy request and prompt AI to produce (a) a user story, (b) 5 acceptance criteria, and (c) 3 key questions for the PO.
You’ll learn how to generate “plan options” (not commitments) and improve shared understanding of scope and dependencies.
Do this exercise: Ask AI for 2 sprint goal options based on your top backlog items, then pick one as a team and adjust wording together.
You’ll learn facilitation prompts that help teams extract insights, turn feedback into actions, and avoid “retro theatre.”
Do this exercise: Feed AI 5 bullet facts from the sprint and ask for (a) patterns, (b) 3 improvement experiments, and (c) 1 metric per experiment.
You’ll learn how to convert your best prompts and practices into a lightweight working agreement the team can actually follow.
Do this exercise: Create a “Prompt Library” page with 5 prompts: refinement, story writing, planning, review, retro—each with input/output examples.
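The refinement and story-writing prompts above can live in the Prompt Library as fill-in templates. Here is a minimal sketch in Python; the template wording and the `build_story_prompt` helper are illustrative examples, not part of the articles:

```python
# Illustrative prompt template for turning a messy request into a user story.
# The wording is an example pattern, not an official template.
STORY_PROMPT = """You are assisting a Scrum team.
Rewrite the request below as:
(a) one user story in "As a ..., I want ..., so that ..." form,
(b) 5 testable acceptance criteria,
(c) 3 open questions for the Product Owner.

Request: {request}
Constraints: {constraints}
"""

def build_story_prompt(request: str, constraints: str = "none stated") -> str:
    """Fill the template so it can be pasted into any AI assistant."""
    return STORY_PROMPT.format(request=request, constraints=constraints)

print(build_story_prompt("Users want faster exports", "must ship this quarter"))
```

Storing prompts as templates like this keeps input and output examples side by side, which is exactly what the Prompt Library exercise asks for.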
 

Learning Path - Free

24 Feb 2026

Step 1: What AI Can (and Can’t) Do for Scrum Teams

AI is a productivity amplifier—not a Product Owner, not a Scrum Master, and not a Developer.

Used correctly, it accelerates learning, drafting, summarizing, and exploring options. Used poorly, it replaces thinking with automation theater.

This step helps your team position AI as a supporting teammate, not a decision-maker.

Author: Rod Claar
0 Comments
Article rating: No rating

24 Feb 2026

Step 2: Prompts That Produce Better User Stories

AI can help—but only if the prompt is structured.

This step introduces repeatable prompt patterns that improve:

  • Intent clarity

  • Constraints visibility

  • Acceptance criteria quality

  • PO alignment


24 Feb 2026

Step 3: Backlog Refinement with AI (Without Losing the “Why”)

The Core Risk

When teams use AI in refinement, a common failure mode appears:

  • Stories get cleaner

  • Acceptance criteria get longer

  • Technical detail increases

  • Business intent becomes less visible

Scrum optimizes for value delivery, not documentation density.

AI must support the “why” behind the work.


24 Feb 2026

Step 4: Sprint Planning Acceleration

The Key Principle

AI should propose:

  • Possible Sprint Goals

  • Possible scope groupings

  • Possible dependency flags

The team still decides:

  • What to commit to

  • What fits capacity

  • What aligns to product strategy

AI drafts.
The team commits.


2 Apr 2026

Why Your AI Agent Fails 97.5% of Real Work — And the Fix Isn't More Code

Most AI agent projects fail not because of bad code or weak models — they fail because teams aim at the wrong part of the workflow. AI strategist Nate B. Jones argues that real work is only about 2.5% high-judgment "core" decisions, while the other 97.5% is mechanical edge work: data prep, QA, synthesis, handoffs, and packaging. Teams that try to automate the core first stall out fast. Teams that start with the edges — the boring stuff surrounding the valuable work — ship results in days, build organizational trust, and create a proven path toward eventually tackling the core. It's the same principle behind Agile: start small, deliver value fast, and expand from a foundation of demonstrated success. The fix isn't better AI. It's smarter strategy about where you start.


Learning Path - Member

 
 
Featured Content

AI for Scrum and Agile Teams
Videos

A curated playlist of specific YouTube content.


28 Apr 2026

Rob Pike's 5 Rules — What They Mean for AI and Agents


Author: Rod Claar  /  Categories: AI Coding

Rob Pike's 5 Rules —
What They Mean for AI and Agents

A Bell Labs legend wrote five simple rules back in 1989. They were about writing clean C code. Turns out they apply just as well to building AI systems and autonomous agents today.

Salem Fine · Scrum & AI Practice · 10 min read

Rob Pike is one of the creators of the Go programming language. He also worked at Bell Labs alongside Ken Thompson and Dennis Ritchie — the people who built Unix and C. In 1989, Pike wrote a short document called Notes on Programming in C. Inside it were five rules for writing better programs.

Those rules never really got old. Developers still share them today. And right now, as AI tools flood into our backlogs, our CI/CD pipelines, and our sprint reviews, Pike's words feel more useful than ever.

"The key insight is that programming is not about instructions for computers — it is about ideas for people."

— Context from Pike's broader writings on software design

In Scrum, we talk about delivering value in small, working increments. We inspect and adapt. We keep things simple. Pike was saying the same things about code thirty-five years ago. Let's walk through each rule and see what it means when your developer is a large language model, or when the worker in your pipeline is an autonomous AI agent.

Rule 1

You Cannot Tell Where a Program Spends Its Time

"You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second-guess and put in a speed hack until you've proven that's where the bottleneck is."
— Rob Pike, Notes on Programming in C, 1989

When you add an AI agent to your workflow, you expect it to save time on the obvious, boring stuff — writing boilerplate, triaging tickets, summarizing documents. But the real bottlenecks are rarely where you think they are.

Teams that rush to automate code generation often discover the real slowdown was never writing the code. It was reviewing it, understanding it, and deciding what to build next. AI speeds up the writing but may not touch the actual delay.

In Scrum terms: before your team celebrates because an AI assistant cut story-writing time in half, look at your flow metrics. Check your cycle time. Is the bottleneck actually in writing stories — or is it in refinement, review, or deployment? Measure first. Then decide where to apply AI.
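One way to act on this: pull stage timestamps from your tracker and average the dwell time per stage before automating anything. A minimal sketch with invented numbers (the stage names and hours are hypothetical):

```python
# Sketch: measure where stories actually spend their time before automating.
from statistics import mean

# Hours each recent story spent per workflow stage (invented numbers).
stories = [
    {"writing": 2.0, "refinement": 1.0, "review": 26.0, "deploy": 3.0},
    {"writing": 3.5, "refinement": 2.0, "review": 30.0, "deploy": 2.0},
    {"writing": 1.5, "refinement": 1.5, "review": 22.0, "deploy": 4.0},
]

# Average dwell time per stage, then find the stage that dominates.
avg_by_stage = {stage: mean(s[stage] for s in stories) for stage in stories[0]}
bottleneck = max(avg_by_stage, key=avg_by_stage.get)
print(avg_by_stage)
print(f"Bottleneck: {bottleneck}")
```

In this invented data, review dwarfs story writing; an AI story-writing assistant would speed up the wrong stage.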

Tags: Cycle Time · Flow Metrics · Backlog Refinement

Rule 2

Measure. Don't Tune for Speed Until You Have.

"Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest."
— Rob Pike, Notes on Programming in C, 1989

This one hits differently with AI. There is a strong pull right now to add AI everywhere and optimize everything, all at once. Teams are spinning up agents for testing, for documentation, for code review, for deployment checks — before measuring whether any of it actually helps.

Pike's message was simple: measure first, optimize second. The same applies directly to AI adoption. Before your team changes its Sprint process to accommodate an AI code reviewer, run a few controlled Sprints. Measure velocity, defect rates, and review turnaround time. Then decide.

The Scrum framework already gives you the tools to do this. Your Sprint Review and your Retrospective exist exactly for this kind of inspection. Use them. Don't add AI because it feels fast. Add it because your data shows where it helps.
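A controlled comparison can be as simple as averaging a few Sprints of data on each side of the change. A sketch with invented numbers:

```python
# Sketch: compare baseline Sprints against AI-assisted Sprints before
# deciding to keep the tool. All numbers are invented for illustration.
from statistics import mean

baseline = {"velocity": [21, 23, 22], "defects": [4, 3, 5]}
with_ai = {"velocity": [24, 25, 23], "defects": [6, 7, 5]}

for metric in baseline:
    before, after = mean(baseline[metric]), mean(with_ai[metric])
    change = (after - before) / before * 100
    print(f"{metric}: {before:.1f} -> {after:.1f} ({change:+.0f}%)")
```

In this made-up data, velocity rose but so did defects; that is a finding for the Retrospective, not a reason to roll the tool out everywhere.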

Tags: Sprint Velocity · Retrospective · Definition of Done

Rule 3

Fancy Algorithms Are Slow When n Is Small

"Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy."
— Rob Pike, Notes on Programming in C, 1989

A large language model is, by definition, a very fancy algorithm. It has enormous constants — in compute cost, in latency, in API pricing, and in the cognitive cost of managing its outputs. When the problem is small, the fancy approach loses.

Does your team need an AI agent to summarize a ten-line daily standup update? Probably not. Does it make sense to use a multi-step reasoning agent to answer a question that a simple regex or a SQL query would answer in milliseconds? No.

This rule teaches us to ask the right question before reaching for a powerful tool: Is n actually big here? For Scrum teams, AI starts to earn its keep on truly large inputs — analyzing hundreds of production defects to find patterns, suggesting relative effort estimates across a backlog of sixty or more items, or synthesizing user research from dozens of interviews. Keep small tasks small.
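For example, pulling ticket IDs out of a daily standup note is a one-line regex, not an agent task (the note text and ID format are invented):

```python
# Sketch: when n is small, a one-line check beats a multi-step agent.
import re

note = "Finished ABC-101, blocked on ABC-205, will pair on ABC-310 today."

# A simple pattern extracts every Jira-style ticket ID in microseconds,
# at zero API cost and with no hallucination risk.
ticket_ids = re.findall(r"[A-Z]+-\d+", note)
print(ticket_ids)  # ['ABC-101', 'ABC-205', 'ABC-310']
```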

Tags: Story Estimation · Defect Analysis · Cost of AI

The Scrum Guide & Empiricism

The Scrum Guide (Schwaber & Sutherland, 2020) is built on three pillars: Transparency, Inspection, and Adaptation. Rules 1, 2, and 3 from Pike are essentially an engineering expression of those same three pillars. Don't guess where the cost is (Transparency). Measure before you optimize (Inspection). Don't apply heavy solutions to light problems (Adaptation).

The Scrum framework has never prescribed specific tools. It prescribes a mindset. AI is just a tool — and like any tool, it needs to earn its place in the process through observation and evidence, not enthusiasm.

Rule 4

Fancy Algorithms Are Buggier Than Simple Ones

"Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures."
— Rob Pike, Notes on Programming in C, 1989

AI agents are not simple. They hallucinate. They produce confident, well-formatted, completely wrong answers. They can pass tests they should fail and fail tests they should pass. And because their reasoning is not visible the way traditional code is visible, their bugs are harder to find.

Pike wrote this rule to warn against complexity for its own sake. AI adds real complexity to any software system, and that complexity needs to be justified by the value it delivers. If an AI agent writes a function that looks right but contains a subtle logic error, your team may ship that error into production: polished-looking AI output invites less scrutiny than code that visibly needs work.

This is where Test Driven Development (TDD) and Acceptance Test Driven Development (ATDD) become critical. Write the test first. Let the AI write the code. Then let the test tell you if the output is correct. Without that safety net, AI-generated bugs are much harder to catch than bugs written by a human who knows what they intended to do.

  • Always pair AI code generation with automated test coverage
  • Human code review remains part of your Definition of Done
  • Keep agentic pipelines observable — log what the agent decided and why
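The test-first loop can be this small. A sketch in which `apply_discount` stands in for AI-generated code (the function and its checks are illustrative, not from the article):

```python
# Sketch of the test-first workflow: the human writes the checks,
# the AI writes the implementation, and the checks arbitrate.

def apply_discount(price: float, percent: float) -> float:
    """Pretend this body came from an AI assistant."""
    return round(price * (1 - percent / 100), 2)

# These assertions were written BEFORE asking the AI for the implementation.
# If the AI's version is subtly wrong, they fail loudly instead of shipping.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(59.99, 0) == 59.99
assert apply_discount(80.0, 100) == 0.0
print("all acceptance checks passed")
```

The same pattern scales up to a real test suite: the tests encode intent the AI cannot see, so they catch the confident, well-formatted, wrong answer.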
Tags: TDD · ATDD · Code Review · Observability

Rule 5

Data Dominates

"Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."
— Rob Pike, Notes on Programming in C, 1989

This might be the most important rule in the age of AI — and the most ignored. AI models are, at their core, a reflection of the data they were trained on. Large language models generate outputs based on patterns in their training data. Agents retrieve, process, and act on the data you give them. The quality of that data determines everything.

In an Agile context, your Product Backlog is data. Your acceptance criteria are data. Your Definition of Done is data. If those are unclear, inconsistent, or poorly structured, an AI agent working with them will produce unclear, inconsistent, or poorly structured outputs — with great confidence and beautiful formatting.

Pike's rule translates directly: before you invest in a better AI model or a smarter agent, invest in better structured data. Clean up your Jira tickets. Write acceptance criteria in consistent formats. Structure your test cases so they can be read by a machine. When your data is good, even a simpler model will do impressive work. When your data is messy, no model saves you.

  • Well-structured user stories feed better AI suggestions
  • Consistent acceptance criteria format enables reliable agent parsing
  • Clean sprint history gives AI more accurate context for estimates
  • Data hygiene is now a team responsibility — not just a DBA problem
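To see why format consistency matters to a machine, compare free-text criteria with a Given/When/Then structure an agent can parse deterministically. A sketch (the criterion text and field names are illustrative):

```python
# Sketch: a consistently formatted acceptance criterion can be parsed
# reliably; free-text criteria force the agent to guess.
import json
import re

criterion = "Given a logged-in user, When they export a report, Then a CSV downloads"

# One pattern handles every criterion written in the agreed format.
match = re.match(r"Given (.+?), When (.+?), Then (.+)", criterion)
structured = dict(zip(("given", "when", "then"), match.groups()))
print(json.dumps(structured, indent=2))
```

When every criterion follows the same shape, even a simple parser, or a simple model, extracts the right fields every time.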
Tags: Data Quality · Product Backlog · Acceptance Criteria · Context Window
| # | Pike's Rule | AI & Agent Meaning | Scrum Connection |
|---|---|---|---|
| 1 | Bottlenecks are surprising | AI may not fix the real delay in your workflow | Measure flow before automating |
| 2 | Measure before tuning | Run controlled Sprints before scaling AI use | Retrospective drives data-based adoption |
| 3 | Fancy is slow when n is small | Don't use LLMs for work a simple query handles | Right-size the tool to the story size |
| 4 | Fancy algorithms are buggier | AI code needs TDD safety nets to catch its errors | DoD must include AI output review |
| 5 | Data dominates | Structure your backlog data before trusting AI output | Well-written stories produce better AI results |

Rob Pike was not writing about AI. He was writing about C programs in the late 1980s. But wisdom about complexity, measurement, simplicity, and data quality does not expire. If anything, it becomes more important when the complexity is coming from a system you didn't build and can't fully read.

AI agents and large language models are powerful. They are also expensive, opaque, and prone to confident mistakes. That combination requires exactly the discipline Pike was describing — measure before you optimize, keep things as simple as the problem allows, test rigorously, and treat your data as the foundation everything else rests on.

The Scrum framework gives your team the inspect-and-adapt rhythm to do all of this responsibly. The Sprint is your measurement unit. The Retrospective is your tuning cycle. The Product Backlog, when kept clean and well-structured, is your data layer. Pike's rules do not compete with Scrum — they reinforce it.

Before your team adds another AI tool to the pipeline, go back and read those five rules. Ask whether you've measured where the real bottleneck is. Ask whether n is actually big enough to justify the complexity. Ask whether your data is good enough for an AI to use. If the answers are yes, move forward. If the answers are not yet, you know what to work on first.

Ready to Apply This in Your Next Sprint?

Explore more Scrum and AI resources from Salem Fine.

© 2026 AgileAIDev.com · rod@agileaidev.com Source: Rob Pike, Notes on Programming in C, 1989 · Scrum Guide, Schwaber & Sutherland, 2020

 



Upcoming events

Upcoming AI Training

17 Jun 2026


20 May 2026



Keep Learning — Two Ways

Choose the free track to get new lessons as they’re released, or go deeper with a structured course that puts everything into a repeatable playbook.

Free
Join updates / get new lessons

Get notified when new steps, templates, and examples are added—so you can keep improving your AI skills one sprint at a time.

Join updates
No spam. Practical lessons only. Unsubscribe any time.