Featured Content

AI for Product Owners Videos

A curated playlist of YouTube videos on AI for Product Owners.

Hands-on Workshop

Ready to Transform Your Scrum Team with AI?

Join the Generative AI for Scrum Teams Workshop

Stop wondering how AI fits into your Agile workflow. In this hands-on workshop, you'll learn exactly how to integrate AI tools into every sprint ceremony, backlog refinement session, and delivery cycle—without disrupting the Scrum framework that already works for your team.

What You'll Master:

  • AI-powered user story creation and refinement techniques
  • Automated test generation and code review strategies
  • Sprint planning acceleration with AI assistance
  • Real-world prompt engineering for development teams
  • Ethical AI integration within Scrum values

Perfect for: Scrum Masters, Product Owners, Development Teams, and Agile Coaches who want to boost productivity while maintaining team collaboration and quality.

Taught by Rod Claar, Certified Scrum Trainer with 30+ years of development experience and a specialized AI-Enhanced Scrum methodology.


Scrum & AI Insights

Rob Pike's 5 Rules — What They Mean for AI and Agents

A Bell Labs legend wrote five simple rules back in 1989. They were about writing clean C code. Turns out they apply just as well to building AI systems and autonomous agents today.

Salem Fine · Scrum & AI Practice · 10 min read

Rob Pike is one of the creators of the Go programming language. He also worked at Bell Labs alongside Ken Thompson and Dennis Ritchie — the people who built Unix and C. In 1989, Pike wrote a short document called Notes on Programming in C. Inside it were five rules for writing better programs.

Those rules never really got old. Developers still share them today. And right now, as AI tools flood into our backlogs, our CI/CD pipelines, and our sprint reviews, Pike's words feel more useful than ever.

"The key insight is that programming is not about instructions for computers — it is about ideas for people."

— Context from Pike's broader writings on software design

In Scrum, we talk about delivering value in small, working increments. We inspect and adapt. We keep things simple. Pike was saying the same things about code back in 1989. Let's walk through each rule and see what it means when your developer is a large language model, or when the worker in your pipeline is an autonomous AI agent.

Rule 1

You Cannot Tell Where a Program Spends Its Time

"You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second-guess and put in a speed hack until you've proven that's where the bottleneck is."
— Rob Pike, Notes on Programming in C, 1989

When you add an AI agent to your workflow, you expect it to save time on the obvious, boring stuff — writing boilerplate, triaging tickets, summarizing documents. But the real bottlenecks are rarely where you think they are.

Teams that rush to automate code generation often discover the real slowdown was never writing the code. It was reviewing it, understanding it, and deciding what to build next. AI speeds up the writing but may not touch the actual delay.

In Scrum terms: before your team celebrates because an AI assistant cut story-writing time in half, look at your flow metrics. Check your cycle time. Is the bottleneck actually in writing stories — or is it in refinement, review, or deployment? Measure first. Then decide where to apply AI.
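As a concrete sketch of "measure first," here is one way to find the real bottleneck from ticket history before deciding where AI helps. The ticket fields and dates below are made up for illustration; they are not any specific tool's API.

```python
from datetime import datetime

# Hypothetical ticket history: a timestamp for each workflow stage transition.
tickets = [
    {"created": "2026-04-01", "refined": "2026-04-08", "coded": "2026-04-10",
     "reviewed": "2026-04-17", "deployed": "2026-04-18"},
    {"created": "2026-04-02", "refined": "2026-04-09", "coded": "2026-04-10",
     "reviewed": "2026-04-16", "deployed": "2026-04-17"},
]

STAGES = ["created", "refined", "coded", "reviewed", "deployed"]

def stage_durations(ticket):
    """Days spent between consecutive stages, e.g. created->refined."""
    ts = [datetime.fromisoformat(ticket[s]) for s in STAGES]
    return {f"{a}->{b}": (t2 - t1).days
            for (a, t1), (b, t2) in zip(zip(STAGES, ts), zip(STAGES[1:], ts[1:]))}

def bottleneck(tickets):
    """The stage with the largest average duration: the real delay to attack."""
    totals = {}
    for t in tickets:
        for stage, days in stage_durations(t).items():
            totals[stage] = totals.get(stage, 0) + days
    return max(totals, key=lambda s: totals[s] / len(tickets))

print(bottleneck(tickets))  # the stage worth measuring before adding AI anywhere
```

In this made-up data, the slow stage is refinement, not coding. An AI code generator would not touch it.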

Cycle Time · Flow Metrics · Backlog Refinement
Rule 2

Measure. Don't Tune for Speed Until You Have.

"Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest."
— Rob Pike, Notes on Programming in C, 1989

This one hits differently with AI. There is a strong pull right now to add AI everywhere and optimize everything, all at once. Teams are spinning up agents for testing, for documentation, for code review, for deployment checks — before measuring whether any of it actually helps.

Pike's message was simple: measure first, optimize second. The same applies directly to AI adoption. Before your team changes its Sprint process to accommodate an AI code reviewer, run a few controlled Sprints. Measure velocity, defect rates, and review turnaround time. Then decide.

The Scrum framework already gives you the tools to do this. Your Sprint Review and your Retrospective exist exactly for this kind of inspection. Use them. Don't add AI because it feels fast. Add it because your data shows where it helps.
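A controlled comparison can be as simple as averaging a metric across baseline Sprints and AI-assisted Sprints and looking at the change. The numbers below are invented for the sketch; only your own Sprint data counts.

```python
# Illustrative Sprint metrics; all numbers are made up for this sketch.
baseline = {"velocity": [21, 23, 22], "defects": [4, 3, 5]}  # Sprints without the AI reviewer
with_ai  = {"velocity": [24, 22, 25], "defects": [5, 6, 4]}  # Sprints with it

def mean(xs):
    return sum(xs) / len(xs)

def compare(before, after, metric):
    """Percent change in a metric between two sets of Sprints."""
    b, a = mean(before[metric]), mean(after[metric])
    return round(100 * (a - b) / b, 1)

print("velocity change %:", compare(baseline, with_ai, "velocity"))
print("defect change %:  ", compare(baseline, with_ai, "defects"))
```

In this invented data, velocity rose slightly while defects rose more. That is exactly the kind of trade-off a Retrospective should weigh before scaling the tool.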

Sprint Velocity · Retrospective · Definition of Done
Rule 3

Fancy Algorithms Are Slow When n Is Small

"Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy."
— Rob Pike, Notes on Programming in C, 1989

A large language model is, by definition, a very fancy algorithm. It has enormous constants — in compute cost, in latency, in API pricing, and in the cognitive cost of managing its outputs. When the problem is small, the fancy approach loses.

Does your team need an AI agent to summarize a ten-line daily standup update? Probably not. Does it make sense to use a multi-step reasoning agent to answer a question that a simple regex or a SQL query would answer in milliseconds? No.

This rule teaches us to ask the right question before reaching for a powerful tool: Is n actually big here? For Scrum teams, AI starts to earn its keep on truly large inputs — analyzing hundreds of production defects to find patterns, suggesting relative effort estimates across a backlog of sixty or more items, or synthesizing user research from dozens of interviews. Keep small tasks small.
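To make the "n is small" point concrete, here is the simple-tool version of a question teams sometimes route through an agent. The backlog items and pattern are illustrative; when n is a handful of strings, a regex answers in microseconds with no latency, cost, or nondeterminism.

```python
import re

# Small n: a handful of backlog items. No LLM required.
backlog = [
    "As a user, I want to log in with my email",
    "As an admin, I want to export reports",
    "As a user, I want to reset my password",
]

# Which stories touch authentication? A plain pattern match settles it.
login_related = [s for s in backlog if re.search(r"\blog\s?in\b|\bpassword\b", s, re.I)]
print(len(login_related))  # -> 2
```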

Story Estimation · Defect Analysis · Cost of AI
The Scrum Guide & Empiricism

The Scrum Guide (Schwaber & Sutherland, 2020) is built on three pillars: Transparency, Inspection, and Adaptation. Rules 1, 2, and 3 from Pike are essentially an engineering expression of those same three pillars. Don't guess where the cost is (Transparency). Measure before you optimize (Inspection). Don't apply heavy solutions to light problems (Adaptation).

The Scrum framework has never prescribed specific tools. It prescribes a mindset. AI is just a tool — and like any tool, it needs to earn its place in the process through observation and evidence, not enthusiasm.

Rule 4

Fancy Algorithms Are Buggier Than Simple Ones

"Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures."
— Rob Pike, Notes on Programming in C, 1989

AI agents are not simple. They hallucinate. They produce confident, well-formatted, completely wrong answers. They can pass tests they should fail and fail tests they should pass. And because their reasoning is not visible the way traditional code is visible, their bugs are harder to find.

Pike wrote this rule to warn against complexity for its own sake. AI adds real complexity to any software system, and that complexity needs to be justified by the value it delivers. If an AI agent writes a function that looks right but contains a subtle logic error, your team may ship that error into production. AI-generated code often looks more polished than hand-written code, which makes the bugs hiding in it easier to miss.

This is where Test Driven Development (TDD) and Acceptance Test Driven Development (ATDD) become critical. Write the test first. Let the AI write the code. Then let the test tell you if the output is correct. Without that safety net, AI-generated bugs are much harder to catch than bugs written by a human who knows what they intended to do.

  • Always pair AI code generation with automated test coverage
  • Human code review remains part of your Definition of Done
  • Keep agentic pipelines observable — log what the agent decided and why
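The test-first safety net can be sketched in a few lines. `apply_discount` below stands in for a hypothetical AI-generated function; the test is the human-owned specification, written before the code exists, and the code only ships if it passes.

```python
# Hypothetical AI-generated candidate implementation.
def apply_discount(price: float, percent: float) -> float:
    """Subtract a percentage discount from a price, rejecting invalid input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The human-written test: the specification the AI output must satisfy.
def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99
    try:
        apply_discount(10.0, 150)  # invalid input must raise, not silently compute
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount()
print("all checks passed")
```

If the assistant's next draft quietly drops the range check, the test fails and the error never reaches production.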
TDD · ATDD · Code Review · Observability
Rule 5

Data Dominates

"Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."
— Rob Pike, Notes on Programming in C, 1989

This might be the most important rule in the age of AI — and the most ignored. AI models are, at their core, a reflection of the data they were trained on. Large language models generate outputs based on patterns in their training data. Agents retrieve, process, and act on the data you give them. The quality of that data determines everything.

In an Agile context, your Product Backlog is data. Your acceptance criteria are data. Your Definition of Done is data. If those are unclear, inconsistent, or poorly structured, an AI agent working with them will produce unclear, inconsistent, or poorly structured outputs — with great confidence and beautiful formatting.

Pike's rule translates directly: before you invest in a better AI model or a smarter agent, invest in better structured data. Clean up your Jira tickets. Write acceptance criteria in consistent formats. Structure your test cases so they can be read by a machine. When your data is good, even a simpler model will do impressive work. When your data is messy, no model saves you.

  • Well-structured user stories feed better AI suggestions
  • Consistent acceptance criteria format enables reliable agent parsing
  • Clean sprint history gives AI more accurate context for estimates
  • Data hygiene is now a team responsibility — not just a DBA problem
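One low-tech way to enforce that consistency is a lint check on acceptance criteria. The Given/When/Then pattern and the example stories below are illustrative, not a standard; the point is that a criterion an agent can parse reliably is one a simple check can validate.

```python
import re

# Illustrative check: criteria must follow a consistent Given/When/Then shape.
PATTERN = re.compile(r"^Given .+, when .+, then .+$", re.I)

criteria = [
    "Given a logged-in user, when they click Export, then a CSV downloads",
    "make the export button work",  # unstructured: an agent cannot parse intent reliably
]

def well_formed(criterion: str) -> bool:
    """True when the criterion matches the agreed machine-readable format."""
    return bool(PATTERN.match(criterion))

for c in criteria:
    print(well_formed(c), "-", c)
```

Run as a pre-commit or refinement-time check, this keeps the backlog data clean enough for both humans and agents.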
Data Quality · Product Backlog · Acceptance Criteria · Context Window
| # | Pike's Rule | AI & Agent Meaning | Scrum Connection |
|---|---|---|---|
| 1 | Bottlenecks are surprising | AI may not fix the real delay in your workflow | Measure flow before automating |
| 2 | Measure before tuning | Run controlled Sprints before scaling AI use | Retrospective drives data-based adoption |
| 3 | Fancy is slow when n is small | Don't use LLMs for work a simple query handles | Right-size the tool to the story size |
| 4 | Fancy algorithms are buggier | AI code needs TDD safety nets to catch its errors | DoD must include AI output review |
| 5 | Data dominates | Structure your backlog data before trusting AI output | Well-written stories produce better AI results |

Rob Pike was not writing about AI. He was writing about C programs in the late 1980s. But wisdom about complexity, measurement, simplicity, and data quality does not expire. If anything, it becomes more important when the complexity is coming from a system you didn't build and can't fully read.

AI agents and large language models are powerful. They are also expensive, opaque, and prone to confident mistakes. That combination requires exactly the discipline Pike was describing — measure before you optimize, keep things as simple as the problem allows, test rigorously, and treat your data as the foundation everything else rests on.

The Scrum framework gives your team the inspect-and-adapt rhythm to do all of this responsibly. The Sprint is your measurement unit. The Retrospective is your tuning cycle. The Product Backlog, when kept clean and well-structured, is your data layer. Pike's rules do not compete with Scrum — they reinforce it.

Before your team adds another AI tool to the pipeline, go back and read those five rules. Ask whether you've measured where the real bottleneck is. Ask whether n is actually big enough to justify the complexity. Ask whether your data is good enough for an AI to use. If the answers are yes, move forward. If the answers are not yet, you know what to work on first.

Ready to Apply This in Your Next Sprint?

Explore more Scrum and AI resources from Salem Fine.

© 2026 AgileAIDev.com · rod@agileaidev.com
Source: Rob Pike, Notes on Programming in C, 1989 · Scrum Guide, Schwaber & Sutherland, 2020

 

