
28 Apr 2026

Rob Pike's 5 Rules — What They Mean for AI and Agents


Author: Rod Claar · Categories: AI Coding
Scrum & AI Insights


A Bell Labs legend wrote five simple rules back in 1989. They were about writing clean C code. Turns out they apply just as well to building AI systems and autonomous agents today.

Salem Fine · Scrum & AI Practice · 10 min read

Rob Pike is one of the creators of the Go programming language. He also worked at Bell Labs alongside Ken Thompson and Dennis Ritchie — the people who built Unix and C. In 1989, Pike wrote a short document called Notes on Programming in C. Inside it were five rules for writing better programs.

Those rules never really got old. Developers still share them today. And right now, as AI tools flood into our backlogs, our CI/CD pipelines, and our sprint reviews, Pike's words feel more useful than ever.

"The key insight is that programming is not about instructions for computers — it is about ideas for people."

— Context from Pike's broader writings on software design

In Scrum, we talk about delivering value in small, working increments. We inspect and adapt. We keep things simple. Pike was saying the same things about code more than three decades ago. Let's walk through each rule and see what it means when your developer is a large language model, or when the worker in your pipeline is an autonomous AI agent.

Rule 1

You Cannot Tell Where a Program Spends Its Time

"You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second-guess and put in a speed hack until you've proven that's where the bottleneck is."
— Rob Pike, Notes on Programming in C, 1989

When you add an AI agent to your workflow, you expect it to save time on the obvious, boring stuff — writing boilerplate, triaging tickets, summarizing documents. But the real bottlenecks are rarely where you think they are.

Teams that rush to automate code generation often discover the real slowdown was never writing the code. It was reviewing it, understanding it, and deciding what to build next. AI speeds up the writing but may not touch the actual delay.

In Scrum terms: before your team celebrates because an AI assistant cut story-writing time in half, look at your flow metrics. Check your cycle time. Is the bottleneck actually in writing stories — or is it in refinement, review, or deployment? Measure first. Then decide where to apply AI.
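To make "measure first" concrete, here is a minimal sketch with made-up ticket data and invented stage names that averages the days work items spend in each stage, so the team can see where cycle time actually goes before pointing AI at any one stage:

```python
from datetime import datetime

# Hypothetical stage timestamps for a few work items (ISO dates, invented data).
tickets = [
    {"id": "A-1", "started": "2026-03-02", "coded": "2026-03-03", "reviewed": "2026-03-09", "deployed": "2026-03-10"},
    {"id": "A-2", "started": "2026-03-04", "coded": "2026-03-05", "reviewed": "2026-03-12", "deployed": "2026-03-13"},
    {"id": "A-3", "started": "2026-03-05", "coded": "2026-03-07", "reviewed": "2026-03-14", "deployed": "2026-03-14"},
]

# Each stage is (name, start-field, end-field) in the workflow.
STAGES = [("write", "started", "coded"), ("review", "coded", "reviewed"), ("deploy", "reviewed", "deployed")]

def days(start, end):
    # Whole days between two ISO date strings.
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

def avg_stage_times(tickets):
    # Average days spent in each stage across all tickets.
    return {name: sum(days(t[a], t[b]) for t in tickets) / len(tickets)
            for name, a, b in STAGES}

print(avg_stage_times(tickets))
# In this invented data, review dominates: automating code *writing* would barely move cycle time.
```

In this toy data set the review stage averages roughly five times the writing stage, which is exactly the kind of surprise Pike's first rule predicts.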

Tags: Cycle Time · Flow Metrics · Backlog Refinement
Rule 2

Measure. Don't Tune for Speed Until You Have.

"Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest."
— Rob Pike, Notes on Programming in C, 1989

This one hits differently with AI. There is a strong pull right now to add AI everywhere and optimize everything, all at once. Teams are spinning up agents for testing, for documentation, for code review, for deployment checks — before measuring whether any of it actually helps.

Pike's message was simple: measure first, optimize second. The same applies directly to AI adoption. Before your team changes its Sprint process to accommodate an AI code reviewer, run a few controlled Sprints. Measure velocity, defect rates, and review turnaround time. Then decide.
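A controlled comparison can be as simple as averaging each metric across baseline Sprints and AI-assisted Sprints and looking at the percentage change. The sketch below uses invented numbers and a hypothetical `compare` helper, not real team data:

```python
# Hypothetical per-Sprint metrics: two baseline Sprints vs two Sprints with an AI code reviewer.
baseline = [{"velocity": 21, "defects": 4, "review_hours": 10},
            {"velocity": 23, "defects": 3, "review_hours": 11}]
with_ai  = [{"velocity": 24, "defects": 5, "review_hours": 6},
            {"velocity": 22, "defects": 6, "review_hours": 7}]

def mean(rows, key):
    return sum(r[key] for r in rows) / len(rows)

def compare(before, after, keys=("velocity", "defects", "review_hours")):
    # Percentage change per metric; positive means the value went up.
    return {k: round(100 * (mean(after, k) - mean(before, k)) / mean(before, k), 1)
            for k in keys}

print(compare(baseline, with_ai))
# In this made-up data review hours dropped but defects rose: a trade-off
# the Retrospective should inspect before the tool is adopted for good.
```

Two Sprints per condition is far too small a sample to prove anything; the point is that even this crude arithmetic beats adopting a tool on enthusiasm alone.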

The Scrum framework already gives you the tools to do this. Your Sprint Review and your Retrospective exist exactly for this kind of inspection. Use them. Don't add AI because it feels fast. Add it because your data shows where it helps.

Tags: Sprint Velocity · Retrospective · Definition of Done
Rule 3

Fancy Algorithms Are Slow When n Is Small

"Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy."
— Rob Pike, Notes on Programming in C, 1989

A large language model is, by definition, a very fancy algorithm. It has enormous constants — in compute cost, in latency, in API pricing, and in the cognitive cost of managing its outputs. When the problem is small, the fancy approach loses.

Does your team need an AI agent to summarize a ten-line daily standup update? Probably not. Does it make sense to use a multi-step reasoning agent to answer a question that a simple regex or a SQL query would answer in milliseconds? No.

This rule teaches us to ask the right question before reaching for a powerful tool: Is n actually big here? For Scrum teams, AI starts to earn its keep on truly large inputs — analyzing hundreds of production defects to find patterns, suggesting relative effort estimates across a backlog of sixty or more items, or synthesizing user research from dozens of interviews. Keep small tasks small.
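For a sense of scale, here is a small illustration of the "n is small" case: extracting blockers from one day's standup notes. The standup text and the `blockers` function are invented for this example; a plain regex answers in microseconds where an LLM call would add latency and cost for no gain:

```python
import re

# One day's standup notes. n is tiny, so a regex is the right-sized tool.
standup = """
alice: finished the export endpoint
bob: BLOCKED on flaky payments sandbox
carol: reviewing PR 214
dave: BLOCKED waiting for staging access
"""

def blockers(text):
    # Return the names of people who reported a blocker.
    return re.findall(r"^(\w+): BLOCKED\b.*$", text, flags=re.MULTILINE)

print(blockers(standup))  # the two people who reported blockers
```

If the input were hundreds of free-form updates in inconsistent phrasing, n would be big, the regex would break down, and the fancy algorithm would start to earn its constants.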

Tags: Story Estimation · Defect Analysis · Cost of AI
The Scrum Guide & Empiricism

The Scrum Guide (Schwaber & Sutherland, 2020) is built on three pillars: Transparency, Inspection, and Adaptation. Rules 1, 2, and 3 from Pike are essentially an engineering expression of those same three pillars. Don't guess where the cost is (Transparency). Measure before you optimize (Inspection). Don't apply heavy solutions to light problems (Adaptation).

The Scrum framework has never prescribed specific tools. It prescribes a mindset. AI is just a tool — and like any tool, it needs to earn its place in the process through observation and evidence, not enthusiasm.

Rule 4

Fancy Algorithms Are Buggier Than Simple Ones

"Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures."
— Rob Pike, Notes on Programming in C, 1989

AI agents are not simple. They hallucinate. They produce confident, well-formatted, completely wrong answers. They can pass tests they should fail and fail tests they should pass. And because their reasoning is not visible the way traditional code is visible, their bugs are harder to find.

Pike wrote this rule to warn against complexity for its own sake. AI adds real complexity to any software system, and that complexity needs to be justified by the value it delivers. If an AI agent writes a function that looks right but contains a subtle logic error, your team may ship that error into production, because AI-generated code tends to look more polished than the average human patch, and polish makes a hidden bug easier to trust.

This is where Test Driven Development (TDD) and Acceptance Test Driven Development (ATDD) become critical. Write the test first. Let the AI write the code. Then let the test tell you if the output is correct. Without that safety net, AI-generated bugs are much harder to catch than bugs written by a human who knows what they intended to do.

  • Always pair AI code generation with automated test coverage
  • Human code review remains part of your Definition of Done
  • Keep agentic pipelines observable — log what the agent decided and why
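Here is a minimal sketch of that test-first safety net, with a hypothetical `free_shipping` function standing in for AI-generated output. The human writes the tests first, including the boundary case; a subtle bug such as `>` instead of `>=` would fail the first test even though the generated code "looks right":

```python
# Tests written first by a human. The implementation below stands in
# for AI-generated code; the names and threshold are illustrative.

def free_shipping(total_cents, threshold_cents=5000):
    # Implementation under test (imagine this came from an AI assistant).
    return total_cents >= threshold_cents

def test_exact_threshold():
    # Boundary case that a subtle off-by-one (> vs >=) would fail.
    assert free_shipping(5000) is True

def test_below_threshold():
    assert free_shipping(4999) is False

test_exact_threshold()
test_below_threshold()
print("all tests passed")
```

The test encodes what the human intended; the AI only has to satisfy it, and a confident-but-wrong implementation is caught mechanically instead of in production.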
Tags: TDD · ATDD · Code Review · Observability
Rule 5

Data Dominates

"Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."
— Rob Pike, Notes on Programming in C, 1989

This might be the most important rule in the age of AI — and the most ignored. AI models are, at their core, a reflection of the data they were trained on. Large language models generate outputs based on patterns in their training data. Agents retrieve, process, and act on the data you give them. The quality of that data determines everything.

In an Agile context, your Product Backlog is data. Your acceptance criteria are data. Your Definition of Done is data. If those are unclear, inconsistent, or poorly structured, an AI agent working with them will produce unclear, inconsistent, or poorly structured outputs — with great confidence and beautiful formatting.

Pike's rule translates directly: before you invest in a better AI model or a smarter agent, invest in better structured data. Clean up your Jira tickets. Write acceptance criteria in consistent formats. Structure your test cases so they can be read by a machine. When your data is good, even a simpler model will do impressive work. When your data is messy, no model saves you.

  • Well-structured user stories feed better AI suggestions
  • Consistent acceptance criteria format enables reliable agent parsing
  • Clean sprint history gives AI more accurate context for estimates
  • Data hygiene is now a team responsibility — not just a DBA problem
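As one illustration of why consistent formats matter, a Given/When/Then acceptance criterion written the same way every time is trivially machine-readable. The criterion text and the `parse_gwt` helper below are assumptions for this sketch, not a standard:

```python
import re

# A consistently formatted acceptance criterion (invented example).
criterion = """\
Given a registered user with an empty cart
When they add an in-stock item
Then the cart count shows 1
"""

def parse_gwt(text):
    # Split a Given/When/Then block into a dict an agent or script can consume.
    parts = {}
    for line in text.splitlines():
        m = re.match(r"(Given|When|Then)\s+(.*)", line.strip())
        if m:
            parts[m.group(1).lower()] = m.group(2)
    return parts

print(parse_gwt(criterion))
```

Ten lines of parsing work only because the data is disciplined; the same criteria scattered across free-form Jira comments would force every consumer, human or AI, to guess.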
Tags: Data Quality · Product Backlog · Acceptance Criteria · Context Window
Summary: Pike's rules, their AI and agent meaning, and the Scrum connection:

  • Rule 1, bottlenecks are surprising: AI may not fix the real delay in your workflow. Scrum connection: measure flow before automating.
  • Rule 2, measure before tuning: run controlled Sprints before scaling AI use. Scrum connection: the Retrospective drives data-based adoption.
  • Rule 3, fancy is slow when n is small: don't use LLMs for work a simple query handles. Scrum connection: right-size the tool to the story size.
  • Rule 4, fancy algorithms are buggier: AI code needs TDD safety nets to catch its errors. Scrum connection: the Definition of Done must include AI output review.
  • Rule 5, data dominates: structure your backlog data before trusting AI output. Scrum connection: well-written stories produce better AI results.

Rob Pike was not writing about AI. He was writing about C programs in the late 1980s. But wisdom about complexity, measurement, simplicity, and data quality does not expire. If anything, it becomes more important when the complexity is coming from a system you didn't build and can't fully read.

AI agents and large language models are powerful. They are also expensive, opaque, and prone to confident mistakes. That combination requires exactly the discipline Pike was describing — measure before you optimize, keep things as simple as the problem allows, test rigorously, and treat your data as the foundation everything else rests on.

The Scrum framework gives your team the inspect-and-adapt rhythm to do all of this responsibly. The Sprint is your measurement unit. The Retrospective is your tuning cycle. The Product Backlog, when kept clean and well-structured, is your data layer. Pike's rules do not compete with Scrum — they reinforce it.

Before your team adds another AI tool to the pipeline, go back and read those five rules. Ask whether you've measured where the real bottleneck is. Ask whether n is actually big enough to justify the complexity. Ask whether your data is good enough for an AI to use. If the answers are yes, move forward. If the answers are not yet, you know what to work on first.

Ready to Apply This in Your Next Sprint?

Explore more Scrum and AI resources from Salem Fine.

© 2026 AgileAIDev.com · rod@agileaidev.com · Sources: Rob Pike, Notes on Programming in C, 1989 · Scrum Guide, Schwaber & Sutherland, 2020

 



🔒 Private • In-Browser • Fast

Prompt Database

A fast, private prompt manager that lives entirely in your browser. Save your best prompts, add tags and categories, version them, and copy with one click. No accounts, no servers, no waiting.

  • Local-first: uses your browser database (IndexedDB)—your prompts stay on your device.
  • Lightning search across titles, tags, categories, and full text.
  • Import/Export JSON for backup or team sharing.
[Screenshot: Prompt Database app showing prompt list, categories, and editor]
Runs locally in your browser. No sign-in required.

Why Prompt Database

Keep your best prompts organized and at your fingertips

Local-First & Private

All data is stored using your browser’s IndexedDB. Nothing leaves your machine unless you export it.

Turbo Search & Tags

Find prompts instantly by title, tags, categories, or full-text content. Create custom fields to fit your workflow.

Versioning & History

Iterate safely. Keep older versions, compare changes, and roll back anytime.

One-Click Copy

Copy to clipboard with smart formatting—ready for ChatGPT, Claude, Gemini, or your custom tools.

Import / Export

Backup your library or share with a team as portable JSON. Optional CSV export for auditing.

Works Offline

Open your browser and you’re good to go—even without an internet connection.

How it Works

Runs entirely in your browser using IndexedDB

// Minimal record
{
  "id": "uuid",
  "title": "AI Image for Visual Learning Books",
  "text": "Generate a full-page, professional-quality...",
  "tags": ["image-generation","education","visual-guide"],
  "category": "Image Generation",
  "updatedAt": "2025-10-22T10:00:00Z",
  "version": 7
}
  • IndexedDB provides fast, structured local storage with excellent performance for thousands of prompts.
  • Service Worker (optional) caches the app shell for offline use.
  • Export/Import to JSON ensures portability and team collaboration.
  • No vendor lock-in: your content is yours.

Start in 3 steps

Get your library organized today

  1. Open Prompt Database on AgileAIDev.com.
  2. Create your first Category (e.g., “Writing Tutors”, “System Prompts”, “Image Prompts”).
  3. Add a prompt, tag it, and Copy with one click when you need it.

Use Cases

Built for trainers, teams, and power users

Training & Workshops

Keep curated prompt sets per module. Share an export with students to accelerate practice.

Team Libraries

Standardize high-quality prompts across squads and roles—PM, Dev, QA, Marketing.

Personal Knowledge Base

Store your golden prompts, experiments, and variations in one place—searchable and ready.

FAQ

Answers to common questions

Where is my data stored?

Your prompts are stored in your browser’s local database (IndexedDB) on your device. Nothing is uploaded to a server.

Can I back up or move my prompts to another machine?

Yes. Use Export to save a JSON file and Import it on another device or share with teammates.

Does it work offline?

Yes. After the first load, the app can run offline (if caching is enabled). Your data is already local.

Is there a cost?

Core features are free. Advanced team features (cloud sync, roles) may be added later.

© Effective Agile Development · Built with a privacy-first, local-first philosophy.

Live Training Calendar and Events


Cohort Offer

Subscriber Exclusive

Advance From Theory to Mastery

You’ve seen the framework. Now implement it with guidance.

The 6-Week AI Scrum Cohort is a structured, hands-on program designed for experienced Scrum Masters who want to integrate AI into their leadership and delivery practices.

As a subscriber, you receive 20% off cohort tuition.
This is not a webinar. It’s applied learning with peer discussion, real use cases, and guided implementation.

6-Week Cohort • Subscriber Pricing. Hands-on practice, peer review, and guided implementation.

Tip: use coupon code SUBSCRIBER20 at checkout to apply the subscriber discount.