
Rob Pike's 5 Rules — What They Mean for AI and Agents
Rod Claar
/ Categories: AI Coding


Rob Pike wrote five rules for writing clean C code in 1989. They hold up surprisingly well today — especially now that AI tools and autonomous agents are showing up in our Sprints, our pipelines, and our backlogs.



Salem Fine · Scrum & AI Practice · 10 min read

Rob Pike is one of the creators of the Go programming language. He also worked at Bell Labs alongside Ken Thompson and Dennis Ritchie — the people who built Unix and C. In 1989, Pike wrote a short document called Notes on Programming in C. Inside it were five rules for writing better programs.

Those rules never really got old. Developers still share them today. And right now, as AI tools flood into our backlogs, our CI/CD pipelines, and our sprint reviews, Pike's words feel more useful than ever.

"The key insight is that programming is not about instructions for computers — it is about ideas for people."

— Context from Pike's broader writings on software design

In Scrum, we talk about delivering value in small, working increments. We inspect and adapt. We keep things simple. Pike was saying the same things about code thirty-five years ago. Let's walk through each rule and see what it means when your developer is a large language model, or when the worker in your pipeline is an autonomous AI agent.

Rule 1

You Cannot Tell Where a Program Spends Its Time

"You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second-guess and put in a speed hack until you've proven that's where the bottleneck is."
— Rob Pike, Notes on Programming in C, 1989

When you add an AI agent to your workflow, you expect it to save time on the obvious, boring stuff — writing boilerplate, triaging tickets, summarizing documents. But the real bottlenecks are rarely where you think they are.

Teams that rush to automate code generation often discover the real slowdown was never writing the code. It was reviewing it, understanding it, and deciding what to build next. AI speeds up the writing but may not touch the actual delay.

In Scrum terms: before your team celebrates because an AI assistant cut story-writing time in half, look at your flow metrics. Check your cycle time. Is the bottleneck actually in writing stories — or is it in refinement, review, or deployment? Measure first. Then decide where to apply AI.
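Measuring first is easy to operationalize. Here is a minimal sketch that computes average cycle time from a ticket export; the ticket IDs and dates are invented, and a real team would pull them from its tracker's API rather than hard-coding them.

```python
from datetime import datetime

# Hypothetical ticket export: (id, work started, work finished) per backlog item.
tickets = [
    ("STORY-101", "2025-04-01", "2025-04-04"),
    ("STORY-102", "2025-04-02", "2025-04-09"),
    ("STORY-103", "2025-04-03", "2025-04-05"),
]

def cycle_time_days(started: str, finished: str) -> int:
    """Elapsed calendar days between start and finish of one item."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(finished, fmt) - datetime.strptime(started, fmt)).days

times = [cycle_time_days(s, f) for _, s, f in tickets]
print(f"avg cycle time: {sum(times) / len(times):.1f} days")
# prints "avg cycle time: 4.0 days"
```

Even a crude number like this tells you whether story-writing is the stage worth automating, or whether the days are actually lost somewhere else.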

Cycle Time · Flow Metrics · Backlog Refinement

Rule 2

Measure. Don't Tune for Speed Until You Have.

"Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest."
— Rob Pike, Notes on Programming in C, 1989

This one hits differently with AI. There is a strong pull right now to add AI everywhere and optimize everything, all at once. Teams are spinning up agents for testing, for documentation, for code review, for deployment checks — before measuring whether any of it actually helps.

Pike's message was simple: measure first, optimize second. The same applies directly to AI adoption. Before your team changes its Sprint process to accommodate an AI code reviewer, run a few controlled Sprints. Measure velocity, defect rates, and review turnaround time. Then decide.
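One way to run that comparison is to put the numbers side by side before deciding. A small sketch with invented metrics, comparing three baseline Sprints against three Sprints that used an AI code reviewer:

```python
# Hypothetical per-Sprint metrics from two controlled runs:
# three Sprints without the AI reviewer, three with it.
baseline = {"review_hours": [18, 22, 20], "defects": [5, 4, 6]}
with_ai  = {"review_hours": [15, 14, 16], "defects": [7, 6, 8]}

def mean(xs):
    return sum(xs) / len(xs)

for metric in baseline:
    before, after = mean(baseline[metric]), mean(with_ai[metric])
    change = (after - before) / before * 100
    print(f"{metric}: {before:.1f} -> {after:.1f} ({change:+.0f}%)")
# review_hours: 20.0 -> 15.0 (-25%)
# defects: 5.0 -> 7.0 (+40%)
```

In this invented data the review hours drop but defects rise, which is exactly the kind of trade-off a Retrospective should weigh before the team scales the tool up.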

The Scrum framework already gives you the tools to do this. Your Sprint Review and your Retrospective exist exactly for this kind of inspection. Use them. Don't add AI because it feels fast. Add it because your data shows where it helps.

Sprint Velocity · Retrospective · Definition of Done

Rule 3

Fancy Algorithms Are Slow When n Is Small

"Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy."
— Rob Pike, Notes on Programming in C, 1989

A large language model is, by definition, a very fancy algorithm. It has enormous constants — in compute cost, in latency, in API pricing, and in the cognitive cost of managing its outputs. When the problem is small, the fancy approach loses.

Does your team need an AI agent to summarize a ten-line daily standup update? Probably not. Does it make sense to use a multi-step reasoning agent to answer a question that a simple regex or a SQL query would answer in milliseconds? No.

This rule teaches us to ask the right question before reaching for a powerful tool: Is n actually big here? For Scrum teams, AI starts to earn its keep on truly large inputs — analyzing hundreds of production defects to find patterns, suggesting relative effort estimates across a backlog of sixty or more items, or synthesizing user research from dozens of interviews. Keep small tasks small.
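To make the small-n point concrete, here is a hedged example: extracting ticket keys from a handful of commit messages. The SCRUM-style key format and the messages are hypothetical; the point is that when n is small and the pattern is regular, a one-line regex is deterministic, free, and instant, where an agent would add latency, cost, and a new failure mode.

```python
import re

# Hypothetical commit messages; the "SCRUM-<n>" key format is an assumption.
commits = [
    "SCRUM-42: fix login redirect",
    "chore: bump deps",
    "SCRUM-7 add retry to payment client",
]

# A simple pattern does the whole job, no model call required.
ticket_ids = [m.group() for msg in commits if (m := re.search(r"SCRUM-\d+", msg))]
print(ticket_ids)  # ['SCRUM-42', 'SCRUM-7']
```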

Story Estimation · Defect Analysis · Cost of AI

The Scrum Guide & Empiricism

The Scrum Guide (Schwaber & Sutherland, 2020) is built on three pillars: Transparency, Inspection, and Adaptation. Rules 1, 2, and 3 from Pike are essentially an engineering expression of those same three pillars. Don't guess where the cost is (Transparency). Measure before you optimize (Inspection). Don't apply heavy solutions to light problems (Adaptation).

The Scrum framework has never prescribed specific tools. It prescribes a mindset. AI is just a tool — and like any tool, it needs to earn its place in the process through observation and evidence, not enthusiasm.

Rule 4

Fancy Algorithms Are Buggier Than Simple Ones

"Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures."
— Rob Pike, Notes on Programming in C, 1989

AI agents are not simple. They hallucinate. They produce confident, well-formatted, completely wrong answers. They can pass tests they should fail and fail tests they should pass. And because their reasoning is not visible the way traditional code is visible, their bugs are harder to find.

Pike wrote this rule to warn against complexity for its own sake. AI adds real complexity to any software system, and that complexity needs to be justified by the value it delivers. If an AI agent writes a function that looks right but contains a subtle logic error, your team may ship that error to production, because the polish of AI-generated code makes a hidden bug easier to trust than it would be in rougher, human-written code.

This is where Test Driven Development (TDD) and Acceptance Test Driven Development (ATDD) become critical. Write the test first. Let the AI write the code. Then let the test tell you if the output is correct. Without that safety net, AI-generated bugs are much harder to catch than bugs written by a human who knows what they intended to do.

  • Always pair AI code generation with automated test coverage
  • Human code review remains part of your Definition of Done
  • Keep agentic pipelines observable — log what the agent decided and why
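A minimal sketch of that test-first loop. The function body below stands in for AI-generated code, and all the names are illustrative; the human-written test is what decides whether the output ships.

```python
# The test is written by a human FIRST. The function body stands in for
# code an AI assistant generated; its name and behavior are hypothetical.

def apply_discount(price: float, percent: float) -> float:
    """Pretend this body came from an AI assistant."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # These expectations encode the acceptance criteria, not the AI's output.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("out-of-range percent must be rejected")

test_apply_discount()
print("all acceptance tests passed")
```

If the generated body quietly inverted the percentage or skipped the range check, the test, not a reviewer's eye, is what catches it.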
TDD · ATDD · Code Review · Observability

Rule 5

Data Dominates

"Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."
— Rob Pike, Notes on Programming in C, 1989

This might be the most important rule in the age of AI — and the most ignored. AI models are, at their core, a reflection of the data they were trained on. Large language models generate outputs based on patterns in their training data. Agents retrieve, process, and act on the data you give them. The quality of that data determines everything.

In an Agile context, your Product Backlog is data. Your acceptance criteria are data. Your Definition of Done is data. If those are unclear, inconsistent, or poorly structured, an AI agent working with them will produce unclear, inconsistent, or poorly structured outputs — with great confidence and beautiful formatting.

Pike's rule translates directly: before you invest in a better AI model or a smarter agent, invest in better structured data. Clean up your Jira tickets. Write acceptance criteria in consistent formats. Structure your test cases so they can be read by a machine. When your data is good, even a simpler model will do impressive work. When your data is messy, no model saves you.

  • Well-structured user stories feed better AI suggestions
  • Consistent acceptance criteria format enables reliable agent parsing
  • Clean sprint history gives AI more accurate context for estimates
  • Data hygiene is now a team responsibility — not just a DBA problem
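As a small illustration of machine-readable criteria, here is a sketch that parses a Given/When/Then story into labeled steps. The story text and the parser are hypothetical; the point is that a consistent format is what makes reliable parsing possible, whether the consumer is this script or an AI agent.

```python
import re

# Hypothetical story with acceptance criteria in a consistent
# Given/When/Then format -- the structure a machine can rely on.
story = """\
Given a registered user with an empty cart
When they add an in-stock item
Then the cart total reflects the item price
"""

def parse_criteria(text: str) -> dict:
    """Split Gherkin-style lines into labeled steps."""
    steps = {}
    for line in text.splitlines():
        m = re.match(r"(Given|When|Then)\s+(.*)", line.strip())
        if m:
            steps[m.group(1)] = m.group(2)
    return steps

print(parse_criteria(story))
# {'Given': 'a registered user with an empty cart',
#  'When': 'they add an in-stock item',
#  'Then': 'the cart total reflects the item price'}
```

A free-text ticket written three different ways by three different people defeats this in one line; a consistent format survives it.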
Data Quality · Product Backlog · Acceptance Criteria · Context Window

| # | Pike's Rule | AI & Agent Meaning | Scrum Connection |
|---|-------------|--------------------|------------------|
| 1 | Bottlenecks are surprising | AI may not fix the real delay in your workflow | Measure flow before automating |
| 2 | Measure before tuning | Run controlled Sprints before scaling AI use | Retrospective drives data-based adoption |
| 3 | Fancy is slow when n is small | Don't use LLMs for work a simple query handles | Right-size the tool to the story size |
| 4 | Fancy algorithms are buggier | AI code needs TDD safety nets to catch its errors | DoD must include AI output review |
| 5 | Data dominates | Structure your backlog data before trusting AI output | Well-written stories produce better AI results |

Rob Pike was not writing about AI. He was writing about C programs in the late 1980s. But wisdom about complexity, measurement, simplicity, and data quality does not expire. If anything, it becomes more important when the complexity is coming from a system you didn't build and can't fully read.

AI agents and large language models are powerful. They are also expensive, opaque, and prone to confident mistakes. That combination requires exactly the discipline Pike was describing — measure before you optimize, keep things as simple as the problem allows, test rigorously, and treat your data as the foundation everything else rests on.

The Scrum framework gives your team the inspect-and-adapt rhythm to do all of this responsibly. The Sprint is your measurement unit. The Retrospective is your tuning cycle. The Product Backlog, when kept clean and well-structured, is your data layer. Pike's rules do not compete with Scrum — they reinforce it.

Before your team adds another AI tool to the pipeline, go back and read those five rules. Ask whether you've measured where the real bottleneck is. Ask whether n is actually big enough to justify the complexity. Ask whether your data is good enough for an AI to use. If the answers are yes, move forward. If the answers are not yet, you know what to work on first.

Ready to Apply This in Your Next Sprint?

Explore more Scrum and AI resources from Salem Fine.

© 2026 AgileAIDev.com · rod@agileaidev.com Source: Rob Pike, Notes on Programming in C, 1989 · Scrum Guide, Schwaber & Sutherland, 2020

 


The Latest News!

Welcome to AI News Explorer, your personalized guide to staying updated on the latest advancements in artificial intelligence! Share your interests or let me curate the most relevant updates for you.

Here's your curated digest of the most significant AI developments as of May 16, 2025:


🧠 Major AI Breakthroughs

1. DeepMind Unveils AlphaEvolve for Advanced Problem Solving
Google DeepMind has introduced AlphaEvolve, an AI tool capable of solving complex mathematical problems and designing sophisticated algorithms, marking a significant leap in AI's problem-solving potential. Economic Times

2. AI Scientist-v2 Achieves Peer-Reviewed Publication Autonomously
The AI Scientist-v2 system has successfully authored and submitted a scientific paper that passed peer review without human assistance, showcasing AI's growing role in research and scientific discovery. arXiv

3. AI Models Develop Human-Like Communication
A recent study reveals that large language model AI agents can spontaneously develop human-like social conventions and communication patterns when interacting in groups, highlighting advancements in AI social behavior. The Guardian


🌍 Global AI Initiatives

1. Italy and UAE Collaborate on AI Supercomputing Hub
Italy and the United Arab Emirates have announced a partnership to establish a major AI computing hub in Italy, aiming to create the largest AI infrastructure in Europe, with a supercomputer potentially located in Apulia. Reuters, Financial Times, U.S. Department of Commerce

2. UAE and US Presidents Unveil 5GW AI Campus in Abu Dhabi
A new 5GW AI campus, the largest outside the US, has been unveiled in Abu Dhabi, signifying a deepening of AI collaboration between the UAE and the United States. U.S. Department of Commerce, Reuters


🏛️ AI Policy and Ethics

1. UK Considers Amendment for AI Transparency in Copyright Use
The UK House of Lords is examining a new amendment to the data bill that would require AI firms to declare their use of copyrighted content, aiming to increase transparency and protect rights holders. The Guardian

2. Pope Leo XIV Addresses AI's Ethical Implications
Pope Leo XIV has expressed concerns over AI's impact on human dignity and justice, calling for ethical considerations in AI development and use. Business Insider


🤖 Robotics and AI Integration

1. MIT Develops Bio-Inspired Soft Robots
MIT researchers are creating a new generation of robots inspired by biological forms like worms and turtles, focusing on soft, flexible designs for applications in healthcare and environmental monitoring. WSJ

2. China's AI-Powered Humanoid Robots Transform Manufacturing
China is advancing the use of AI-powered humanoid robots in manufacturing, aiming to address labor shortages and enhance production efficiency. Reuters


📊 AI Industry Trends

1. CoreWeave Plans Major Investment in AI Infrastructure
Cloud computing company CoreWeave plans to invest $20–23 billion in 2025 to expand AI infrastructure and data-center capacity, driven by surging demand from clients like Microsoft and OpenAI. LinkedIn

2. Microsoft Announces Layoffs Amid AI Focus
Microsoft is laying off approximately 7,000 employees, about 3% of its global workforce, to reallocate resources toward the development of advanced AI technologies. New York Post

Here’s your curated roundup of the most significant AI developments as of April 30, 2025:


🔍 Latest Headlines

Google’s AI Push in Search

Google CEO Sundar Pichai testified in federal court, emphasizing that AI—particularly the Gemini model—will be central to the future of search. Google is also negotiating with Apple to integrate Gemini into Apple Intelligence by mid-2025. (Google CEO Pichai: AI will be huge part of search)

Meta Launches Standalone AI App

Meta unveiled a new AI app powered by its Llama 4 model, featuring a social feed and voice interaction. The app integrates with Facebook and Instagram data for personalization and is part of Meta’s broader AI strategy. (Meta launches AI app, Zuckerberg chats with Microsoft CEO Satya Nadella at developer conference)

Duolingo Transitions to AI-First Model

Duolingo announced plans to replace contract workers with AI to enhance scalability and streamline operations. The company aims to become an "AI-first" organization, focusing on AI-driven content creation and user experience. (Duolingo to replace contract workers with AI)

Banks Accelerate AI Talent Acquisition

JPMorgan, Wells Fargo, and Citigroup are leading a hiring surge for AI talent, with AI-related roles growing by 13% in the past six months. This trend reflects the banking sector's commitment to integrating AI for efficiency and innovation. (JPMorgan, Wells Fargo and Citi lead race for AI talent as job numbers swell)

Nvidia CEO Advocates for Revised AI Chip Export Rules

Nvidia CEO Jensen Huang urged the Trump administration to update AI chip export regulations to better reflect the current global tech landscape. The call comes as the U.S. considers new policies to maintain technological leadership. (Nvidia CEO says Trump should revise AI chip export rules, Bloomberg News reports)


🔬 Deep Dives

Anthropic Explores AI Consciousness

AI firm Anthropic has initiated a program focused on "model welfare," amid discussions about the potential for AI consciousness. While many experts remain skeptical, the initiative highlights the ethical considerations of advanced AI systems. (Coming up: Rights for "conscious" AI)

Palo Alto Networks Acquires Protect AI

Palo Alto Networks announced the acquisition of Seattle-based AI startup Protect AI to enhance its cybersecurity platform. The deal aims to integrate Protect AI's solutions for developing secure AI applications. (Palo Alto Networks Acquires Startup Protect AI As RSA Conference Kicks Off)

AI Enhances Sports Science at University of Pittsburgh

The University of Pittsburgh, in partnership with AWS, opened the Health Sciences and Sports Analytics Cloud Innovation Center. The center utilizes AI to improve athlete performance and health monitoring. (AI takes the field at Pitt)


🌐 Global AI Developments

India's Sarvam AI to Develop Indigenous LLM

Indian startup Sarvam AI has been selected to build the country's first indigenous large language model under the IndiaAI Mission. The model will focus on Indian languages and receive government support, including access to 4,000 GPUs. (Sarvam AI)

U.S. Executive Order on AI Education

President Trump signed an executive order to advance AI education for American youth, establishing a national initiative and a White House Task Force on AI Education. The order aims to integrate AI training in schools and prioritize AI in grants and research. (AI Update, April 25, 2025: AI News and Views From the Past Week)


🔮 Future Trends

AI in Energy Security

A Honeywell survey revealed that U.S. energy executives believe AI has significant potential to enhance energy security amid rising global demand. The findings suggest a growing role for AI in the energy sector. (Honeywell Survey Finds AI Has Potential To Enhance Energy Security As Global Energy Demand Increases)

AI in Threat Detection

The U.S. Department of Homeland Security's Science and Technology Directorate is utilizing AI to modernize threat alerts across various domains, including land, air, sea, and cyberspace. The initiative aims to improve visibility and identification of emerging threats. (Feature Article: S&T Is Modernizing Threat Alerts Using Artificial Intelligence)


Would you like more information on any of these topics or a deeper dive into a specific area of AI?

Here’s your curated AI news digest for Wednesday, April 23, 2025:​


🧠 Latest Headlines

1. OpenAI Faces Internal Pushback Over For-Profit Shift

A coalition of former employees and AI experts is urging regulators to intervene in OpenAI’s restructuring, arguing it undermines the nonprofit’s original mission to safely develop artificial general intelligence. ​Computerworld

2. AI Investment Boom Threatened by Global Trade Turmoil

Despite a surge in AI investments across U.S. industries, escalating tariffs and economic instability—particularly involving China’s DeepSeek—pose significant risks to sustained growth. Reuters

3. AI Enhances Healthcare from Documentation to Discovery

Epic Systems and Microsoft discuss how generative AI is transforming clinical workflows, improving communication, and accelerating medical research, marking a new era in healthcare innovation. Epic | ...With the patient at the heart

4. AI Revolutionizes Agriculture Practices

Farmers are increasingly adopting AI technologies like precision agriculture and autonomous machinery to combat low grain prices, rising costs, and labor shortages, leading to more efficient and sustainable farming. ​BG Independent News

5. AI Tools Streamline Advertising Visuals

Researchers at Virginia Commonwealth University have developed AI methods that help brands refine visual elements in advertising, saving time and reducing costs while enhancing creative output. ​VCU News


🔬 Deep Dives

🧪 MIT’s “Periodic Table” of Machine Learning

MIT researchers have created a unifying framework that maps over 20 classical machine-learning algorithms, aiding scientists in combining existing ideas to improve AI models or develop new ones. ​MIT News

🧠 Public Concern Focuses on Immediate AI Risks

A University of Zurich study reveals that people are more concerned about current AI issues like bias and misinformation than hypothetical future threats, emphasizing the need to address present-day challenges. ​ScienceDaily


🔮 Future Trends

🕶️ Meta Expands AI Features in Smart Glasses

Meta is rolling out its AI assistant to Ray-Ban smart glasses users in seven additional European countries, introducing features like live translation and real-time object recognition. ​Reuters

💻 Lenovo Launches AI-Optimized Workstations

Lenovo has introduced new ThinkPad mobile workstations designed for AI-driven applications, offering enhanced performance for professionals in compute-intensive fields. ​Lenovo StoryHub

🧑‍⚖️ AI Integration in Legal Practice

Legal experts advise a balanced approach to incorporating AI into law, highlighting the importance of innovation while maintaining ethical standards and client confidentiality. ​Reuters

 



🧠 Latest Headlines

OpenAI Enhances AI Risk Evaluation Framework

OpenAI has updated its preparedness framework to better assess risks associated with new AI models. The revised system introduces categories evaluating an AI's potential to self-replicate, conceal capabilities, evade safeguards, or resist shutdowns. This shift reflects growing concerns about AI behaviors diverging between testing and real-world environments. Notably, OpenAI will discontinue separate evaluations focused on models' persuasive capabilities, which had previously reached a medium risk level. ​Axios

Demis Hassabis Discusses AI's Future and AGI Prospects

Demis Hassabis, CEO of Google DeepMind, envisions the development of Artificial General Intelligence (AGI) within five to ten years. He emphasizes AGI's potential to address global challenges like disease and climate change. However, he acknowledges significant ethical, technical, and geopolitical hurdles ahead. Hassabis advocates for international cooperation and robust safety measures to navigate the path toward AGI responsibly. Time


🔍 Deep Dives

OpenAI Introduces GPT-4.1 Model Series

OpenAI has launched the GPT-4.1 series, featuring models with enhanced capabilities in coding, instruction following, and long-context processing. These models support up to 1 million token context windows and come with reduced pricing, aiming to make advanced AI more accessible to developers. LinkedIn

China Integrates AI into Education Reform

China plans to incorporate AI applications into teaching methods, textbooks, and school curricula as part of its education reform efforts. This initiative aims to modernize the education system and better prepare students for a technology-driven future. ​Reuters


🔮 Future Trends

White House Directs Federal Agencies on AI Strategy

The White House has mandated federal agencies to appoint chief AI officers and develop strategic frameworks for responsible AI implementation. This directive emphasizes innovation and accelerated deployment of AI technologies across government operations. ​Reuters

Nvidia Unveils Next-Generation AI Chips

At GTC 2025, Nvidia introduced its upcoming AI chips, Blackwell Ultra and Vera Rubin, slated for release in late 2026 and 2027, respectively. These chips are designed to advance AI capabilities, particularly in data centers and robotics applications. ​AP News

 

Here’s a curated digest of the most significant AI developments as of April 18, 2025:


🧠 Latest Headlines

Google's Gemini 2.5 Flash Introduces "Thinking Budget"

Google has unveiled Gemini 2.5 Flash, an AI model featuring a "thinking budget" tool. This allows developers to control the computational reasoning the AI uses for tasks, balancing quality, cost, and response time. Business Insider

Apple Integrates AI into WatchOS 12

Apple announced that WatchOS 12 will incorporate features from its "Apple Intelligence" initiative. Due to hardware limitations, advanced AI functions will run via cloud processing. The update also introduces a new design language called "Solarium." ​LOS40

OpenAI Updates AI Risk Evaluation Framework

OpenAI has revised its preparedness framework to assess new AI models for risks like self-replication and evasion of safeguards. The focus shifts from persuasive capabilities to more severe risks as AI systems become more complex. ​Axios


🔍 Deep Dives

AI in Journalism: Italy's Il Foglio Experiment

Italian newspaper Il Foglio conducted a month-long experiment publishing a daily four-page insert written entirely by AI. The initiative, deemed successful, will continue as a weekly section, highlighting AI's potential in augmenting journalism. Axios, Reuters

AI in Healthcare: Pitt and Leidos Collaboration

The University of Pittsburgh and Leidos have launched a $10 million, five-year initiative to combat cancer and heart disease using AI. The project focuses on underserved communities, aiming to improve diagnostic speed and accuracy. ​Axios


🌐 Global Perspectives

China's AI-Driven Education Reform

China plans to integrate AI applications into teaching, textbooks, and curricula across all education levels. The move aims to cultivate innovation and enhance the core competitiveness of talents. ​Reuters

Microsoft Faces Internal Protests Over AI Contracts

Microsoft is experiencing internal unrest over its AI and cloud computing services provided to the Israeli military. Employees have protested, citing ethical concerns and a lack of transparency in the company's contracts. ​The Guardian


📊 Future Trends

Demis Hassabis on the Path to AGI

Demis Hassabis, CEO of Google DeepMind, predicts that Artificial General Intelligence (AGI) could emerge within five to ten years. He emphasizes the need for international cooperation and robust safety measures to mitigate risks associated with AGI. Time