
2 Apr 2026

Why Your AI Agent Fails 97.5% of Real Work — And the Fix Isn't More Code

Author: Rod Claar  /  Categories: Agent Teams, AI Learning Path

You built the agent. You wired up the tools. You wrote the prompts. You watched demos that made it look magical.

And then you pointed it at real work — and it fell apart.

Sound familiar? You are not alone. Smart engineers with good models and ambitious roadmaps are running into the same wall every week. But here's the thing: the reason agents fail isn't the model, and it almost certainly isn't the code. It's where you started.

That's the core argument that AI strategist and former Amazon Prime Video product leader Nate B. Jones makes in his compelling YouTube video, "Your AI Agent Fails 97.5% of Real Work. The Fix Isn't Coding." And after 30+ years in software development — and several years of watching teams struggle to get real ROI from AI — I think he's exactly right.


The 97.5% Problem

Here's the insight that reframes everything: when people picture "real work," they picture the judgment calls, the expertise, the decisions that require deep domain knowledge. That's the core — and yes, AI struggles there. But that core only makes up a tiny fraction of what work actually consists of.

The other 97.5%? It's the stuff surrounding the core. The prep work. The cleanup. The formatting. The handoffs. The synthesis. The packaging. The QA passes before the output goes anywhere meaningful.

Nate calls this the edges of a workflow — and they're where AI can win right now, today, without waiting for the models to get smarter.

The trap that burns most teams is what Nate calls core-first automation — launching straight at the most valuable, most complex, most judgment-heavy part of the workflow. That's exactly where AI is most likely to fail, and it's exactly where organizational trust is hardest to earn. Three months in, the project stalls, leadership gets frustrated, and the humans who were supposed to benefit have completely checked out.

Meanwhile, down the hall, another team automated three "boring" tasks and freed up 30% of their week.


What the Edges Actually Look Like

Nate identifies five categories of edge work that surround almost any workflow you can name:

1. Data Preparation — Cleaning, normalizing, formatting, and staging inputs before the real work begins. This is mechanical, time-consuming, and almost always handled by a skilled person who could be doing something better.

2. Quality Assurance — First-pass reviews, checklists, format validation, completeness checks. Not the final human judgment call — the triage that happens before the human ever looks at something.

3. Synthesis — Pulling together information from multiple sources into a structured format that a decision-maker can actually use. Summarizing meeting notes. Compiling status updates. Rolling up data from three different systems.

4. Handoffs — The work of passing something from one person or team to another. Routing, tagging, formatting for the next stage, writing the context note so the next person isn't starting from scratch.

5. Packaging — Taking completed work and getting it ready to go out the door. Reports, communications, exports, notifications, archiving.

None of these feel glamorous. That's exactly why they never get prioritized for automation — even though they consume enormous amounts of time and generate almost no unique human value.


Why Core-First Automation Fails

There's a structural reason why teams keep trying to automate the core first — and keep failing.

The core is where the stakes are highest and where the narrative value is most obvious. "We automated contract review!" or "Our agent handles underwriting decisions!" sounds compelling in a roadmap presentation. "We automated the data formatting step before contract review" sounds like a waste of everyone's time.

But Nate's observation — backed by what practitioners are seeing in the field — is that organizations are not just technical systems. They're trust systems. When you try to put AI into the most sensitive, most visible, most expert-dependent part of a workflow, you immediately trigger every skeptic in the room. One failure, one wrong output, and the whole initiative is done.

Edge-first automation solves this. When you start with the mechanical stuff surrounding the core, a few things happen:

  • The AI is operating on work that is lower-stakes and easier to verify
  • Humans see results fast — days, not months
  • Trust builds incrementally as the system proves itself
  • You earn the right to eventually move toward the core

This is an organizational trust exercise, not a technical project. The code is the easy part. The humans are the hard part.


The Connection to Agile and Scrum Values

As a Scrum practitioner and trainer, I can't help noticing how perfectly this maps to the Agile mindset.

Start small, deliver fast, inspect and adapt. Don't build the entire system before you know if it works. Don't automate the most complex cases before you've proven the approach on simple ones. Don't ask for organizational trust before you've demonstrated value.

The teams that fail at AI automation are making the same mistake that teams made before Scrum rescued them from waterfall: they're trying to deliver everything at once, on an ambitious timeline, targeting the hardest problem first.

The teams winning with AI agents are running sprints against the edges. They ship something that works in two weeks. They build trust. They expand from there.

Empiricism matters here just as much as it does in product development. You can't know in advance which edges are the highest-value targets. You have to go look. Nate's framework gives you a structured way to do that — map the workflow, find the friction, find where time and energy disappear, and that's where you start.


What This Looks Like by Role

Nate's framework isn't just abstract. It applies differently depending on where you sit in an organization.

For Engineering Leads: The edges in your world are code review prep, ticket formatting, documentation drafts, test case generation templates, and release note compilation. These don't require AI to understand your architecture — they just need to handle structured, repetitive text work that your engineers are spending real hours on every sprint.

For Product Managers: Your edges are meeting synthesis, status rollup, backlog grooming prep, stakeholder update formatting, and competitive research aggregation. An AI agent that reads your notes and produces a clean standup summary is not science fiction — it's a weekend project.

For Sales and Customer Success: Your edges are call summary generation, follow-up email drafts, CRM data entry, renewal risk flagging, and documentation of customer-specific requirements. These are exactly the tasks that your best people hate most and that consume the most non-selling hours in the week.

For Operations and Business Analysts: Your edges are report compilation, data normalization, exception flagging, process documentation, and handoff note generation. These are almost universally done manually, at high cost, and at a level of detail that makes human attention feel genuinely wasted.

The pattern is consistent across every role: the core is judgment. Everything around it is mechanical. Start with the mechanical.
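As one concrete illustration of how small this edge work can be: the standup summary mentioned for Product Managers mostly reduces to assembling raw notes into a well-structured request for a model. The sketch below shows only that assembly step; the function and field names are hypothetical, and the actual model call is out of scope here.

```javascript
// Illustrative sketch only: turns raw meeting notes into a single,
// structured prompt an agent could send to an LLM. All names are
// hypothetical, not from any particular tool.
function buildStandupPrompt(notes) {
  const body = notes
    .map((n) => `- [${n.author}] ${n.text}`)
    .join("\n");
  return [
    "Summarize the notes below as a standup update with three sections:",
    "Done, In Progress, Blockers. Be concise and keep people's names.",
    "",
    "Notes:",
    body,
  ].join("\n");
}

const prompt = buildStandupPrompt([
  { author: "Ana", text: "Finished checkout API; starting load tests." },
  { author: "Raj", text: "Blocked on staging credentials." },
]);
console.log(prompt);
```

The point is not the code, which is trivial; it's that the mechanical edge (collecting, ordering, and formatting the notes) is fully automatable today, while the judgment (what to escalate) stays with the human.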


The Path Inward

Here's what nobody tells you when they pitch edge-first automation: it's not a consolation prize. It's a strategy.

When you automate edges successfully, you do three things:

First, you free up the humans who were doing that work so they can apply more of their energy to the core. The core gets better because the people doing it are less depleted.

Second, you generate operational data about how the core actually works — because now the inputs and outputs are cleaner, more structured, and easier to observe. That data is exactly what you need to eventually automate parts of the core.

Third, you build the organizational muscle and trust infrastructure to attempt harder automation challenges. Teams that have shipped three edge automations are dramatically better positioned to tackle something closer to the core than teams attempting their first agent project.

Nate is explicit that edge-first is not the end state. It's the path to eventually touching the core — but on a foundation of demonstrated value, accumulated trust, and real operational knowledge.


The Readiness Question

Before you move inward toward the core, Nate's framework suggests an honest self-assessment. Have you genuinely proven the edges? Is leadership seeing real time savings? Are the humans who were doing that work now spending their time on higher-value activity? Do you have a track record of catching and correcting AI errors at the edges before they propagate?

If the answer to any of those is no, you're not ready to move toward the core — not because the technology isn't there, but because the organizational infrastructure isn't.

This is, again, deeply Agile. You don't scale until you've proven the pattern. You don't accelerate until you've stabilized.


What to Do Monday Morning

If you're sitting on an AI agent project that's stalled — or you're about to start one — here's a practical first move:

Map your workflow and find where time goes before and after the judgment calls.

Don't ask "what can AI do in this workflow?" Ask "where does work pile up, slow down, or get lost in translation?" That's the edge. That's where you start.

Pick one. Build something shippable in a week, not a quarter. Get it in front of the people who will use it. Let them break it, improve it, and start trusting it.

Then expand from there.

The model isn't the problem. The prompts aren't the problem. The place you started is the problem — and that's entirely within your control.


Final Thought

The title of Nate's video is provocative on purpose: "Your AI Agent Fails 97.5% of Real Work." But the point isn't that AI agents are bad. The point is that we've been aiming them at the wrong 2.5%.

The teams winning with AI right now aren't the ones with the most sophisticated agents or the most complex architectures. They're the ones who looked honestly at their workflows, found the boring mechanical work hiding in plain sight, and gave the machines something they could actually do well.

That's not a coding problem. That's a strategic problem. And the solution starts with where you look, not what you build.


Want to learn how to apply these principles inside a Scrum team context? Explore the AI-Enhanced Scrum curriculum at AgileAIDev.com — where we combine 30+ years of Agile coaching experience with practical AI implementation strategy.


Credit & Deep Thanks: This article is based on the outstanding YouTube video "Your AI Agent Fails 97.5% of Real Work. The Fix Isn't Coding." by Nate B. Jones, published on his channel AI News & Strategy Daily. Nate is a former Head of Product at Amazon Prime Video, an AI strategist, and one of the most practical, no-hype voices in the AI space today. His Substack newsletter, courses, and daily video breakdowns are an invaluable resource for anyone serious about applying AI to real work. Find Nate at natebjones.com and watch the original video here: https://www.youtube.com/watch?v=awV2kJzh8zk.



🔒 Private • In-Browser • Fast

Prompt Database

A fast, private prompt manager that lives entirely in your browser. Save your best prompts, add tags and categories, version them, and copy with one click. No accounts, no servers, no waiting.

  • Local-first: uses your browser database (IndexedDB)—your prompts stay on your device.
  • Lightning search across titles, tags, categories, and full text.
  • Import/Export JSON for backup or team sharing.
[Screenshot: Prompt Database app showing prompt list, categories, and editor]
Runs locally in your browser. No sign-in required.

Why Prompt Database

Keep your best prompts organized and at your fingertips

Local-First & Private

All data is stored using your browser’s IndexedDB. Nothing leaves your machine unless you export it.

Turbo Search & Tags

Find prompts instantly by title, tags, categories, or full-text content. Create custom fields to fit your workflow.

Versioning & History

Iterate safely. Keep older versions, compare changes, and roll back anytime.
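One simple way versioning like this can work is to treat each save as pushing the previous text onto a history stack. The sketch below is an assumption about the mechanism, not the app's actual storage code.

```javascript
// Hedged sketch of stack-based version history. Each save archives the
// prior text; rollback restores the most recent archived text. Names
// and field layout are hypothetical.
function saveVersion(prompt, newText) {
  return {
    ...prompt,
    history: [...(prompt.history || []), prompt.text],
    text: newText,
    version: (prompt.version || 1) + 1,
  };
}

function rollback(prompt) {
  const history = prompt.history || [];
  if (history.length === 0) return prompt; // nothing to roll back to
  return {
    ...prompt,
    text: history[history.length - 1],
    history: history.slice(0, -1),
    version: prompt.version + 1, // a rollback is itself a new version
  };
}

let p = { text: "v1 draft", version: 1 };
p = saveVersion(p, "v2 draft");
console.log(p.text);           // "v2 draft"
console.log(rollback(p).text); // "v1 draft"
```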

One-Click Copy

Copy to clipboard with smart formatting—ready for ChatGPT, Claude, Gemini, or your custom tools.

Import / Export

Backup your library or share with a team as portable JSON. Optional CSV export for auditing.
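Portable export boils down to serialization. A minimal sketch, assuming the record fields shown elsewhere on this page (the function names and CSV column choice are illustrative, not the app's real API):

```javascript
// Hedged sketch: JSON export for full-fidelity backup, plus one CSV
// line per prompt for auditing. Field handling is illustrative.
function exportJson(prompts) {
  return JSON.stringify(prompts, null, 2);
}

function exportCsv(prompts) {
  const esc = (v) => `"${String(v).replace(/"/g, '""')}"`; // CSV-quote a field
  const header = "id,title,category,tags";
  const rows = prompts.map((p) =>
    [p.id, p.title, p.category, (p.tags || []).join(";")].map(esc).join(",")
  );
  return [header, ...rows].join("\n");
}

const library = [
  { id: "uuid-1", title: "Standup Summary", category: "PM", tags: ["scrum"] },
];
console.log(exportCsv(library));
```

Importing on another machine is the reverse: `JSON.parse` the exported file and write each record back into local storage.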

Works Offline

Open your browser and you’re good to go—even without an internet connection.

How it Works

Runs entirely in your browser using IndexedDB

// Minimal record
{
  "id": "uuid",
  "title": "AI Image for Visual Learning Books",
  "text": "Generate a full-page, professional-quality...",
  "tags": ["image-generation","education","visual-guide"],
  "category": "Image Generation",
  "updatedAt": "2025-10-22T10:00:00Z",
  "version": 7
}
  • IndexedDB provides fast, structured local storage with excellent performance for thousands of prompts.
  • Service Worker (optional) caches the app shell for offline use.
  • Export/Import to JSON ensures portability and team collaboration.
  • No vendor lock-in: your content is yours.

Start in 3 steps

Get your library organized today

  1. Open Prompt Database on AgileAIDev.com.
  2. Create your first Category (e.g., “Writing Tutors”, “System Prompts”, “Image Prompts”).
  3. Add a prompt, tag it, and Copy with one click when you need it.

Use Cases

Built for trainers, teams, and power users

Training & Workshops

Keep curated prompt sets per module. Share an export with students to accelerate practice.

Team Libraries

Standardize high-quality prompts across squads and roles—PM, Dev, QA, Marketing.

Personal Knowledge Base

Store your golden prompts, experiments, and variations in one place—searchable and ready.

FAQ

Answers to common questions

Where is my data stored?

Your prompts are stored in your browser’s local database (IndexedDB) on your device. Nothing is uploaded to a server.

Can I back up or move my prompts to another machine?

Yes. Use Export to save a JSON file and Import it on another device or share with teammates.

Does it work offline?

Yes. After the first load, the app can run offline (if caching is enabled). Your data is already local.

Is there a cost?

Core features are free. Advanced team features (cloud sync, roles) may be added later.

© Effective Agile Development · Built with a privacy-first, local-first philosophy.


Cohort Offer

Subscriber Exclusive • Cohort Offer

Advance From Theory to Mastery

You’ve seen the framework. Now implement it with guidance.

The 6-Week AI Scrum Cohort is a structured, hands-on program designed for experienced Scrum Masters who want to integrate AI into their leadership and delivery practices.

As a subscriber, you receive 20% off cohort tuition.
This is not a webinar. It’s applied learning with peer discussion, real use cases, and guided implementation.

6-Week Cohort • Subscriber Pricing: hands-on practice, peer review, and guided implementation.

Tip: Use coupon code SUBSCRIBER20 at checkout.