A client pinged me on Wednesday afternoon, three days before our sprint review. “I was thinking about it. The model already understands the call transcript. Could we add empathy scoring alongside the compliance check? Not instead of it, in addition.”
It wasn’t an unreasonable request. The model probably could detect empathy markers in conversation. The problem was that we’d built the entire evaluation framework around compliance metrics. Adding a second scoring dimension three days before demo meant either showing an incomplete empathy feature in the review or rushing it and shipping something that would break intermittently.
I didn’t say no. I said, “That’s a strong idea. Can I ask you three questions before I tell you when we can fit it in?”
He said yes.
Those three questions, one five-minute conversation, are how I manage scope creep in AI projects. Not by refusing additions, but by creating a structured pause between the idea and the commitment.
Why AI Projects Invite More Scope Creep
In a standard software build, requirements are set before engineering starts. The client describes what they need, the team estimates, and both sides agree. Scope additions still happen, but the original scope was abstract enough that clients couldn’t immediately see how much more was possible.
AI projects are different. Clients see what the model can do in week two.
The compliance checker we’d built was scoring calls with 94% agreement with human reviewers. The founder saw that demo and immediately thought: if this model understands what a rep said about mandatory disclosures, can it also understand whether the rep sounded apologetic when a customer complained? The capability was visible and real. He wasn’t being difficult. He was being curious.
This is the core tension in AI delivery: demos accelerate trust faster than timelines can absorb new requirements. A client who sees something working in sprint two is not the same client who signed the original scope in week zero. They’ve updated their model of what’s possible, and their requests update with it.
I’ve run about 30 AI development projects in the last two years. Scope creep from demos is the most consistent pattern I’ve seen across all of them. Not bad clients, not vague requirements, not changing priorities. Just the fundamental fact that a model’s performance in week two makes people think of things they didn’t think of in week zero.
The Three Questions
After the first few times I absorbed mid-sprint additions silently and watched the sprint goal slip, I built a five-minute script for every “can we also add X?” conversation.
Question 1: Does this block anything we’re demoing this sprint?
Most additions don’t block the current sprint’s deliverable. They’re extensions, not dependencies. If the answer is no, I say, “Let me scope this properly and add it to the next sprint backlog. I’ll have an estimate for you by end of week.”
That alone redirects about 60% of mid-sprint additions without any conflict. The client asked, I took it seriously, and they know it’s tracked.
Question 2: Is this in addition to, or instead of, something we planned?
This question reveals priority. Clients who say “in addition to” often haven’t thought about what the new work would displace. The conversation that follows is useful: “We have 15 development days left in this sprint and they’re fully committed. If we add empathy scoring, which of these would you be okay pushing to a later sprint?”
Most of the time, the client looks at the sprint plan and says something like, “I didn’t realize the agent dashboard was in this sprint. Keep that. The empathy scoring can wait.”
What clients actually want, in my experience, is for their ideas to be heard and tracked. Not necessarily implemented immediately. Question 2 gives them a way to decide that without feeling like I’m saying no.
Question 3: Can you write it down in one sentence?
This one is subtle. Vague additions are the most dangerous. “Can we add empathy scoring?” is a sentence, but what it means in engineering terms can range from a simple keyword heuristic to a full secondary evaluation pipeline with its own rubric, test cases, and accuracy benchmarks.
Asking clients to write it down does two things: it forces them to be concrete, and it creates an artifact we can scope against. I keep a running document for every project called the Idea Log, a simple list of everything a client has mentioned that isn’t in the current sprint. When the idea is in writing, the client feels heard. When it’s scoped, they have a number. When it moves to sprint planning, it has a history.
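For teams that prefer a lightweight tool over a shared document, the Idea Log can be a few lines of code. A minimal sketch, in Python; the field names and status values here are my own illustration, not a standard from any tool:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# An idea moves through these stages: mentioned, estimated, scheduled, built.
STATUSES = ("logged", "scoped", "planned", "shipped")

@dataclass
class IdeaLogEntry:
    """One client idea, captured in their own one-sentence wording."""
    raised_on: date
    one_sentence: str                       # the client's written description
    status: str = "logged"                  # every idea starts as "logged"
    estimate_days: Optional[float] = None   # filled in once scoped

    def scope(self, estimate_days: float) -> None:
        """Attach an estimate; the idea now has a number."""
        self.estimate_days = estimate_days
        self.status = "scoped"

# Usage (hypothetical entry): the empathy-scoring idea from the opening anecdote.
idea = IdeaLogEntry(date(2024, 5, 1), "Score rep empathy alongside compliance.")
idea.scope(estimate_days=6.0)
print(idea.status, idea.estimate_days)  # scoped 6.0
```

The point isn’t the tooling. It’s that every idea has a date, the client’s own sentence, and a status the client can see.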
The three questions take five minutes. They don’t require negotiation, they don’t create conflict, and they’ve eliminated at least half the mid-sprint scope additions I used to absorb.
How Sprint 0 Sets the Foundation
The three-question script works because there’s a foundation under it: every project we run starts with a written scope document in Sprint 0.
Sprint 0 is a one-week planning phase before we write any production code. It covers tech stack decisions, data access agreements, and the success criteria for each deliverable. The part that matters most for scope management is a two-column table at the bottom of the Sprint 0 brief: In Scope and Out of Scope.
Out of scope isn’t a rejection list. It’s a shared acknowledgment that the things on it are real ideas we’re choosing not to build in this phase. Common items for an AI project:
- Exporting results to a custom PDF format (in scope: JSON output via API)
- Real-time scoring during live calls (in scope: post-call scoring within 30 minutes)
- A mobile interface (in scope: web app only)
When a client says “can we add X?” and X is already on the out-of-scope list, the conversation becomes easy. “We talked about this in Sprint 0 and agreed it was for phase two. Has that priority changed? If yes, let’s discuss what it moves.”
The written record is not about being right. It’s about having a common starting point. Clients’ memories of Sprint 0 conversations tend to be optimistic, so the document isn’t confrontational. It’s clarifying.
This is also where timeline commitments connect to scope. I only commit to sprint timelines after Sprint 0 is locked, and I apply the same discipline to additions that I apply to the original estimate: if I don’t have enough information to scope it, I don’t commit to a number. More on why in this post on timeline commitments.
The Conversation When It Gets Tense
Sometimes a client pushes back. “I know it’s outside what we planned, but this one really matters to us. Can we make it work?”
I’ve learned to be direct here, and most clients respect it.
“I want to make it work too. Here’s what I need from you: tell me what this replaces in the current sprint, or tell me you’re okay extending the timeline by the days we’d need. I can’t do both the original scope and this addition in the same window without one of them being rushed. What would you prefer?”
This gives the client a real decision. Extend the timeline. Swap something out. Or move it to the next sprint. Presenting all three keeps the conversation factual rather than adversarial.
The one phrase I never use: “That’s out of scope.” By itself it sounds like a refusal. “That’s out of scope, but here’s how we handle it” is a different conversation entirely.
Another phrase to avoid: “We’ll figure it out as we go.” That’s how four-week projects become twelve-week conversations. Founders deserve to know what it will take to add something, not just that it can be added eventually.
When Scope Creep Is Actually Good News
Not every scope addition is a problem. Some of them are signals.
When a client asks for more after seeing what we built in sprint two, that usually means the original work landed well. They’re engaged, they’re thinking about what’s next, and they have enough trust in the team to raise ideas even when they know the timing is complicated.
The scope additions I’ve tracked across 30 projects break into roughly three buckets:
Bucket 1 (about 40%): Ideas that go into the backlog and ship in a later sprint. These are good additions that just needed structure. The Atlassian sprint backlog guide covers the formal mechanics of this if you want a framework reference.
Bucket 2 (about 35%): Ideas that get scoped but deprioritized because the client decides they’re not worth the sprint trade-off. These conversations are actually useful. They reveal what the client cares about most when forced to choose.
Bucket 3 (about 25%): Ideas that become a second project. These are the best outcome. A client who starts a second project isn’t a scope-creep problem; they’re a long-term relationship. Three of the second projects I’ve run in the last year came directly from Idea Log items that didn’t fit into the first engagement.
Scope creep managed well becomes pipeline. Absorbed silently, it becomes a broken timeline and a strained relationship.
FAQ
What’s the best way to prevent scope creep before it starts?
Write the out-of-scope section during kickoff. Most teams document what they’ll build but not what they won’t. The out-of-scope list makes implicit agreements explicit and it’s the most useful reference document when additions come up mid-sprint. The discovery call checklist we use to scope AI projects covers how we gather the information for that list before Sprint 0.
How do I estimate a mid-sprint addition quickly enough to give the client a number?
For AI features, I use a three-component estimate: data prep time (how clean is the input data for this new feature?), model iteration time (how many evaluation cycles to hit acceptable accuracy?), and integration time (what does the client’s system need to change to support this output?). I estimate each in days and add 20% for unknowns. For anything longer than 3 days total, I run a proper technical spike before committing to a number.
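That three-component arithmetic is simple enough to sketch. A hedged version in Python; the function name, the dict shape, and the choice to apply the 3-day spike threshold to the buffered total (rather than the raw sum) are my assumptions for illustration:

```python
def estimate_addition(data_prep_days: float,
                      model_iteration_days: float,
                      integration_days: float,
                      buffer: float = 0.20) -> dict:
    """Rough-cut estimate for a mid-sprint AI feature addition.

    Sums the three components, adds a percentage buffer for unknowns,
    and flags anything over 3 days as needing a technical spike before
    a number is committed to the client.
    """
    subtotal = data_prep_days + model_iteration_days + integration_days
    total = subtotal * (1 + buffer)
    return {
        "total_days": round(total, 1),
        "needs_spike": total > 3.0,
    }

# Usage: a small keyword heuristic vs. a full secondary evaluation pipeline.
print(estimate_addition(0.5, 1.0, 0.5))  # {'total_days': 2.4, 'needs_spike': False}
print(estimate_addition(2.0, 4.0, 2.0))  # {'total_days': 9.6, 'needs_spike': True}
```

The value of writing it down, even this informally, is that the client sees the estimate is built from components they can interrogate, not pulled from the air.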
What do you do when a client keeps adding scope every week?
This pattern usually means Sprint 0 wasn’t thorough enough. When it happens, I call a 30-minute reset meeting. Not to revisit everything, but to ask one question: “What does a successful end to this project look like in your mind?” The answer usually reveals that the client has updated their success criteria since kickoff. Getting that on paper and agreeing on it again resets the relationship and typically slows the additions considerably. The PMI’s guide to change control in projects covers the formal process for programs where you need a documented approval chain.
Is it ever okay to absorb a small addition without a change request?
Occasionally. If something takes less than a few hours and doesn’t affect sprint goals, absorbing it quietly builds goodwill. The risk is that it sets a precedent: if the first addition disappears silently into the sprint, the client assumes all additions work that way. I still log every absorbed addition in the Idea Log with a note that we built it without a change request, so the scope record stays accurate.
How do you handle a client who insists something is in the original scope when it isn’t?
I refer to the Sprint 0 document, not to memory. “I want to make sure I understand what you’re describing. Looking at the Sprint 0 brief, compliance scoring is in scope. Empathy scoring is in the out-of-scope section. Is the new request compliance scoring, or is it empathy scoring?” Most of the time, clients are conflating two things. The document separates them without turning the conversation into a debate about who said what.
Scope questions come up on every project, usually by week three. If you’re wondering whether your AI build needs clearer scope management before engineering starts, book a 30-minute discovery call. We’ll tell you what to lock down first.