Software Estimates Are Not Promises – They Are the Operating System of Predictable Delivery
Let me start with a room you probably know well.
It’s the Monday leadership meeting.
Slides are on the TV. Numbers look… okay. Then someone asks the question that freezes the room:
“So, when will this be done?”
Everyone turns to you.
Product glances at Engineering. Engineering glances back at Product. A date is thrown on the table. It’s a mix of optimism, pressure, and a tiny bit of fear.
People nod. The meeting moves on.
And everyone in that room knows there is a good chance that date is wrong.
No one says it out loud.
But you feel it in your stomach.
We build spreadsheets. We color Gantt charts. We negotiate scope. We rewrite Jira tickets until the timelines “fit”. We convince ourselves that this time, with these estimates, things will be different.
And then the pattern repeats:
- The date comes closer.
- The unknowns start to surface.
- Teams work nights and weekends.
- Quality gets traded for speed.
- At the end, somebody is blamed.
It’s exhausting. And worse: it feels dishonest.
Not because people are lying on purpose, but because the game itself is built on a false assumption:
We think software can be managed like a construction project.
We treat estimates like pouring concrete.
Once it’s on paper, it must harden.
No change. No movement. No surprise.
The problem is simple:
software does not behave like concrete.
Software behaves much more like a laboratory.
You walk into the lab with a clear intention:
“I want to cure this disease.”
But you don’t know exactly which sequence of experiments will get you there.
You try something.
You observe.
You adjust.
In a real lab, nobody says:
“Please promise which exact experiment will work by June 17th at 3:42 PM.”
Yet in software, we do that every week.
We promise specific dates and detailed scope in an environment full of moving parts, unknown dependencies, and humans learning as they go. Then we are surprised when reality refuses to obey the plan.
When you zoom out, the pattern becomes clear:
- We use estimates as promises, not as tools for thinking.
- We make them too early, with too little information.
- We lock everything: scope, date, and expectations.
- Then we fight reality instead of updating the plan.
This is why so many teams live inside the same loop:
- Overconfident planning
- Hidden doubts
- Crunch and shortcuts
- Blame and distrust
- Repeat with a new PowerPoint template
The more you tighten the screws on the plan, the more fragile your delivery becomes.
There is another way.
If you look at how modern SaaS companies and big tech teams operate, you see a very different mindset:
- They don’t treat estimates as sacred promises.
- They treat them as signals inside a system.
- They don’t try to remove uncertainty.
- They build an operating system that works with uncertainty.
In that world:
- Scope is flexible.
- Dates are constraints, not commandments.
- Product and Engineering share ownership of outcomes.
- Delivery is continuous, not a single “big launch day”.
- Estimates are hypotheses, refined as the team learns.
Predictability doesn’t come from “better guessing”.
It comes from better systems.
This article is about that shift.
Not a new agile buzzword. Not a fancy estimation technique.
But a deeper belief:
Software estimates are not promises – they are part of the operating system that makes delivery predictable without burning people or destroying product quality.
If you change this belief, everything else starts to move:
- How you plan
- How you slice scope
- How you talk to stakeholders
- How your team feels about deadlines
- How you measure success
In the next sections, I’ll walk through:
- Why the “construction-site” mindset is breaking your teams
- How to see software as a laboratory without losing accountability
- A practical “Estimation OS” you can install in your company
- How Product and Engineering can finally stop fighting and start co-owning reality
If you’ve ever sat in that Monday meeting with a date stuck in your throat, this is for you.
1. Software Is a Laboratory, Not Civil Construction
Let’s keep the same meeting in mind, but change the context.
Imagine you’re talking to an architect building a bridge.
If you ask, “When will it be done?”, they can answer with surprising precision.
Why?
Because almost everything is known:
- The laws of physics are stable
- The materials are standardized
- The environment is relatively predictable
- The blueprint is detailed before concrete touches the ground
Civil construction lives in a world where uncertainty is low and the cost of change is huge.
That’s why it makes sense to lock the plan early and defend it at all costs.
Now compare that to building a new product feature.
You might know the problem in theory.
You might have some user interviews.
You might have a draft of the UX flow.
But you don’t know:
- How users will behave when it’s real
- Which edge cases will appear in production
- Which dependencies will break at the worst possible time
- What your CEO or the market will ask for three weeks from now
You are not just “building something you already understand”.
You are discovering what works while building it.
That is not construction.
That is a laboratory.
In a lab, the game is different.
A scientist doesn’t walk in and say:
“I promise that this exact experiment will cure the disease by June 17th.”
They walk in with something else:
- A clear direction: “We want to cure this disease.”
- A set of hypotheses: “We think this pathway is promising.”
- A sequence of experiments: “We’ll test A, then B, then C, and adjust as we learn.”
Each experiment is designed to reduce uncertainty.
Some will fail. Some will partially work. A few will move the needle a lot.
The scientist is accountable.
But not for a specific outcome on a specific day.
They are accountable for:
- Running good experiments
- Learning fast
- Adjusting direction based on reality
This is what most software teams are actually doing, whether they admit it or not.
They are running experiments on:
- User behavior
- System performance
- Edge cases and failure modes
- Market reactions
The problem is that we are doing lab work while pretending we’re pouring concrete.
Think about the last time your team launched something big.
Before launch:
- The plan looked clean
- The mocks looked beautiful
- The dependencies “seemed” manageable
After launch:
- Users behaved differently than expected
- Metrics told a different story
- A “tiny” edge case became a major incident
- Stakeholders asked, “Can we just add this one more thing?” halfway through
This is normal.
This is what happens when you are discovering.
The lab metaphor doesn’t remove responsibility.
It names the type of work you are actually doing.
Once you accept that you are in a lab, a few things become obvious:
- You cannot know everything upfront
- You will learn things mid-flight
- Some experiments will be wrong
- You must design your process to absorb change without chaos
That’s the core of the new belief:
You are not failing to run a construction project. You are failing to acknowledge that you are running a laboratory.
When you adopt the laboratory mindset, the role of estimates changes completely.
In construction mode:
- An estimate is a promise
- It must be defended
- Change is treated as deviation or failure
In lab mode:
- An estimate is a hypothesis
- It can and should be refined
- Change is treated as information
You move from:
“We commit to deliver Feature X by June 10th.”
To:
“Based on what we know now, we believe this slice of Feature X will take around 6–8 weeks with 80% confidence, assuming these constraints stay stable.”
That small shift in language reflects a deeper shift in attitude:
- You acknowledge uncertainty
- You express confidence instead of illusion
- You keep room for new information
You are still accountable.
But now you’re accountable for running a good system, not for winning a guessing contest.
Let me give you a simple story.
Two teams receive the same mission:
“Build a new onboarding experience that improves activation by 20%.”
Team A: Construction mindset
- Spends weeks writing detailed specs
- Commits to a big-bang launch in 3 months
- Locks scope early (“We need all these steps to be complete.”)
- Treats the estimate as a promise they must hit
What happens?
- Halfway through, they discover a major dependency
- A key designer becomes unavailable for 2 weeks
- The first internal demo shows that users don’t understand the new flow
But the date is already on the CEO’s slide.
So they compress testing, cut corners, and launch something that “technically” matches the spec but doesn’t really move activation.
Everyone is tired. Trust goes down a little bit more.
Team B: Laboratory mindset
- Starts with a clear outcome: “Improve activation by 20% over 3–6 months.”
- Breaks the work into small experiments:
  - Experiment 1: new copy and fewer steps
  - Experiment 2: different success screen
  - Experiment 3: contextual help
- Uses estimates as hypotheses for each slice:
  - “This first experiment looks like 2–3 weeks with current capacity.”
They ship the first experiment early.
The metrics show a small uplift, but not enough. They learn exactly where users are dropping off. They adjust the next experiment accordingly.
Three months later:
- They may have tested 4–5 variations
- Not all were “on time”, because some were cut and replaced
- But the system did what it was supposed to do: move activation in the right direction
Which team is truly more “predictable”?
The one that hit the original date with low impact?
Or the one that iterated their way to the actual outcome?
Seeing software as a lab does not mean giving up structure.
Labs are highly structured environments:
- Clear protocols
- Safety rules
- Defined roles
- Strong documentation
- Reproducible experiments
The difference is where the structure lives.
Instead of trying to make reality obey a fixed plan, you:
- Build structure around feedback loops
- Build structure around how you slice work
- Build structure around how you refine estimates as you learn
- Build structure around how you negotiate scope, risk, and time
This is where the idea of an Estimation OS comes in.
You stop thinking:
“We just need better estimates.”
And start thinking:
“We need a better operating system where estimates, scope, delivery, and feedback work together.”
The metaphor changes everything:
- Construction mindset → control through rigidity
- Laboratory mindset → control through learning and adaptation
In the next section, we’ll zoom into this Estimation OS and break it into practical components: vision, small batches, honest signals, negotiated constraints, and continuous learning.
That’s where predictability really comes from.
2. The Estimation OS: A System for Predictable Delivery
Once you accept that you’re in a lab, the natural next question is:
“Okay, but how do we run this lab without chaos?”
You don’t fix this with a new Jira workflow or a fancier spreadsheet.
You fix it by changing the system around your estimates.
I like to think of it as an Estimation OS – the operating system that quietly runs under everything: planning, delivery, communication, and expectations.
At a high level, this OS does five things:
- Keeps the vision stable and the path flexible
- Works in small batches to keep risk low and learning fast
- Uses estimates as honest signals grounded in history
- Treats scope, time, and quality as negotiated constraints, not illusions
- Runs on feedback loops that make the system smarter over time
Let’s walk through this in a more human way.
A clear destination, a flexible path
Every healthy system starts with a clear “north”.
In software, that’s your vision and outcomes:
- “Increase activation by 20%.”
- “Reduce churn in the first 90 days.”
- “Launch a new billing system without revenue leaks.”
This part is not supposed to change every week.
It’s the direction.
The path, on the other hand, must be flexible.
The old mindset mixes both into one line:
“We’ll improve activation by building exactly this flow, with these features, by this date.”
The Estimation OS separates them:
- Vision: “Improve activation.”
- Path: “Here’s the best sequence of bets we know right now. We reserve the right to change it when reality speaks.”
When you glue vision and path together, any change feels like failure.
When you separate them, change becomes part of being honest.
As a leader, your job is to make the destination non-negotiable and the route negotiable.
If your team knows exactly what “good” looks like, they can adjust the “how” and “when” without losing the plot.
Shrinking the unit of work
Now imagine a lab where each experiment takes 6 months.
If it fails, you lose half a year.
If something breaks in the middle, everything stalls.
If the hypothesis is wrong, you discover that after burning the whole budget.
That’s how many teams still ship software.
Big projects.
Big releases.
Big surprises.
The Estimation OS starts by shrinking the unit of work.
Instead of a 6-month “onboarding revamp”, you aim for:
- 2–3 week slices
- Each slice shippable
- Each slice testing a specific assumption or delivering a small piece of value
So you move from:
“We’ll deliver the whole new onboarding by September.”
To:
“In the next 2–3 weeks, we’ll ship this specific change that tests whether fewer steps improve completion.”
This changes everything:
- Estimates get easier: “2–3 weeks” is more grounded than “7 months”.
- Risk is limited: being wrong on a small piece is survivable.
- Progress is visible: the team sees real movement, not just “in progress” status.
And once your work items are small and roughly similar in size, you unlock a powerful trick:
You can plan by throughput instead of micro-estimating everything.
Something like:
- “We usually ship 5–7 meaningful items per week.”
- “This initiative is around 12 items.”
- “So we’re looking at roughly 2–3 weeks, assuming similar complexity and no major fires.”
You still estimate, but at the system level.
Less guessing. More flow.
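To make that arithmetic concrete, here is a minimal sketch in Python. The item count and the weekly throughput range are invented numbers standing in for your own history, not a recommended pace.

```python
# Rough throughput-based forecast: how many weeks might ~12 similar-sized
# items take if we usually finish 5-7 meaningful items per week?
# The numbers are illustrative; plug in your own history.

def forecast_weeks(item_count: int, weekly_low: float, weekly_high: float) -> tuple[float, float]:
    """Return an optimistic/pessimistic range of weeks, not a single date."""
    optimistic = item_count / weekly_high   # everything flows at our best pace
    pessimistic = item_count / weekly_low   # interruptions, reviews, small fires
    return optimistic, pessimistic

if __name__ == "__main__":
    low, high = forecast_weeks(item_count=12, weekly_low=5, weekly_high=7)
    print(f"Roughly {low:.1f} to {high:.1f} weeks, assuming similar complexity.")
```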
Estimates as signals, not performances
In the construction mindset, estimates are a performance.
You’re expected to sound confident.
You’re rewarded for saying what people want to hear.
You’re punished later when reality refuses to cooperate.
In the Estimation OS, estimates are signals inside the system.
A good signal has three pieces:
1. A range, not a single date
   - “6–8 weeks” is a range.
   - “June 10th” is a single point that pretends uncertainty doesn’t exist.
2. A confidence level
   - “Roughly 80% confidence.”
   - Or even just: low / medium / high.
3. A link to your history
   - “This feels similar to the billing refactor we did in Q1, which took 7 weeks.”
Most teams skip that last part and estimate from memory and mood.
You can do better with something very simple: track how your system behaves.
- Throughput – how many meaningful items you finish per week.
- Cycle time – how long, on average, an item takes from “in progress” to “done”.
You don’t need a data warehouse to start.
A simple weekly export or lightweight dashboard is enough.
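If it helps, the starting point can be as small as the sketch below: it assumes nothing more than a list of finished items with start and done dates, pulled from whatever tracker you use. The item names and dates are made up for illustration.

```python
# Minimal sketch: derive throughput and cycle time from a plain list of
# finished items (e.g. a weekly export from your tracker). Field names
# and dates here are invented for illustration.
from datetime import date

finished_items = [
    {"title": "Shorter signup form", "started": date(2024, 5, 6), "done": date(2024, 5, 10)},
    {"title": "New success screen",  "started": date(2024, 5, 8), "done": date(2024, 5, 15)},
    {"title": "Error copy rewrite",  "started": date(2024, 5, 13), "done": date(2024, 5, 16)},
]

# Cycle time: average calendar days from "in progress" to "done".
cycle_times = [(item["done"] - item["started"]).days for item in finished_items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: finished items per week over the observed window.
window_days = (max(i["done"] for i in finished_items)
               - min(i["started"] for i in finished_items)).days
throughput_per_week = len(finished_items) / (window_days / 7)

print(f"Average cycle time: {avg_cycle_time:.1f} days")
print(f"Throughput: {throughput_per_week:.1f} items/week")
```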
Now, when someone asks, “When will this be done?”, your answer sounds more like:
“For similar work, we usually complete 3–5 items per week. This initiative has around 10 items. So we’re looking at roughly 2–4 weeks, depending on interruptions and surprises.”
Still uncertain, but honest. And that honesty is the foundation of real trust.
Data doesn’t make you a prophet.
It makes your guesses less delusional.
You can’t fix everything at once
This is the part nobody likes to admit.
Most slides and roadmaps quietly assume that you can fix:
- Time
- Scope
- Quality
All at the same time.
It sounds like this:
- “We must ship all of this by June 1st.”
- “We can’t cut any features; they’re all critical.”
- “And we also can’t compromise on quality.”
On paper, it looks decisive.
In practice, it’s a slow-motion accident.
In the Estimation OS, there’s a simpler rule:
You cannot fix time, scope, and quality all at once. At least one has to be negotiable. Usually, it’s scope.
So the conversation shifts.
Instead of:
“This is the committed scope and date.”
You say:
“If June 1st is a hard constraint, here is the realistic scope we can deliver with our current capacity and quality bar. If we want more, we either push the date or knowingly accept more risk.”
This isn’t being difficult.
It’s being explicit about trade-offs.
Product and Engineering stop playing tug-of-war and start designing the puzzle together:
- Product brings business context: what matters most, what’s risky to delay, what’s actually optional.
- Engineering brings reality: what’s feasible, where the technical landmines are, what kind of quality is safe.
Estimates are the entry point to this negotiation, not the final word.
That’s what co-ownership of outcomes looks like in real life.
Feedback: the lab report for your system
A lab that never looks at its results is just playing with colorful liquids.
Your delivery system works the same way.
You can have all the right ingredients:
- Clear outcomes
- Small batches
- Honest estimates
- Negotiated constraints
But if you never look back and ask, “What actually happened?”, you will relive the same problems every quarter with different names.
The Estimation OS runs on two types of feedback.
1. Feedback from users and reality
- Feature flags and gradual rollouts, instead of all-or-nothing launches
- Product metrics: activation, retention, click-throughs, error rates
- Support tickets, user interviews, direct feedback from customers
This answers the question:
“Did what we shipped really do what we hoped?”
2. Feedback from the delivery system itself
- Retrospectives at the end of sprints or milestones
- Post-mortems when deadlines slip or incidents happen
- Regular reviews of throughput, cycle time, and estimate accuracy
This answers another question:
“How did our system behave? Where did it work, and where did it lie to us?”
The tone here is everything.
If every retro turns into “Who messed up?”, people will hide information.
If every delay becomes a personal failure, your estimates will become political survival tools.
High-trust teams treat this like a lab report:
- What did we expect?
- What actually happened?
- Which assumption was wrong?
- What can we change in how we slice, plan, or estimate so this becomes less likely next time?
Then they pick one concrete improvement and apply it in the next cycle.
Repeat this for a year and you don’t just have “better estimates”.
You have a smarter system.
This is what the Estimation OS really is:
- A clear destination that doesn’t change every week
- A path made of small, testable bets
- Estimates used as honest signals, not propaganda
- Trade-offs made visible instead of hidden
- Feedback loops that make your system a little wiser each month
In the next part, we’ll put this side by side with the traditional “construction site” mindset.
When you see them contrasted, it becomes much easier to spot where your current process is quietly working against you—and where to start making the shift.
3. Traditional vs Modern Principles Across the Lifecycle
Before comparing practices, we need to talk about something deeper: principles.
Tools, ceremonies, and frameworks are the visible layer.
Principles are the invisible layer that decides how people actually behave when things get hard.
You can “do Scrum”, “do Kanban”, “do SAFe” on paper.
But day to day, what really drives decisions is:
- What you believe about control
- What you believe about people
- What you believe about uncertainty and change
If your underlying principle is:
“We can and must control everything upfront.”
Then it doesn’t matter which board or sprint template you use.
You will:
- Lock scope early
- Defend dates at all costs
- Treat change as a problem
- Turn estimates into promises
If your principle is:
“We cannot remove uncertainty, but we can build a system that learns fast.”
Then you will naturally:
- Plan in shorter loops
- Ship in small pieces
- Use estimates as signals, not contracts
- Treat change as information
The principles are the operating system.
The practices are just apps running on top.
When results are bad—chronic delays, burnout, poor quality—it’s rarely because you “picked the wrong framework”.
It’s because the principles beneath the framework are misaligned with reality.
So instead of a “Scrum vs Kanban” debate, let’s compare the traditional principles most of us grew up with, and the modern principles used by high-performing product teams.
To make this concrete, I’ll walk through key moments in the lifecycle of work and show the contrast.
Planning: Blueprint vs Evolving Map
Traditional principle:
“Good planning means deciding everything upfront and sticking to it.”
In this world:
- Planning is long, heavy, and front-loaded.
- Big Gantt charts, detailed specifications, frozen scope.
- A sense that “if we think hard enough now, we won’t be surprised later.”
What happens?
- You spend weeks or months building a beautiful blueprint.
- Then reality arrives and doesn’t care.
- Every change feels like an attack on the plan.
Modern principle:
“Good planning means deciding just enough now, and updating the plan as we learn.”
Here:
- The vision is stable (“We want to hit this outcome”).
- The plan is an evolving map for the next quarter or current cycle.
- You commit to direction, not to every step.
You still plan. But you treat planning as a continuous process, not a one-time event.
The result is subtle but powerful:
People stop being loyal to the original slide deck and start being loyal to reality.
Delivery: Big Bang vs Continuous Flow
Traditional principle:
“It’s safer to bundle everything and release it once, when it’s ‘done’.”
This leads to:
- Few, large releases.
- Long “integration phases”.
- Users seeing the product only near the end.
It feels safe because you “control” what goes out.
In practice, it’s the opposite:
- You discover integration issues late.
- You discover misalignment with users late.
- If you’re wrong, you’re wrong big.
Modern principle:
“It’s safer to deliver in small, reversible steps and learn along the way.”
So teams:
- Release continuously or very frequently.
- Use feature flags to decouple deploy from launch.
- Integrate early and often.
The question changes from:
“When is the big launch?”
To:
“What is the next safe, meaningful increment we can ship?”
The risk doesn’t disappear.
It’s just sliced into pieces small enough that you can detect and correct issues before they explode.
Estimates: Promise vs Hypothesis
Traditional principle:
“An estimate is a commitment we must hit.”
So:
- Estimates are requested early, when uncertainty is highest.
- Often given by people who won’t actually do the work.
- Once the number is public, it becomes a promise.
If the team surfaces new information later, they’re seen as:
- “Not reliable”
- “Not committed”
- “Bad at estimating”
The predictable outcome:
People start padding, hiding risk, and saying what others want to hear.
Modern principle:
“An estimate is a hypothesis that should be refined as we learn.”
So estimates:
- Are given by the people who will execute.
- Come with ranges and confidence levels.
- Are connected to historical data: throughput, cycle time, similar past work.
A sentence like:
“We’re looking at 6–8 weeks, medium confidence, similar to the billing refactor from Q1”
Is not a weakness. It’s a truthful signal.
The point is not to sound precise.
The point is to give the best possible picture today, and then update it without shame when new information appears.
Deadlines: Sacred Date vs Useful Constraint
Traditional principle:
“Dates are sacred. Hitting the date is success.”
It doesn’t matter if the date is:
- Arbitrary
- Politically motivated
- Based on wishful thinking
Once it’s on a slide, it becomes holy.
When reality pushes back, you have only three levers:
- Crunch (more hours)
- Cut corners (less quality)
- Quietly change scope without saying it
The system rewards people who “make the date” even if the product is weak and the team is burnt out.
Modern principle:
“Dates are constraints we design around, not commandments we pretend to obey.”
Here, not all dates are equal:
- Some are hard (events, regulatory requirements, contractual obligations).
- Others are soft (internal goals, planning anchors).
The conversation sounds more like:
“If this date is truly fixed, here is the realistic scope we can deliver within our quality bar. If we want more scope, we either push the date or increase risk knowingly.”
Success is not “we shipped on the day we wrote six months ago”.
Success is “we used the date to make good decisions about what to ship, at what quality, with which trade-offs.”
The date becomes a decision point, not a trap.
Scope: Frozen Contract vs Primary Lever
Traditional principle:
“Scope is fixed. If we change scope, we lose control.”
That’s why you see:
- Heavy change-request processes
- Long lists of “must-haves” where everything is priority 1
- Anger when removing features, even unused ones
Time and budget are treated as variables, but scope is sacred.
So when things get tight, the main moves are:
- Extend the timeline
- Blow up the budget
- Or just push people harder
Modern principle:
“Scope is the main lever to balance time, quality, and impact.”
Scope is expected to move.
The key questions become:
- “What is the minimum slice that still solves a real problem?”
- “What can safely be moved to a second phase without lying to ourselves?”
- “Which parts are experiments, and which parts are non-negotiable?”
This mindset is what allows teams to say:
- “We won’t have the full vision by the conference date, but we’ll have a solid, coherent slice that users can use and we’re not ashamed of.”
Scope stops being a contract and becomes a design tool.
Responsibility: Blame Game vs Co-Ownership
Traditional principle:
“Product decides what and when; Engineering must figure out how.”
On paper, it sounds like clear ownership.
In reality, it creates a quiet war.
Patterns you may recognize:
- Product commits dates to stakeholders before alignment with Engineering.
- Engineering feels like an order-taker and resists in subtle ways.
- When things slip, each side has a story about why it’s the other’s fault.
Even when people are kind, the structure itself breeds conflict.
Modern principle:
“Product and Engineering are co-owners of the outcome.”
The split looks more like this:
- Product owns: problem, outcome, narrative, high-level scope and priorities.
- Engineering owns: technical approach, effort, risk, quality, operability.
The timeline emerges from a conversation, not a decree.
Product says:
“Here’s the outcome we need and why this quarter matters.”
Engineering says:
“Here’s what’s realistic given our stack, debt, and capacity. Here are three options with different scope/risk trade-offs.”
Both sides are jointly responsible for what actually ships and what it does in the real world.
You don’t get Product “winning” while Engineering “loses”, or vice versa.
You either win together or lose together.
Culture and Success: On Time vs Real Outcomes
At the end of the day, nothing reveals your principles more clearly than what you celebrate.
Traditional principle:
“Success = on time, on budget, on scope.”
So you celebrate:
- Hitting the deadline, even if usage is low
- Finishing the roadmap, even if the product didn’t move key metrics
- People who “save the day”, even if they’re cleaning up after a bad plan
Over time, teams learn the real rule:
“It’s better to look good on the spreadsheet than to tell the truth early.”
Modern principle:
“Success = real outcomes with sustainable execution.”
So you measure and reward things like:
- Activation, retention, revenue, customer satisfaction
- Reduced incidents, more stability, better developer experience
- Teams that surface risks early and propose realistic options
Shipping late but with a strong, validated outcome is seen as better than shipping on time with something nobody uses.
This doesn’t mean dates don’t matter.
It means they are placed in the right context: useful, but not ultimate.
When you look across the lifecycle, you start to see a pattern:
Traditional principles try to control uncertainty with rigidity.
Modern principles accept uncertainty and build systems to work with it.
If your company keeps experiencing the same problems—chronic delays, low trust, weak products—it’s worth asking:
- Which of these principles are really running our OS today?
- Where do we say “modern” things but behave in traditional ways?
- Where are we still secretly treating software like concrete instead of experiments?
In the next section, we’ll move from comparison to implementation.
We’ll talk about how to install this modern set of principles in a real organization—without needing a full revolution on day one.
4. Implementation Guide: How to Install the Estimation OS
At this point, it’s easy to nod along and think:
“Yes, this makes sense in theory. But my company is a mess. How do I actually do this?”
Good question.
You don’t install a new operating system for your team in one weekend.
You don’t need a reorg. You don’t need a new framework.
You need a series of small, deliberate moves that change how people think and behave around estimates, scope, and deadlines.
Think of it like refactoring a legacy codebase:
- You don’t rewrite everything from scratch.
- You choose a small area.
- You make it cleaner, safer, more testable.
- Then you expand from there.
We’ll do the same with your Estimation OS.
Below is a practical path you can follow. Adjust names and tools to your reality, but keep the principles.
Step 1 – Map Your Current Reality (Without Blame)
Before changing anything, you need to see clearly how things work today.
This sounds obvious, but many leaders skip this and jump straight into “solutions”.
Take one or two recent initiatives. Not the disaster, not the miracle. Just something normal.
For each one, ask:
- How was the scope defined?
- Who decided the deadline, and how?
- How were estimates created? By whom? With which context?
- How did we track progress?
- When things changed, how did we respond?
- What happened to quality?
You can do this as a short workshop with Product and Engineering leads.
The goal is not to blame anyone. The goal is to surface patterns like:
- Dates being promised before Engineering is involved
- Scope being treated as fixed even when new info appears
- Estimates never being revisited after the first slide
- Quality silently sacrificed at the end
Write it down. Treat it like a system diagram:
“This is how work really flows here today.”
Once you see it, you can decide where to intervene first.
Step 2 – Create a Simple Product–Engineering Working Agreement
The Estimation OS lives or dies in the relationship between Product and Engineering.
If one side is promising and the other side is cleaning up, nothing we’ve discussed will stick.
You don’t need a 20-page RACI. You need a clear, shared agreement like:
- Product owns:
  - Problem and outcome
  - Business context and priorities
  - High-level scope and why it matters now
- Engineering owns:
  - Technical approach and architecture
  - Effort and risk
  - Quality, stability, and operability
And together, they own:
- Timeline
- Trade-offs between scope, time, and quality
- The final result in the hands of users
Make this explicit.
For example, you can literally write:
“We will never treat date, scope, and quality as all fixed.
If estimates and business constraints collide, we will adjust scope or date, not pretend the numbers fit.”
This is not a legal contract.
It’s a social contract.
Share it with leadership. Talk about it in planning meetings. Refer back to it when the old reflexes come back:
- “Remember our agreement: this is a scope vs date trade-off conversation, not a ‘work harder’ conversation.”
Step 3 – Start Slicing Work Smaller
If you only change one thing, change this.
Big, vague projects make honest estimation almost impossible.
They hide risk. They invite fantasy.
Your goal is to make small, coherent slices of work that:
- Can be delivered in 1–3 weeks
- Have clear “done” criteria
- Deliver value or learning on their own
Take a big initiative on your roadmap and ask:
- “What is the smallest version of this that would still be useful?”
- “What could we release first that would already help someone or validate a key assumption?”
- “What can safely be moved to a second or third phase?”
You will feel resistance at first.
Stakeholders will say: “But we need everything.”
Teams will say: “It’s hard to slice.”
Leaders will say: “This looks less impressive.”
That’s normal. You’re rewiring how people think about progress.
Use concrete examples:
- Instead of “New onboarding experience”, slice into:
  - Fewer steps in the form
  - New success screen
  - Clearer error handling
  - Contextual help on the most confusing step
- Instead of “Refactor billing system”, slice into:
  - Migrate the read model first
  - Add the new billing engine in parallel, behind a flag
  - Migrate 5% of users
  - Then 100%
The smaller the slices, the more honest your estimates become, because each slice is understandable.
Step 4 – Turn Estimates into Ranges Grounded in History
Once you have smaller slices, you can change how you estimate.
Stop giving single-date promises for fuzzy work.
Start giving ranges + confidence.
For each slice, encourage the team to answer:
- “Roughly how long do we think this will take?”
- “What is the low-end and high-end range?”
- “How confident do we feel? Why?”
A simple format works:
“We estimate 1–2 weeks, medium confidence.
Similar in complexity to [previous feature] which took about 8 days of focused work.”
Now add a minimal layer of historical data.
You don’t need perfect tracking. Start with:
- How many meaningful items do we finish per week (on average)?
- What’s our average cycle time for these kinds of tasks?
Even if the numbers are rough, they do something subtle:
They move the conversation from fantasy to reality.
Instead of:
“It feels like we can do 10 big things this quarter.”
You get:
“We usually finish about 5–7 items per week.
This plan assumes we’ll suddenly double that. Does that seem realistic?”
Now the gap becomes visible.
You’re not just arguing opinions.
You’re confronting the system with its own behavior.
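As a rough illustration of this step, here is a hypothetical sketch of an estimate expressed as a range with confidence and a reference point, plus the capacity sanity check described above. Every name and number in it is a placeholder for your own data.

```python
# Hypothetical sketch: estimates as ranges with confidence and a reference
# point, plus a sanity check of a plan against real throughput.
# All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class SliceEstimate:
    name: str
    low_weeks: float
    high_weeks: float
    confidence: str          # "low" | "medium" | "high"
    similar_to: str          # past work used as a reference

estimate = SliceEstimate(
    name="Fewer steps in signup form",
    low_weeks=1, high_weeks=2,
    confidence="medium",
    similar_to="Q1 checkout form rework (~8 focused days)",
)
print(f"{estimate.name}: {estimate.low_weeks}-{estimate.high_weeks} weeks, "
      f"{estimate.confidence} confidence (similar to {estimate.similar_to})")

# Sanity check: does the quarter plan assume a pace we have never shown?
planned_items_this_quarter = 120
historical_items_per_week = 6        # from your own tracking, not from hope
weeks_in_quarter = 12

needed_per_week = planned_items_this_quarter / weeks_in_quarter
if needed_per_week > historical_items_per_week:
    print(f"Plan needs {needed_per_week:.0f} items/week; history says "
          f"~{historical_items_per_week}. Cut scope or adjust expectations now, "
          "not in the last week.")
```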
Step 5 – Fix the Delivery Pipes Just Enough
You cannot talk about predictable estimates if your delivery pipeline is chaos.
If every deploy is painful, if tests are flaky, if merging is dangerous, you will always be surprised.
You don’t need a perfect DevOps setup on day one.
But you do need minimum stability:
- A CI pipeline that runs tests on every change
- Automatic deploys to at least a staging environment
- A simple way to put features behind flags (a minimal sketch follows this list)
- Basic monitoring and alerting in production
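Feature flags in particular do not need heavy tooling on day one. Below is a minimal, hypothetical sketch of a percentage rollout; the flag names, percentages, and hashing choice are assumptions, and a hosted flag service does the same job with more guardrails.

```python
# Minimal feature-flag sketch: deploy the code dark, then expose it to a
# growing percentage of users. Flag names and percentages are illustrative.
import hashlib

ROLLOUT_PERCENT = {
    "new_billing_engine": 5,    # start with 5% of users, then 25, then 100
    "new_onboarding_copy": 100,
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into 0-99 and compare to the rollout %."""
    percent = ROLLOUT_PERCENT.get(flag, 0)
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Usage: the deploy already contains both paths; the flag decides who sees what.
if is_enabled("new_billing_engine", user_id="user-42"):
    print("charge via the new billing engine")
else:
    print("charge via the legacy path")
```

The design point is that deploy and launch become two separate decisions: shipping the code and exposing it to users no longer have to happen at the same moment.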
Think of it like this:
- Estimates answer “How long will this take if the system behaves normally?”
- A broken pipeline means “the system never behaves normally.”
So part of installing the Estimation OS is admitting:
“We can’t be honest about timelines while our basic engineering hygiene is this fragile.”
You’re allowed to make infra work part of the scope.
You can say:
“If you want this feature on this timeline, we need 2 weeks first to fix our deployment pipeline, or every estimate will continue to be a lie.”
It feels uncomfortable in the short term.
In the long term, it’s the only way to get out of the constant-surprise loop.
Step 6 – Install Lightweight Feedback Loops
Finally, you need loops that make the system smarter over time.
Think of them as your lab reports.
Two simple rituals can change a lot:
1. Regular check-ins during execution
Once a week (or sprint), answer as a team:
- “What did we plan vs what happened?”
- “Did new information appear?”
- “Do we need to update estimates or scope?”
Make it normal to say:
“We were wrong. Here’s what we misjudged. Here’s our new range.”
No drama. Just reality.
2. Short retros after meaningful work
When a project ends or a big milestone completes, ask:
- “Where were our estimates far off?”
- “Was the problem slicing, unknown dependencies, unclear scope, technical debt…?”
- “What is one concrete change we’ll make in how we plan or slice next time?”
Keep it small. One improvement per cycle is enough.
You’re not trying to predict the future perfectly.
You’re trying to learn faster than your environment punishes you.
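If you want the question “where were our estimates far off?” to be grounded in data rather than memory, even a tiny comparison like the hypothetical sketch below helps. The slices and numbers are invented.

```python
# Hypothetical sketch for the retro: compare estimated ranges with what
# actually happened and flag the slices worth discussing. Data is invented.
history = [
    # (slice, est_low_weeks, est_high_weeks, actual_weeks)
    ("Fewer signup steps", 1, 2, 1.5),
    ("External tax API",   2, 3, 5.0),
    ("New success screen", 1, 2, 2.0),
]

for name, low, high, actual in history:
    if actual > high:
        print(f"{name}: took {actual}w vs {low}-{high}w estimated -> discuss in retro "
              "(slicing? hidden dependency? unclear scope?)")
    elif actual < low:
        print(f"{name}: finished early ({actual}w) -> maybe we over-padded.")
    else:
        print(f"{name}: within range.")
```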
Over a few months, these loops will:
- Expose chronic bottlenecks
- Reveal the real unit of work your team can handle
- Make your estimates less “performative” and more useful
And the tone matters again.
If these meetings turn into blame sessions, the OS dies.
If they stay focused on systems and patterns, the OS grows.
Where to Start (If Everything Feels Broken)
If your instinct right now is: “We’re too far from this,” here’s a simple starting path:
1. Pick one team and one upcoming initiative. Treat it as your “lab inside the lab”.
2. Create a mini working agreement just for that team and its Product counterpart. Clarify who owns what and how scope/date/quality will be negotiated.
3. Slice the initiative into 1–3 week chunks. Be aggressive about simplifying and deferring.
4. Estimate each slice with ranges and confidence. Track how long they actually take.
5. Run weekly check-ins and a retro at the end. Document what you learned about your system.
6. Share the story. Not as a “look how amazing we are” tale, but as: “Here’s what happened when we treated estimates as part of a system, not as promises.”
Leaders respond to stories grounded in data and outcomes.
If you can show:
- Less chaos
- Fewer surprises
- More honest conversations
- And a result that actually moved a real business metric
You will have open doors to expand this way of working.
Installing the Estimation OS is not about perfection.
It’s about telling the truth earlier, designing smaller bets, and letting your system learn.
In the next section, we’ll talk about how to live with real deadlines, demanding stakeholders, and non-negotiable constraints, without throwing all of this out the window every time someone sends an urgent email.
5. Working with Deadlines, Stakeholders, and Risk
This is where the whole conversation becomes real.
It’s one thing to talk about labs, small batches, and ranges inside the engineering bubble.
It’s another thing to stand in front of a CEO, a sales leader, or a big customer and say:
“We don’t know yet. Here are the ranges. Here are the trade-offs.”
Deadlines, stakeholders, and risk are where your principles collide with:
- Money
- Reputation
- Contracts
- Ego
If you don’t have a clear way to deal with this, your Estimation OS will die the first time someone important sends an urgent email.
Let’s go deep into how to live in reality without returning to the old game of pretending.
First: Not All Deadlines Are the Same
We throw the word “deadline” around as if everything had the same weight.
It doesn’t.
There are at least three different species:
1. Hard deadlines
- Regulatory changes
- Contractual obligations with penalties
- Public events (conference keynote, launch with partners)
If you miss these, there is a visible cost.
2. Business deadlines
- Quarter boundaries
- OKR cycles
- Board meetings
- Marketing campaigns that could move
These are important, but it’s the business choosing a moment, not the universe enforcing it.
3. Emotional deadlines
- “It would be nice if we had this by X.”
- “We told the team we’d do it this year.”
- “We hinted this date to a customer in a Zoom call.”
These feel real because someone said them out loud. But they are often made without full information.
If you treat all three as equally sacred, your system will always be in panic mode.
Part of your job as a technical leader is to name the type of deadline in each conversation.
Literally:
“This is a real hard deadline (regulation).
This one is a business decision (Q3 OKR).
This other date is more of a preference, not a commitment.”
When you name them, you unlock different strategies for each.
Designing Around Real Deadlines (Instead of Lying to Them)
Let’s start with the scariest ones: hard deadlines.
A regulation changes on January 1st.
A public announcement is booked.
A partner integration must be ready for their event.
You don’t get to move those dates.
The traditional instinct is:
“Okay, then we must do everything by that date.”
The modern instinct is:
“Okay, then we must decide what is truly essential by that date.”
You keep three truths in view:
- The date is fixed.
- The minimum quality bar is fixed (no broken core flows in production).
- Therefore, scope must be the main lever.
So you sit with Product and stakeholders and draw a very simple picture:
- Must-have by the date
- Nice-to-have if time allows
- Not for this date
And you enforce one rule:
“If we discover we’re slipping, we cut from the bottom of this list, not from quality.”
You can even pre-define the “sacrifice list”:
- Features that are useful but not critical
- Nice UI details that can come in v2
- Internal tools that are helpful but not mandatory for day one
Your estimates are now used to:
- Decide what fits into the must-have bucket
- Reveal when the must-have bucket is too big
- Trigger scope cuts early, not in the last week at 2am
You are not negotiating with the date anymore.
You are negotiating what shows up on that date.
That is a very different game.
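One way to keep that negotiation honest is a small capacity check like the hypothetical sketch below. The buckets, item counts, and throughput number are placeholders; the point is simply to see early whether the must-have list actually fits before the date.

```python
# Hypothetical sketch: does the must-have bucket fit before a hard date,
# given our real pace? Item names and numbers are placeholders.
must_have = ["KYC flow update", "New tax fields", "Audit log export"]   # cut from here last
nice_to_have = ["Redesigned invoice PDF", "Self-serve plan switch"]     # cut from here first

weeks_until_deadline = 4
throughput_items_per_week = 1.0     # from history, for slices of this size

capacity = weeks_until_deadline * throughput_items_per_week
if len(must_have) > capacity:
    print("Even the must-have bucket does not fit: cut scope or raise the risk flag now.")
elif len(must_have) + len(nice_to_have) > capacity:
    fits = int(capacity) - len(must_have)
    print(f"Must-haves fit; roughly {fits} nice-to-have item(s) fit; the rest wait for v2.")
else:
    print("Everything fits at the current pace; keep checking weekly.")
```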
Soft Deadlines: Decision Points, Not Traps
Now let’s talk about business deadlines: end of quarter, end of year, planning cycles.
These dates matter for coordination.
But they are often used as if they were laws of physics.
The Estimation OS treats them as decision points.
Instead of:
“We must ship this exact thing by the end of Q3 or we failed.”
You use language like:
“By the end of Q3, we want to reach this outcome or at least know whether this bet is working. Here is what we think we can ship by then. We’ll adjust as we learn.”
Then, as you approach the date, you ask:
- “Given what we know now, does it make sense to ship what we have?”
- “Is it better to delay 2–4 weeks to hit the outcome with quality?”
- “Or do we ship a smaller cut now and continue evolving next quarter?”
The date stops being a cliff.
It becomes a checkpoint where you consciously choose the next move.
This is where honesty pays off.
If you’ve been updating estimates and surfacing risk early, these conversations are calm and grounded.
If you waited until the last week to say, “By the way, we’re not going to make it,” of course people will panic.
Deadlines are not the enemy.
The enemy is silence until it’s too late.
Talking to Stakeholders in Options, Not in Yes/No
Most leaders don’t actually need “certainty”.
They need predictability, options, and time to react.
What scares them is not “we don’t know yet”.
What scares them is “we don’t know yet, and we only admit that at the last moment.”
One of the most powerful communication habits you can develop is to talk in options.
Instead of answering:
“Can we deliver all of this by June 1st?”
With:
“Yes.”
or
“No.”
You answer like this:
“Here are three options:
- Option A: Keep June 1st, reduce scope to this smaller set. Risk: low.
- Option B: Keep full scope, move date to late June. Risk: medium.
- Option C: Try to keep both scope and date, but accept higher risk of incidents and rework.
My recommendation is A, because it protects quality and team health while still giving us a strong story for June 1st.”
Now you’ve done three things:
- You anchored the conversation on trade-offs, not miracles.
- You invited the stakeholder into the decision, instead of defending yourself.
- You showed leadership by making a recommendation.
This works with CEOs.
It works with Sales.
It works with big customers.
Because you’re respecting what they really care about:
“No surprises. No hidden bombs. Real choices.”
Aligning with Sales: Stop Selling Fiction
Sales is often perceived as the enemy of realistic estimates.
In many companies, they pre-sell dates and features to close deals, then throw the grenade over the wall.
But underneath the behavior, Sales is usually responding to incentives:
- They are paid to close deals.
- They are punished for losing them.
- They are rarely given safe, honest packages they can sell.
If you want to change this dynamic, you have to give Sales better tools than fiction.
For example:
- Define a few standard offering bundles with known implementation profiles:
  - “Basic integration: 4–6 weeks, these features.”
  - “Advanced integration: 8–12 weeks, these extra features.”
- Link those bundles to real historical data and guardrails:
  - “We do not promise custom features with fixed dates unless they fit into these boxes.”
- Involve a technical person in late-stage deal conversations where scope and dates are being discussed.
The message becomes:
“We will help you win deals, but we will not sell lies. Here are the configurations we can stand behind, and here is the risk level for each.”
At first, Sales might resist.
Over time, they will appreciate not having to constantly apologize to clients for broken promises.
Predictability is also a sales asset.
Making Risk Explicit (Instead of Letting It Bite You in the Dark)
Most organizations treat risk like a ghost.
Everyone feels it, few people name it, and then everyone acts shocked when it materializes.
An Estimation OS treats risk as a first-class citizen.
Risk has many faces:
- Technical unknowns
- New integrations with external systems
- Legacy areas of the codebase nobody fully understands
- Dependencies on other teams who are already overloaded
- People risk: critical knowledge in one brain, key person on vacation
If you ignore these, your estimates will always look better than reality.
So you build a very simple habit:
- For each slice of work, ask:
  - “What are the main risks here?”
  - “What would we need to spike or validate early?”
  - “What’s the worst place this could fail, and how painful would that be?”
You don’t need a risk management bureaucracy.
You need visibility.
You can even use a simple RAG pattern:
- Green: Known, routine work.
- Amber: Some unknowns, moderate complexity.
- Red: High uncertainty, new tech, gnarly legacy, heavy coupling.
Then your estimate is not just “2–3 weeks”.
It becomes:
“2–3 weeks, amber risk. We’re confident on the core work, but there’s a 20% chance of delays due to this external dependency.”
With red items, you plan spikes:
- 1–3 days to prototype, test assumptions, or explore a risky area of the code.
- After the spike, you re-estimate.
You are not removing risk.
You are pulling it forward, where you can see it and make decisions early.
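As a sketch of how lightweight this can be, the snippet below attaches a risk colour to each slice and lets red items trigger a spike before anyone commits to a range. The labels, the ranges, and the “red means spike first” rule are conventions you would adapt, not a standard.

```python
# Hypothetical sketch: attach a risk colour to each slice and let red items
# trigger a short spike before anyone commits to a range. Labels and the
# spike length are illustrative conventions.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    low_weeks: float
    high_weeks: float
    risk: str  # "green" | "amber" | "red"

def next_step(s: Slice) -> str:
    if s.risk == "red":
        return f"{s.name}: run a 1-3 day spike first, then re-estimate."
    if s.risk == "amber":
        return (f"{s.name}: {s.low_weeks}-{s.high_weeks} weeks, amber risk; "
                "name the main unknown and watch it explicitly.")
    return f"{s.name}: {s.low_weeks}-{s.high_weeks} weeks, routine work."

slices = [
    Slice("New success screen", 1, 2, "green"),
    Slice("External tax API integration", 2, 3, "amber"),
    Slice("Replace legacy billing cron", 3, 6, "red"),
]
for s in slices:
    print(next_step(s))
```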
Over time, stakeholders learn to trust this language:
- Green means “this is boring but safe”.
- Amber means “possible surprises, but we’ve thought about them.”
- Red means “we’re doing something new and we need space to discover.”
This is another way of saying:
“We refuse to pretend all work is equal.”
The Inner Game: Courage, Identity, and Saying the Hard Thing Early
There’s a deeper layer here that we don’t talk about enough.
Telling the truth early has a cost.
Especially if you are the CTO, Head of Engineering, or tech lead.
You will have moments where:
- You are the only person in the room saying, “These numbers don’t add up.”
- Everyone wants the comforting lie.
- You know that pushing back will make you look “negative” or “not a team player.”
This is a character decision as much as a technical one.
You have to decide what game you’re playing:
- The game of looking good in the next meeting
- Or the game of building a company that can be trusted long term
In practice, that means:
- Saying “I don’t know yet” when you truly don’t.
- Refusing to commit to impossible plans, even if others are nodding.
- Bringing options instead of just criticism.
- Being willing to own your part when your system underestimates or misjudges risk.
You’re not just protecting your team.
You’re protecting the company from self-deception.
Stakeholders may resist at first.
But over time, people learn:
“When this person says something will happen, it’s grounded in reality.
If they say it’s risky, we listen.”
That is how reputation is built.
Not by always delivering the impossible, but by being the person who refuses to build castles on sand.
Deadlines, stakeholders, and risk don’t go away in a mature Estimation OS.
They are still there.
They still create pressure.
They still matter.
The difference is:
- You name the type of deadline you’re facing.
- You design scope and strategy around real constraints.
- You talk to stakeholders in options and trade-offs, not fantasies.
- You treat risk like an object in the room, not a ghost in the dark.
- You choose to tell the truth earlier, even when it’s uncomfortable.
In the next section, we’ll face the most common objections head-on.
Because even if all this makes sense to you, there are always three resistance layers:
- “This system won’t work here.”
- “My team can’t do this.”
- “My environment will never allow it.”
We’ll go through each one and show how to navigate them without giving up the core principles.
6. The Resistance You’ll Meet (And How to Handle It)
Everything we’ve talked about so far makes sense on paper.
But the moment you try to change how your company treats estimates, you will hit resistance from three places:
- The system itself (“This way of working won’t work here.”)
- Your own team (“We’re not ready for this.”)
- The environment around you (“Our leadership / clients will never accept this.”)
On top of that, there are a few classic ways even good intentions quietly fall apart.
Let’s walk through these forces one by one, in a more human way.
“This is nice theory, but our reality is different”
This is usually the first reaction.
You present a more honest way to plan and deliver, and someone says:
“Look, this is great in big tech, but we are a [insert special snowflake here]. We need hard dates. We need commitments.”
Underneath this sentence, there’s a fear:
- Fear of chaos
- Fear of losing control
- Fear that “flexible” means “anything goes”
A useful way to respond is to anchor in their reality, not in theory.
Take a recent project everyone remembers. Ask:
- “Did we hit the original date?”
- “If we did, what did it cost in terms of weekends, shortcuts, or quality?”
- “If we didn’t, when did we realize we were off?”
- “What information could have helped us decide sooner?”
You’re not arguing agile vs waterfall.
You’re holding a mirror to a concrete story they lived.
Most of the time, people see:
- The dates were not real.
- The plan didn’t survive first contact.
- Nobody felt safe to say “this is not going to fit” early.
From there, you can propose something modest:
“I’m not asking to change everything. I’m asking that for the next important initiative, we try smaller slices, real ranges, and earlier conversations about trade-offs. If it doesn’t help, we can throw it away. But I don’t think the current way is actually working.”
You’re not selling a philosophy.
You’re offering an experiment.
“My team is not senior enough for this”
Sometimes the pushback comes from the inside.
Leads tell you:
- “We can’t estimate.”
- “We always get it wrong.”
- “People are too junior to think in ranges and risk.”
It’s true that some teams are early in their journey.
But often, the problem is not skill. It’s the environment around them.
If every estimate is treated as a promise, people will either:
- Pad everything
- Or say what others want to hear and hope for a miracle
That behavior would show up even with very senior people.
So you start by changing the safety around the numbers:
- Praise when someone updates a previous estimate because they learned something new.
- Normalize sentences like:
- “We misjudged this. Here’s what we learned.”
- “Our confidence was low and it should have been a signal to spike earlier.”
Then you teach some very basic skills:
- How to slice a big thing into 1–3 week chunks
- How to use similar past work as a reference
- How to tag items as low / medium / high risk
- How to say “I don’t know yet, here’s what we need to learn before we commit”
Treat this like training, not like a test they’re failing.
You wouldn’t give a junior engineer a critical refactor with zero guidance and then call them “bad” if they struggle.
Estimation and planning are skills like any other.
They grow when there’s room to practice and feedback that doesn’t punish honesty.
“Our leadership / clients will never accept this”
This is the most understandable fear.
You imagine yourself saying to a CEO or a big client:
“We don’t have a fixed date yet. Here are ranges and trade-offs.”
And your brain immediately plays a movie where you lose your job.
The key here is how you bring the conversation.
If all you say is “we can’t commit”, you’re just bringing a problem.
If you bring options, you’re bringing leadership.
For example, instead of:
“We can’t deliver all this by June 1st.”
You say:
“We see three paths:
- A: Keep June 1st, ship this smaller scope with high quality.
- B: Move to late June to keep more scope, but accept the delay.
- C: Try to force full scope for June 1st, with higher risk of bugs and instability.
Based on what we know today, I recommend A. It protects quality and gives us something solid to talk about on June 1st.”
You’re doing three things at once:
- Respecting the business need
- Refusing to sell a fantasy
- Offering a way forward
Over time, people start to distinguish between:
- The person who always says “no” and blocks everything
- The person who shows up with trade-offs, data, and clear recommendations
You want to be the second.
You may still hear “just make it happen” sometimes.
But you will have planted a seed: the idea that reality is not optional.
Classic ways this breaks (and what to watch for)
Even with good intentions, there are a few failure modes that quietly kill a healthier way of working.
Here are some that show up often.
1. Asking for estimates without changing any decision
This is when leadership says:
“We want estimates. We want ranges. We want transparency.”
The team does the work. They slice better. They bring numbers and scenarios.
And then, when the numbers don’t fit the already-dreamed date, nothing changes.
No scope cut.
No date movement.
No renegotiation.
People just hear:
“Okay, thanks, but we’re still presenting the original plan.”
The lesson they learn is simple:
“This is theater. They don’t want truth, they want permission.”
If this is happening, call it gently but clearly:
“We’re doing the work to bring a realistic view. If we never use it to adjust scope or expectations, the team will stop taking this seriously. How can we make sure estimates actually influence decisions?”
You’re not attacking anyone.
You’re protecting the integrity of the process.
2. Keeping slices big and vague
Another common pattern:
- Everyone nods when you talk about small batches.
- Then the roadmap still has items like “New onboarding” or “Rebuild billing” as single units.
If the unit of work is huge and fuzzy, no amount of estimating tricks will save you.
This is the equivalent of saying “Let’s eat healthier” with no concrete plan.
Push for a simple rule of thumb:
- If something can’t be described as a 1–3 week, coherent slice, it’s too big.
- If you can’t write a clear “done” for it, it’s too vague.
Treat slicing as a first-class skill, not as an afterthought.
Review slices together. Turn it into a craft, not a chore.
3. Turning every retro into a trial
Retros and post-mortems are where the Estimation OS either grows or dies.
If every time a deadline slips, the meeting becomes:
- “Who underestimated?”
- “Why didn’t you think of this?”
- “You need to be more committed.”
People will learn to:
- Hide problems
- Pad numbers
- Avoid taking on anything risky
You can feel this in the room: people stop being curious and start being defensive.
Redirect those conversations from “Who” to “What in the system?”
- “What did we not see early enough?”
- “Was this a slicing problem? A dependency problem? A technical debt problem?”
- “What one change to our process would make this less likely?”
Hold individuals accountable for behavior (lack of communication, obvious negligence), but keep the focus on system behavior, not personal shame.
4. Selling the change as an ideology
Sometimes the resistance appears because the way the change is communicated feels ideological.
If you show up with:
- “We are going to be like Spotify.”
- “We are implementing #NoEstimates because X said so.”
- “Big tech does it this way, we should too.”
You’ll trigger a natural defense:
“We are not Spotify. We are not Google. Our world is different.”
A better frame is:
- “Here is where our current way of working is hurting us.”
- “Here is a small experiment we want to run on one team.”
- “Here is how we’ll know if this experiment is helping.”
You’re not asking people to convert to a religion.
You’re asking them to try a practical change and judge it by its results.
Choosing the kind of leader you want to be
Under all the techniques, there is a simple decision:
Do you want to be the person who polishes the story,
or the person who tells the truth early?
Working with estimates in a healthy way is not just about charts and ranges.
It’s about identity and courage.
It means:
- Saying “this doesn’t fit” when everyone wants to hear “we’ll make it work”.
- Admitting your own system underestimated, and committing to learn.
- Protecting your team and your company from delusion, even when that’s unpopular.
Over time, that posture builds a very specific kind of reputation:
- Stakeholders know that when you say “yes”, it means something.
- Your team knows you won’t sell them into impossible promises.
- You become the person associated with clarity, reality, and dependable results, not just beautiful slides.
In the final part, we’ll close the loop.
We’ll bring everything back to one core belief and one simple line you can carry with you into every roadmap meeting, every quarterly planning, and every uncomfortable conversation about “when it will be done.”
7. From Fear-Based Promises to Systemic Predictability
Let’s go back to that room.
Slides on the screen.
Leadership around the table.
Someone asks: “So, when will this be done?”
In many companies, that question doesn’t really ask for insight.
It asks for reassurance.
“Tell me something that will make me feel safe.
Tell me a date that looks good on the slide.
Tell me this won’t blow up in my face.”
So we do what humans do when we’re afraid of disappointing people:
we over-promise and hope reality will be kind.
This is how fear quietly runs the operating system:
- Fear of losing credibility
- Fear of losing a deal
- Fear of looking “negative”
- Fear of being the one who says “this doesn’t fit”
Fear-based promises feel safe in the moment.
But they plant a delayed bomb in the system.
Weeks or months later, it explodes as:
- Crunch and burnout
- Quiet resentment between Product and Engineering
- Fragile releases and band-aid fixes
- Distrust in leadership and in each other
Everyone suffers.
And yet, next quarter, we play the same game again.
Systemic predictability is the opposite of that.
It’s not “perfect accuracy”.
It’s not “no surprises ever”.
It’s a culture where:
- Truth arrives early, even when it’s uncomfortable
- Estimates are signals, not performances
- Scope, time, and quality are negotiated consciously, not in the shadows
- Deadlines are treated as real constraints, not as magic spells
- The system gets a little smarter every month
You move from:
“We promise this date and pray.”
To:
“Here is our best current view.
Here is the range.
Here is the risk.
Here are your options.
And here is what we recommend.”
It’s quieter.
Less dramatic.
More adult.
At the center of this shift is a simple belief:
Software is a laboratory, not civil construction.
Once you really accept that, a lot of things stop making sense:
- Freezing scope for a 9-month plan
- Treating every date like a law of physics
- Shaming teams for learning mid-flight
- Rewarding people for heroic saves instead of calm, boring stability
And other things suddenly become obvious:
- Work must be sliced smaller
- Feedback must be built into the process
- Product and Engineering must share ownership of reality
- Historical data must inform, not decorate
- Risk must be named, not buried
Predictability becomes a property of the system, not a talent for guessing.
If you’re a CTO, Head of Engineering, or product leader, this is the real invitation:
Stop trying to be the hero who “makes it happen” against all odds.
Become the architect of a system where truth can move freely.
That might look like:
- In the next planning cycle, refusing to fix scope, time, and quality at once—and saying that out loud.
- In the next roadmap review, insisting on ranges and confidence levels instead of single dates.
- In the next retro, steering the conversation from “Who messed up?” to “What in the system made this outcome likely?”
- In the next sales conversation, framing the answer in options and trade-offs instead of a pressured “yes”.
None of these are huge revolutions.
But each one sends the same signal:
“We are done pretending. We will negotiate with reality, not against it.”
Over time, people start to feel it:
- Teams are less afraid to surface bad news early.
- Stakeholders feel less ambushed and more included in decisions.
- Releases feel less like cliff jumps and more like controlled steps.
- Your own stress changes flavor: less panic, more stewardship.
This is not just an “engineering thing”.
It’s a character thing.
I’ll end with this:
You don’t fix late projects by yelling at estimates.
You fix late projects by changing what estimates are inside your company.
From decorative numbers → to honest signals.
From fear-based promises → to part of an operating system that actually respects how software and people behave.
Or, in one line you can carry into every “when will it be done?” meeting:
Software will never behave like concrete. The day you stop pretending it does is the day real predictability becomes possible.
Best,
Linecker Amorim