This is the second part of How I Was Able To Be Successful Even When Forced To Use Waterfall.

## Rule #1: take your time

Hopefully, your estimation meeting will be more fortunate than mine (see Part I).

In my previous Scrummerfall experience, since I was forced to produce a *big-planning-up-front* phase, I would always plan 2 or 3 days for it. I was being asked for an estimation when my ignorance of the problem was at its maximum, hence I needed a lot of analysis.

Actually, even if you're using Agile, **the moment you play the story planning game is the exact moment you know the least about your stories**. Dan North, with his Deliberate Discovery, is absolutely right.

The first estimation you will be asked for is the feasibility study: will you be able to deliver, given the current conditions?

In the meeting I described, my boss said “*Don’t be afraid to give an estimation, it’s not a commitment: it’s just a feasibility study*“.

Wrong, wrong! That's exactly the point: the first estimates are the most important ones. The following ones will handle details; they will be tactical. On the contrary, the first estimations are the ones the Business will use to design the Corporate strategy. They are much more delicate: budget will be based on them. You can afford to be much rougher with detail estimations. Certainly not with feasibility ones.

## Rule #2: ask the worker

I was once asked to deliver a big project. The manager gave me requirements and delivery date.

I asked my developers for an estimation. According to their result, we would not be able to honor the deadline.

I had three choices: cut features, ask for a deadline deferment, or resign.

I talked to the manager. I was ready to resign (after all, if he believed we could deliver and I didn't, it meant he didn't trust my opinion, so I had no reason to keep my role any longer).

Luckily, the deadline was deferred and I kept my role.

A team leader should always ask her developers for an estimation, since this will translate into a team commitment.

A team leader doesn't have all the details needed to produce an estimation by herself. She must rely on the judgement of the ones who will actually work on the task.

**In the estimation phase, the developers' judgement wins. Never ever reject their result**. As the project or team leader, you are acting just as a proxy between business and workers, hence between the Company's wish and the Company's actual capability to deliver.

**Never, ever force a deadline date against developers’ estimation**. I know it’s a hard decision. **Be prepared to resign, if your boss won’t understand**.

## Rule #3: everyone must agree

Do a breakdown of the project. When the project is broken down into stories that developers can estimate, **ask them to reach a perfect agreement on each story**.

Why couldn’t you just calculate an average value?

Because if your developers don't agree on an estimate, it means **at least one of them foresees a risk**. Risks, that is, unknown events, are the only enemies of estimation. An agreement among developers is the only effective weapon you have to make an estimate hold.

Never accept an estimation until all developers agree on it.

Calculating the average of estimates simply hides disagreements, and increases the risk that your estimation will fail.

Sometimes programmers are overly optimistic in their estimates. Sometimes they are overly pessimistic in their estimates.

You know what? I think they have their reasons: I don't want to remove or hide these biases. When there's a disagreement, I think no technique can automatically solve the dilemma; a human interaction is needed.

Let programmers discuss, they know reasons that the Excel `AVG()` function doesn’t know.
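To make the point concrete, here is a minimal Python sketch (the function name is mine, not from the post) of why the spread of the estimates, not their average, carries the risk signal:

```python
def summarize(estimates):
    """Report the average, the spread, and whether the team agrees.

    The average alone hides disagreement; the spread exposes it.
    """
    avg = sum(estimates) / len(estimates)
    spread = max(estimates) - min(estimates)
    return avg, spread, spread == 0

# Two developers far apart: the average looks harmless,
# but the spread tells you at least one of them foresees a risk.
avg, spread, agreed = summarize([2, 13])  # avg 7.5, spread 11, agreed False
```

An `AVG()` of 7.5 looks like a reasonable middle ground; the spread of 11 is what tells you a discussion is needed.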

## Rule #4: Fibonacci is your friend

Ask your developers to estimate using Fibonacci cards. That means that, when estimating a single story, each developer can bet it will require 1 unit, or 1 + 1 = 2 units, or 1 + 2 = 3 units, or 2 + 3 = 5 units, or 3 + 5 = 8 units, or 5 + 8 = 13 units and so on, where a unit can be "one hour" or "one day" or whatever (consistent) unit of time (or effort) you agree on with your team.

The concept is simple: fine granularity makes sense when discussing tasks that will require few units of time; the bigger the task, the less information a team has to discuss its details.

Just print an ordinary set of planning cards and let your developers find an agreement.

Note the presence of the 0 card: when a developer estimates 0, she's communicating that she believes the feature has already been developed.

Cards are only a trick. Soon your developers will learn to estimate without them. They are just a visual game, good for communicating the rule.
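The card values above can be generated mechanically. A small sketch, assuming a deck capped at 13 (the function name and cap are mine):

```python
def planning_deck(top=13):
    """Build the planning-card values: 0, 1, 2, then Fibonacci sums up to `top`.

    The 0 card means "this feature has already been developed".
    """
    deck = [0, 1, 2]
    while deck[-1] < top:
        deck.append(deck[-1] + deck[-2])  # e.g. 2 + 3 = 5, 3 + 5 = 8, ...
    return deck

print(planning_deck())  # [0, 1, 2, 3, 5, 8, 13]
```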

## Rule #5: the optimistic developer wins

If two developers don't agree, be happy: it means they need a more accurate analysis. Reaching an agreement will require a discussion, and your estimation will surely benefit from it.

In order to avoid a never-ending discussion, or, worse, a withdrawal by one of the developers, set this rule: **when developers don't agree, the one who bet the least is the winner**.

Well, there's no winner at all, since the estimation game is won only when all developers agree.

Let’s say that, in a discussion, **the developer who bet the highest estimation must convince the others**.

If you adopt the opposite rule, you raise the risk that developers will accept a higher estimation even if they don't agree with it, because they will be sure they can complete the task in time. On the contrary, the developer who bet a lower estimation can still be convinced, in case the other developers saw a risk he didn't notice.

## Rule #6: do both pessimistic and optimistic estimations

Sometimes things go wrong. Actually, Murphy was an optimist. Things often go bad.

After having estimated each story, I used to ask my developers to **switch their mindset and assume the worst**, then estimate each story again.

I used to tell them to ask themselves the questions I extracted from The Art of Agile Development:

- Imagine it’s a year after the project’s disastrous failure and you are being interviewed about what went wrong: what happened?
- Imagine your best dreams for the project, then write down the opposite
- How could the project fail without anyone being at fault?
- How could the project fail if it were the stakeholders’ faults? The customers’ fault? Testers? Programmers? Management? Your fault?
- How could the project succeed but leave one specific stakeholder unsatisfied or angry?

I noticed it was much better to estimate all the stories with an optimistic mindset, then all of them with a pessimistic one, rather than switching from optimistic to pessimistic for each single story. In fact, switching mindset is not easy: it requires a little setup time and, if required too many times, tends to give the developer the illusion she just needs to double the optimistic estimation.

On the contrary, when estimating with the optimistic mindset, developers should suppose the best case: no impediments, no delay, no unexpected events.

Neither the pessimistic nor the optimistic estimation should be seen as "the most probable": they just draw a sort of reasonable upper and lower limit.

That means that you should **use range estimates rather than single point ones**.

In his great post Stop Using Single Point Estimates, Wyatt Greene offers several arguments in favor of range estimates.

He writes:

Estimating how long it will take to develop software is difficult. Fortunately, as an industry we’ve moved away from big-planning-up-front, exhaustive Gantt charts and toward a more agile approach. Unfortunately, we’ve stuck with single point estimates which have some significant disadvantages when compared to range estimates.

[…]

Single Point Estimates Hide Useful Information.

There is only one piece of information in a single point estimate. There are three pieces of information in a range estimate. In addition to the happy and sad estimates, the length of the range is valuable information. It tells how much risk or uncertainty is in the time estimate. Why throw away this useful information?

In Range Estimation versus Point Estimation Jordan Bortz proposes a very convincing argument in favor of range estimates:

Imagine, if you will, two types of gunsights. One the familiar cross type, and the other with a small circle around where the middle of the cross would be.

The odds that the bullet, will go exactly into the middle of the cross, is very unlikely for a number of factors, but the odds that it would go somewhere into a small circle, are in fact, quite high.

This is called the circular error probability. It is the same when playing golf. You may not hit it to a particular spot, 175 yds away, but you might reliably hit it in a circle somewhere between 170 and 180 yds away.

## Rule #7: measure developer’s bias

According to Wyatt Greene

Range Estimates Remove Bias

It’s quite funny, since I’m using range estimates exactly for the opposite reason:

Range Estimates Measure Bias

Suppose a developer gave two estimations: the worst case and the best case. Should I take the average value? What meaning would this value have?

I believe: none.

Instead, I used to ask: "*after the best and the worst case, please declare your bias; let's measure how pessimistic or optimistic you are*".

Well, actually, that was not the question. It was:

Please, estimate the most probable case.

Do you think 3 estimations are too much work?

I don't. I think 3 weekends spent working because of a wrong estimation are worse.

## Rule #8: estimation is a probabilistic distribution

Estimations aren't just ranges: by estimating the stories and then estimating the worst case for each of them, you are claiming the time needed to produce the software is *probably* somewhere between the lower value and the higher value.

You cannot exclude that you will need even more than the worst case. In fact, the estimation produced with the pessimistic mindset is not the "worst" case at all: there's no limit to "worst".

You can assume things are well described by a normal distribution.

I'm not saying "*things follow a normal distribution*". They don't. I just think the normal distribution is a good enough approximation.

Now an important point (actually, another point I and Wyatt see differently): **the range you communicate is not [best case, worst case]**.

Since estimation is a probabilistic problem, I wish to know:

Given the best case, the worst case and a measurement of developers’ opinion of how things will go,

what’s the range that represents a reasonable probability of success?

I used to follow the suggestion by Putnam and Myers and by Roger S. Pressman, and I ended up with a spreadsheet to calculate the range spanning 2 standard deviations: that is, **a range representing a 68% probability of guessing right**. Naturally, if you wish to have a higher probability of success, the range will grow. That makes sense.

Find here a Google Spreadsheet sample, and here the Excel version.

They use the formula

S = (Sopt + 4 × Sprob + Spess) / 6

where S is the expected value, Sopt the optimistic estimate, Sprob the most probable one, and Spess the pessimistic one.
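The formula and the 68% range can be sketched in a few lines of Python. A sketch under stated assumptions: the function name is mine, and sigma = (Spess − Sopt) / 6 is the conventional PERT standard deviation, not something shown in the spreadsheets themselves:

```python
def pert(s_opt, s_prob, s_pess):
    """Beta/PERT approximation: expected value plus a ~68% range.

    sigma = (s_pess - s_opt) / 6 is the conventional PERT standard
    deviation; expected +/- sigma spans 2 sigmas in total, which covers
    about 68% of a normal distribution.
    """
    expected = (s_opt + 4 * s_prob + s_pess) / 6
    sigma = (s_pess - s_opt) / 6
    return expected, (expected - sigma, expected + sigma)

expected, (low, high) = pert(3, 5, 13)  # expected 6.0, range ~(4.33, 7.67)
```

Note how the width of the range grows with the gap between the optimistic and pessimistic bets: more disagreement between mindsets means more declared uncertainty.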

Cheers.

Very good post!

My 2 cents:

– estimation is an iterative process; don't feel forced to create a "definitive" estimation the first time. Instead, it is better to have an estimation for the tasks where developers agree, plus time-boxed activities to investigate the tasks where there is disagreement. The complete estimation will then be created after these activities.

– measure actual performance and compare it with estimations. Try hard to capture the reason for a wrong estimation (retrospective?). Without this kind of feedback, estimation is just a measure of the developers' feelings.

Exactly. You are right.

As a matter of fact, in this post I missed the argument you are raising: estimation is an iterative process. My motto is "an estimation cannot be true or false. It can just be increasingly refined". Very good point.

About your second note, it's true as well. I believe it's worth a separate post.

Thanks, Gian Marco