Story Points and Velocity: Recipe for Disaster

Every day I hear horror stories of how developers are harassed by managers and customers for not having predictable/stable velocity. Developers are penalized when their estimates don’t match their actuals.

If I understand correctly, the reason we moved to story points was to avoid this public humiliation of developers by their managers and customers.

It’s probably helped some teams, but the vast majority of teams today are no better off than before, except that now they have one extra level of indirection: story points, and then velocity.

We can certainly blame the developers and managers for not understanding story points in the first place. But will that really solve the problem teams are faced with today?

Please consider reading my blog post on how story points are a relative complexity estimation technique. It will help you understand what story points are.

Assuming you know what story point estimates are, let’s consider that we have some user stories with different story points, which give us relative complexity estimates.

Then we pick up the most important stories (with different relative complexities) and try to do those stories in our next iteration/sprint.

Let’s say we end up finishing 6 user stories at the end of this iteration/sprint. We add up the story points of all the completed user stories and call that our velocity.

Next iteration/sprint, we say we can roughly pick up the same number of total story points, based on our velocity. And we plan our iterations/sprints this way. Velocity oscillates from one iteration/sprint to the next, but in theory it should normalize over a period of time.
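To make this bookkeeping concrete, here is a minimal Python sketch; the story names, point values, and past velocities are all made up for illustration.

```python
# Minimal sketch of the velocity bookkeeping described above. The story
# names, point values, and past velocities are invented for illustration.
completed_stories = {          # story -> story points (relative complexity)
    "login form": 1,
    "password reset": 1,
    "search filter": 3,
    "csv export": 3,
    "payment gateway": 5,
    "reporting dashboard": 8,
}

# Velocity = sum of the points of every story completed this iteration/sprint.
velocity = sum(completed_stories.values())
print(velocity)  # 21

# Planning: next iteration/sprint we pick the most important stories whose
# points add up to roughly our average velocity so far.
past_velocities = [18, 24, 21]
average_velocity = sum(past_velocities) / len(past_velocities)
print(average_velocity)  # 21.0
```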

But do you see a fundamental error in this approach?

First we said that a 2-point story is not twice as big as a 1-point story. To implement a 1-point story it might take 6 hrs, while a 2-point story takes 9 hrs. That non-linearity is exactly why we assigned numbers from a non-linear scale (the Fibonacci series) to story points in the first place. But then we go and add them all up.

If you still don’t get it, let me explain with an example.

In the nth iteration/sprint, we implemented 6 stories:

  • Two 1-point stories
  • Two 3-point stories
  • One 5-point story
  • One 8-point story

So our total is (2*1 + 2*3 + 5 + 8) = 21 points. In 2 weeks we got 21 points done; hence our velocity is 21.

Next iteration/sprint, we’ll take:

  • Twenty-one 1-point stories

Take a wild guess what would happen?
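Here is a back-of-the-envelope sketch in Python. Only the 1-point (6 hrs) and 2-point (9 hrs) figures come from the argument above; the hours for 3-, 5-, and 8-point stories are invented, just kept sub-linear in the same spirit.

```python
# Hypothetical, non-linear points-to-hours mapping. Only the 1-point (6 hrs)
# and 2-point (9 hrs) figures come from the post; the 3-, 5- and 8-point
# hours are invented, just kept sub-linear in the same spirit.
hours_for_points = {1: 6, 2: 9, 3: 11, 5: 15, 8: 20}

# Iteration n: two 1-point, two 3-point, one 5-point, one 8-point story.
iteration_n = [1, 1, 3, 3, 5, 8]
print(sum(iteration_n))                               # 21 points
print(sum(hours_for_points[p] for p in iteration_n))  # 69 hours

# Iteration n+1: twenty-one 1-point stories -- also "21 points".
iteration_n_plus_1 = [1] * 21
print(sum(iteration_n_plus_1))                               # 21 points
print(sum(hours_for_points[p] for p in iteration_n_plus_1))  # 126 hours
```

Same 21 points of "velocity", but nearly twice the effort.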

Yeah, I know: hence we don’t take just one iteration/sprint’s velocity; we take an average across many iterations/sprints.

But it’s a real stretch to take something that was inherently never meant to be mathematical or statistical in nature and calculate velocity from it.

If velocity averages out over a period of time anyway, then why not just count the number of stories and use that as your velocity, instead of doing story points?

Over a period of time, stories will roughly be broken down into similar-sized pieces, and even if they aren’t, the differences will average out.

Isn’t that much simpler (with about the same amount of error) than doing all the story point business?
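A minimal sketch of the counting alternative, with invented historical counts:

```python
import math

# Story-count "velocity": just count the stories finished per iteration/sprint.
# The historical counts below are invented for illustration.
stories_finished_per_iteration = [6, 8, 5, 7, 6]

average_throughput = sum(stories_finished_per_iteration) / len(
    stories_finished_per_iteration
)
print(average_throughput)  # 6.4 stories per iteration

# Forecast: how many iterations would a 40-story backlog take?
backlog_size = 40
print(math.ceil(backlog_size / average_throughput))  # 7 iterations
```

Counting stories like this is essentially the throughput metric that flow/Kanban teams use, which connects to the flow model mentioned below.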

I used this approach for a few years and certainly benefited from it. No doubt it’s better than upfront effort estimation. But is this the best we can do?

I know many teams who don’t do effort estimation or relative complexity estimation at all; they moved to a flow model instead of trying to fit things into a time box.

Consider reading my blog on Estimations Considered Harmful.


Licensed under a Creative Commons License.