Managed Chaos
Naresh Jain's Random Thoughts on Software Development and Adventure Sports

Planning Poker and The Specializing Specialists

Sunday, December 9th, 2012

Planning Poker is certainly a fun and engaging way to do one of the most boring things on earth (STORY POINT ESTIMATION).

If I’m not wrong, James Grenning originally came up with this idea. And my understanding is that Planning Poker is best explained by the fundamental premise of the Wisdom of Crowds (Why the Many Are Smarter Than the Few). Jack Treynor’s Jelly Bean Jar Experiment is a classic example: the average guess of the group is better than the best guess of any individual.
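As a toy illustration of that premise, here is a small simulation (the jar size, number of guessers and error model below are all made up for this sketch): the group’s average guess usually lands closer to the truth than the vast majority of individual guesses.

```python
import random

# Hypothetical jelly-bean-jar setup; all numbers below are invented for illustration.
TRUE_COUNT = 850      # actual number of beans in the jar
NUM_GUESSERS = 50     # people taking their best guess

# Each guess is the true count distorted by a large, independent individual error.
guesses = [TRUE_COUNT * random.uniform(0.4, 1.6) for _ in range(NUM_GUESSERS)]

group_average = sum(guesses) / len(guesses)
average_error = abs(group_average - TRUE_COUNT)
individual_errors = [abs(g - TRUE_COUNT) for g in guesses]

beaten = sum(1 for e in individual_errors if e > average_error)
print(f"Group average guess: {group_average:.0f} (off by {average_error:.0f})")
print(f"The group average beats {beaten} out of {NUM_GUESSERS} individual guesses")
```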

Can we apply this concept to software estimation? Maybe. But there are a few small differences. In the case of the jelly bean experiment:

  • Everyone in the group is taking their best guess. There isn’t much prior experience or historical data to look at.
  • There are zero consequences if they get it wrong. They will not be held accountable for this number.
  • Also, everyone, more or less, is equally good at the task at hand (counting).

Now, what happens when we apply the same approach to highly specialized software development, where the skills involved are quite diverse and each person’s involvement varies widely? A complete mess, IMHO!

One way to address this issue is to embrace the generalizing specialist approach. If nothing else, in the long run, these folks would become great assets to the company.

But if you don’t want to go down the generalizing specialist route, and you have people in silos with deep pockets of knowledge/skill, can planning poker work? After trying it for 3-4 years, I have not seen it work. But you should certainly give it an honest try. After all, <sarcasm>it’s the industry best practice.</sarcasm>

The other argument I often hear in favor of planning poker is that when teams do this, they learn so much from each other. I agree 100% with this observation. Certainly, getting a group of people together and asking them to estimate (commit to) something gets them to talk to each other and share their viewpoints, and that leads to knowledge sharing. IMHO knowledge sharing is extremely important. So why not have focused sessions just for knowledge sharing? Brown-bag sessions, show-n-tell sessions, code walk-throughs, idea-fests, scratch-your-personal-itch days, refactoring-fests and many more. Why bury something so important under the estimation/planning meeting banner, where the priorities are different? People don’t go into a planning session saying, “Wow, today all of us are going to learn something new.”

Also, I would suggest reading my blog post Story Points and Velocity: Recipe for Disaster.

Story Points and Velocity: Recipe for Disaster

Saturday, December 4th, 2010

Every day I hear horror stories of how developers are harassed by managers and customers for not having predictable/stable velocity. Developers are penalized when their estimates don’t match their actuals.

If I understand correctly, the reason we moved to story points was to avoid this public humiliation of developers by their managers and customers.

It’s probably helped some teams, but the vast majority of teams today are no better off than before, except that now they have one extra level of indirection: story points and then velocity.

We can certainly blame the developers and managers for not understanding story points in the first place. But will that really solve the problem teams are faced with today?

Please consider reading my blog post Story Points: Relative Complexity Estimate. It will help you understand what story points are.

Assuming you know what story point estimates are, let’s consider that we have some user stories with different story points, which give us a relative complexity estimate.

Then we pick up the most important stories (with different relative complexities) and try to do those stories in our next iteration/sprint.

Let’s say we end up finishing 6 user stories at the end of this iteration/sprint. We add up the story points of each completed user story and say that’s our velocity.

Next iteration/sprint, we say we can roughly pick up the same amount of total story points based on our velocity. And we plan our iterations/sprints this way. We find an oscillating velocity each iteration/sprint, which in theory should normalize over a period of time.

But do you see a fundamental error in this approach?

First we said that a 2-point story does not mean 2 times bigger than a 1-point story. Let’s say a 1-point story might take 6 hrs to implement, while a 2-point story takes 9 hrs. Hence we assigned these random-looking numbers (the Fibonacci series) to story points in the first place. But then we go and add them all up.
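To make that concrete with the same illustrative figures (the 6 hr and 9 hr numbers are just the example above, not real data), here is a tiny sketch of why adding points as if they were linear units breaks down:

```python
# Illustrative effort figures taken from the example above; not real data.
hours_for = {1: 6, 2: 9}   # a 1-point story ~6 hrs, a 2-point story ~9 hrs

# If points were truly additive, a 2-point story should cost as much as two 1-point stories.
implied_hours = 2 * hours_for[1]   # 12 hrs, if "2 points = 2 x 1 point"
actual_hours = hours_for[2]        # 9 hrs in the example

print(f"Implied by adding points: {implied_hours} hrs, actual: {actual_hours} hrs")
# Summing story points therefore adds up numbers that don't share a linear unit.
```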

If you still don’t get it, let me explain with an example.

In the nth iteration/sprint, we implemented 6 stories:

  • Two 1-point stories
  • Two 3-point stories
  • One 5-point story
  • One 8-point story

So our total velocity is ( 2*1 + 2*3 + 5 + 8 ) = 21 points. In 2 weeks we got 21 points done, hence our velocity is 21.
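In code, that velocity calculation is nothing more than a sum over the completed stories (a minimal sketch mirroring the example above):

```python
# Story points of the 6 stories completed in the nth iteration/sprint (from the example above).
completed_story_points = [1, 1, 3, 3, 5, 8]

velocity = sum(completed_story_points)
print(f"Velocity for this sprint: {velocity} points")   # (2*1 + 2*3 + 5 + 8) = 21
```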

Next iteration/sprint, we’ll take:

  • Twenty-one 1-point stories

Take a wild guess what would happen?

Yeah I know, hence we don’t take just one iteration/sprint’s velocity, we take an average across many iterations/sprints.
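For completeness, that averaging is just as mechanical (the sprint velocities below are hypothetical); it smooths the oscillation without changing what is being added up:

```python
# Hypothetical velocities from the last few iterations/sprints.
recent_velocities = [21, 13, 25, 18]

average_velocity = sum(recent_velocities) / len(recent_velocities)
print(f"Average velocity over {len(recent_velocities)} sprints: {average_velocity:.1f} points")
# The oscillation is smoothed, but we are still averaging sums of numbers
# that were never meant to be added in the first place.
```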

But it’s a really big stretch to take something that was inherently not meant to be mathematical or statistical in nature and calculate velocity based on it.

If velocity averages out over a period of time anyway, then why not just count the number of stories and use that count as your velocity, instead of doing story points?

Over a period of time, stories will roughly be broken down into similar-sized stories, and even if they aren’t, the differences will average out.

Isn’t that much simpler (with about the same amount of error) than doing all the story point business?
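Here is a sketch of that simpler alternative: count completed stories per sprint and use the count as the planning number (the sprint data below is hypothetical):

```python
# Hypothetical completed stories per sprint; each inner list holds one sprint's story points.
sprints = [
    [1, 1, 3, 3, 5, 8],     # 6 stories, 21 points
    [1, 2, 2, 5, 8],        # 5 stories, 18 points
    [1, 1, 1, 3, 5, 5, 8],  # 7 stories, 24 points
]

count_velocity = sum(len(s) for s in sprints) / len(sprints)
point_velocity = sum(sum(s) for s in sprints) / len(sprints)

print(f"Average stories completed per sprint: {count_velocity:.1f}")
print(f"Average points completed per sprint:  {point_velocity:.1f}")
# Planning on the story count is simpler and, over time,
# carries roughly the same amount of error as the point arithmetic.
```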

I used this approach for a few years and certainly benefited from it. No doubt it’s better than upfront effort estimation. But is this the best we can do?

I know many teams who don’t do effort estimation or relative complexity estimation at all and have moved to a flow model instead of trying to fit things into a box.

Consider reading my blog on Estimations Considered Harmful.

Story Points: Relative Complexity Estimate

Saturday, December 4th, 2010

If we have 2 user stories (A and B), I can say A is smaller than B; hence A is fewer story points compared to B.

But what does “smaller” mean?

  • Less Complex to Understand
  • Smaller set of acceptance criteria
  • Have prior experience doing something similar to A compared to B
  • Have a rough (better/clearer) idea of what needs to be done to implement A compared to B
  • A is less volatile and vague compared to B
  • and so on…

So, A is fewer story points compared to B. But clearly we don’t know how much longer it’s going to take to implement A or B.
Hence we don’t know how much more effort and time will be required to implement B compared to A. All we know at this point is that A is smaller than B.

It is important to understand that story points are a relative complexity estimation technique, NOT an effort estimation (how many people, how long will it take?) technique.

Now if we had 5 stories (A, B, C, D and E) and applied the same thinking, we could come up with different buckets into which to put these stories.

Small, Medium, Large, XL, XXL and so on….

And then we can say all stories in the Small bucket are 1 story point, all stories in the Medium bucket are 3 story points, Large are 5 story points, XL are 8 story points and so on…
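A minimal sketch of that bucketing (the story names, the bucket assignments and the XXL value are hypothetical, just continuing the series above):

```python
# Non-sequential (Fibonacci-style) points per bucket, as described above.
points_for_bucket = {"Small": 1, "Medium": 3, "Large": 5, "XL": 8, "XXL": 13}

# Hypothetical relative sizing of the five stories A..E into buckets.
stories = {"A": "Small", "B": "Large", "C": "Medium", "D": "Small", "E": "XL"}

for story, bucket in stories.items():
    print(f"Story {story}: {bucket} -> {points_for_bucket[bucket]} story points")
```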

Why do we give these random numbers instead of sequential numbers (1,2,3,4,…)?

Because we don’t want people to get confused into thinking that a medium story with 2 points is 2 times bigger than a small story with 1 point.

We cannot apply mathematics or statistics to story points.

Licensed under a Creative Commons License