
 
Managed Chaos
Naresh Jain's Random Thoughts on Software Development and Adventure Sports
     

Introducing Churn

Saturday, October 22nd, 2011

How do you destroy a team? Introduce various forms of churn:

  • Have Product Management change the high-level vision and priorities frequently.
  • Stop the team from collaborating with other teams, architects, and important stakeholders.
  • Make sure testing and integration are done late in the cycle.
  • As soon as a team member gains enough experience on the team, move him/her out to play other roles.
  • In critical areas of the system, force the team to produce a poor implementation.
  • Structure the teams architecturally so that there are heavy inter-dependencies between them.
  • Very closely monitor the team’s commitments and make sure they feel embarrassed for estimating wrongly.
  • Ensure the first 15-30 minutes of every meeting are spent on useless talk before getting to the crux of the matter.
  • Measure churn and put a clear process in place to minimize churn 😉

Story Points and Velocity: Recipe for Disaster

Saturday, December 4th, 2010

Every day I hear horror stories of developers being harassed by managers and customers for not having a predictable/stable velocity. Developers are penalized when their estimates don’t match their actuals.

If I understand correctly, the reason we moved to story points was to avoid this public humiliation of developers by their managers and customers.

It has probably helped some teams, but the vast majority of teams today are no better off than before, except that they now have one extra level of indirection: story points, and then velocity.

We can certainly blame the developers and managers for not understanding story points in the first place. But will that really solve the problem teams are faced with today?

Please consider reading my blog post on story points as a relative complexity estimation technique. It will help you understand what story points are.

Assuming you know what story point estimates are, let’s consider some user stories with different story points, which give us relative complexity estimates.

Then we pick up the most important stories (with different relative complexities) and try to do those stories in our next iteration/sprint.

Let’s say we finish 6 user stories by the end of this iteration/sprint. We add up the story points of every completed story and call that our velocity.

Next iteration/sprint, we say we can pick up roughly the same total number of story points, based on our velocity, and we plan our iterations/sprints this way. We find the velocity oscillating each iteration/sprint, which in theory should normalize over a period of time.

But do you see a fundamental error in this approach?

First we said that a 2-point story is not twice as big as a 1-point story. A 1-point story might take 6 hrs to implement, while a 2-point story takes 9 hrs. That non-linearity is exactly why we assigned arbitrary numbers (the Fibonacci series) to story points in the first place. But then we go and add them all up.

If you still don’t get it, let me explain with an example.

In the nth iteration/sprint, we implemented 6 stories:

  • Two 1-point stories
  • Two 3-point stories
  • One 5-point story
  • One 8-point story

So our total velocity is ( 2*1 + 2*3 + 5 + 8 ) = 21 points. In 2 weeks we got 21 points done, hence our velocity is 21.

Next iteration/sprint, we’ll take:

  • Twenty-one 1-point stories

Take a wild guess what would happen?

Yeah, I know: hence we don’t take just one iteration/sprint’s velocity, we take an average across many iterations/sprints.

But it’s a big stretch to take something that was inherently never meant to be mathematical or statistical in nature and calculate a velocity from it.
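To make the non-additivity concrete, here is a minimal back-of-the-envelope sketch. The hours assigned to each point size are made-up illustrations (only the fact that cost doesn’t scale linearly with points matters):

```python
# Hypothetical hours to implement a story of each point size.
# These numbers are illustrative assumptions, not real project data;
# the only thing that matters is that cost is not linear in points.
hours_per_point = {1: 6, 3: 14, 5: 20, 8: 30}

# Iteration n: two 1s, two 3s, one 5, one 8 -> "velocity = 21 points"
iteration_n = [1, 1, 3, 3, 5, 8]
points_n = sum(iteration_n)                              # 21 points
hours_n = sum(hours_per_point[p] for p in iteration_n)   # 90 hours

# Iteration n+1: twenty-one 1-point stories -> also "21 points"
iteration_n1 = [1] * 21
points_n1 = sum(iteration_n1)                            # 21 points
hours_n1 = sum(hours_per_point[p] for p in iteration_n1) # 126 hours

print(points_n, points_n1)  # same velocity on paper: 21 vs 21
print(hours_n, hours_n1)    # very different real effort: 90 vs 126
```

Two iterations with an identical 21-point “velocity” can represent very different amounts of actual work, which is the arithmetic error the example above points at.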

If velocity averages out over a period of time anyway, then why not just count the number of stories and use that as your velocity, instead of doing story points?

Over a period of time, stories will roughly be broken down into similarly sized stories, and even if they aren’t, they will average out.

Isn’t that much simpler (with about the same amount of error) than doing all the story point business?
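Here is a small simulation of that averaging-out claim, under assumed (made-up) story mixes; both the points average and the plain story count settle down to a stable number over enough iterations:

```python
import random

random.seed(7)

# Simulate 20 iterations. Each iteration completes a random mix of
# stories whose point sizes come from a Fibonacci-ish scale.
# (All numbers here are illustrative assumptions, not real data.)
sizes = [1, 2, 3, 5, 8]
iterations = [[random.choice(sizes) for _ in range(random.randint(5, 9))]
              for _ in range(20)]

avg_points = sum(sum(it) for it in iterations) / len(iterations)
avg_stories = sum(len(it) for it in iterations) / len(iterations)

# Both averages stabilize; the story count is just as usable a
# planning signal, without any story-point bookkeeping.
print(f"average velocity in points:  {avg_points:.1f}")
print(f"average velocity in stories: {avg_stories:.1f}")
```

If the average story count is a stable signal, tracking it directly saves the whole estimation ceremony while carrying roughly the same error.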

I used this approach for a few years and certainly benefited from it. No doubt it’s better than upfront effort estimation. But is this the best we can do?

I know many teams that don’t do effort estimation or relative complexity estimation and have moved to a flow model instead of trying to fit things into a box.

Consider reading my blog post Estimation Considered Harmful.

Iterations are High Ceremony

Saturday, January 3rd, 2009

Myth: If you do iterations or sprints, you are Agile!

A few companies, like Industrial Logic and Yahoo, and individuals like David Anderson, Arlo Belshee, Fred George, and many more, have paved the way for iteration-less agile. The Poppendiecks and the Lean community have been a big influence. So it’s perfectly fine if you think iterations or sprints are high ceremony.

Back in 2003-2004, when I was working on an offshore maintenance project, we got rid of iterations and it helped us a great deal. Since then I’ve been on multiple projects where we’ve done iterations/sprints, and I’ve always felt that we were wasting a lot of time. On some projects I’ve been able to influence the team to get rid of iterations and focus on throughput instead. Those teams have not looked back since.

But then the question is:

Why do XP and Scrum have the whole iteration/sprint business?

As originally defined in Scrum, a Sprint is a fixed time box (1 month), at the end of which you deliver working software to the customers. What this really means is that you deploy working software the customer can use, so it is as good as a release. In some cases it might be an internal release instead of a public one. Irrespective, I believe that Sprint == Release (in some form), as originally intended.

XP, on the other hand, has the concepts of both iterations and releases, which means at the end of an iteration you don’t necessarily release software to your customers, but you do demo it to them. Sometimes the demo is also a working session.

Nevertheless, as I understand it, the advantage of a fixed time box (iteration/sprint) is that the team gets x amount of time in which it is not disturbed, and in return the team commits to deliver x features (belonging to one or more goals). To be able to commit, the team needs to do some form of analysis and estimation to be comfortable with that commitment.

If you talk to Jeff Sutherland about why he decided to make Sprints fixed time boxes, he will tell you that at that point developers were constantly being interrupted by their customers/managers, leading to context switches. This was yielding very poor productivity and a lack of focus. The time box addressed this issue by keeping the customers and managers “out of the room” for the duration, and also helped the developers set a clear focus on what needed to be achieved.

Makes sense?

So if you don’t really have these 2 fundamental issues (or if you can educate people about them), why would you want to go through all this ceremony of planning and estimating the time boxes? Especially when you are not even using these Sprints as actual releases.

The time-boxed way of doing things forces you towards push scheduling. While this model is successful, it has some drawbacks, and hence more and more people (especially the lean software community) are moving towards pull scheduling, where there are no artificial time boxes. As and when the features are ready (fully tested), they are deployed. We don’t batch a bunch of features (user stories) together as we do in Sprints. Instead we focus on the smooth flow of features from inception to deployment, trying to avoid waiting time and thus reduce inventory. Simply applying queuing theory, you’ll understand that the smaller the batch size, the better the throughput.
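The queuing-theory intuition can be sketched with Little’s law (average work-in-progress = throughput × lead time). The throughput and batch sizes below are hypothetical numbers chosen purely for illustration:

```python
# Little's law: average work-in-progress L = throughput λ × lead time W,
# so W = L / λ. Holding throughput fixed, less work in flight (smaller
# batches) means a shorter lead time from inception to deployment.
# All numbers are illustrative assumptions.

throughput = 5.0  # features completed per week (λ)

# Lead time in weeks for a few different amounts of work in flight.
lead_times = {wip: wip / throughput for wip in (20, 10, 2)}

for wip, weeks in lead_times.items():
    print(f"WIP={wip:2d} -> average lead time = {weeks:.1f} weeks")
```

With the same team throughput, a sprint-sized batch of 20 features waits about 4 weeks on average, while a couple of thin-sliced features in flight flow through in days.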

In my case the batch size is a couple of thin-sliced features. This really helps us keep focus, and we are not unnecessarily spending time context switching. It’s important to note that the time it takes to build these features varies, hence we don’t try to put an artificial sprint boundary around them. Whenever they are ready (ideally in a couple of days), they are deployed. Zero waiting time. In some cases deployment could be internal and in some cases it could be a public release.

I strongly recommend:

Projecting Velocity is Useless

Saturday, January 3rd, 2009

How do you measure velocity?
Based on the number of functionally tested features deployed, OR on the number of estimated story points (or whatever your estimation unit is) completed in a time box?

If you are using story points (Nebulous Units of Time, NUTs), then I don’t think projecting your velocity and drawing conclusions from the projection makes any sense. Because:

  • you are tracking against what you thought at point X in time, and lots of things have changed since then.
  • this method makes a fundamental assumption that the % error in all your estimates is approximately the same: if we are 10% behind schedule, we’ll stay 10% behind schedule as long as we develop at this rate. Personally I don’t think this holds, because your estimation errors vary across your estimates.

In my experience velocity is not a straight line; it’s filled with curves. As the project progresses, you don’t necessarily have a linear velocity. Estimates make you believe that it’s linear. In fact, what I’ve seen is that velocity seems to slow down and pick up. One needs to regularly invest in paying off technical debt, and then the velocity goes up again. There are also other reasons why your velocity will vary and won’t follow a straight line. So projecting a line based on your velocity might be misleading.

By looking at the progress, without making any explicit projection, a team can get an impression of how the project is doing. Usually I take a breadth-first approach across all the important features. What this means is that I don’t have one feature 100% ready while others are 0% complete. I work on the very basic needs of all the top-priority features and later add more sophistication as needed. This gives me a big advantage: now I don’t just vary scope by adding or removing entire features; I can also vary scope by increasing or reducing the sophistication level of each feature.

This approach, in my experience, reduces a lot of delivery- and expectation-related risk. If it is followed, then in the worst case, on the day of delivery, I’ll have most features, if not all, available with varying sophistication instead of varying internal quality. In the best case, I’ll have the right set of features (without any extra/useless ones) fully functional ahead of the promised time.

Estimation Considered Harmful

Thursday, December 25th, 2008

For years I thought I was a poor developer because I could not estimate well. Spending more time and effort on estimation did not really help either (the Predictability Paradox). One day it struck me that maybe this whole practice is flawed and I’m not the only one who finds it difficult to estimate. It’s been 6 years now and I’ve never looked back.

Estimates are a hangover from the waterfall world. For the last 6 years, I’ve been very happy and successful building products and delivering projects without all the estimation-related abracadabra. No more real-time, ideal-time, story-point, function-point nonsense. I’ve realized the key is to focus on the flow of the deliverable, not on whether you are delivering according to the estimates.

It turns out that most people don’t like estimation, but they do it because their management needs it. Usually when management asks for estimates, my question to them is: how will this estimate really help? What if I said 6 months and delivered in 12 months? What would really change? And if something really changes, will you do something about it now, or can you handle it a little later? Do you really need to know now, or can you watch the rate at which we deliver useful features and plan those other activities later? And by the way, how many times has your team delivered according to their estimates? If they did, did the thought ever cross your mind that the estimates were heavily buffered? (The bloat effect!)

I also commonly use the “throwing the dice” exercise to educate people about estimates. In this exercise, I ask the estimate-obsessed person to estimate how long it will take them to throw three consecutive sixes on a die. Guess what: the person says, “How can I predict that?” Well, there is probability theory and other great work that can be used, but ….
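The probability theory hinted at above can be sketched with a quick simulation. Theory gives an expected 6 + 6² + 6³ = 258 rolls before three consecutive sixes, but the spread around that number is the real lesson:

```python
import random

random.seed(42)

def rolls_until_three_sixes():
    """Roll a fair die until three consecutive sixes appear; return roll count."""
    streak = rolls = 0
    while streak < 3:
        rolls += 1
        streak = streak + 1 if random.randint(1, 6) == 6 else 0
    return rolls

# Repeat the experiment many times to see both the average and the spread.
trials = [rolls_until_three_sixes() for _ in range(20_000)]
average = sum(trials) / len(trials)

# The average sits near the theoretical 258 rolls, yet individual
# attempts range from a lucky handful to well over a thousand rolls.
print(f"average rolls: {average:.0f}")
print(f"fastest / slowest attempt: {min(trials)} / {max(trials)}")
```

Knowing the long-run average tells you almost nothing about how long the next attempt will take, which is exactly the point of the exercise.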

In most cases, it turns out that estimation smells of a lack of trust (because of past experience) and an urge to push your risk onto someone else (so that you can point your finger at someone else). IMHO management emphasizes estimates for the wrong reasons, rather than for any true value those numbers provide.

Another thing to consider when we talk about estimates: I can take 1 day to build a login screen, or I can take 1 month to build it. The real question is how much sophistication your users need. In most cases you don’t know this until you see your users (or at least proxy users) use the feature. So why estimate upfront and miss the opportunity to collaborate with your users?

When you start thinking about sophistication, then all of a sudden you start thinking from a budgeting point of view rather than an estimation one. Most people confuse the two and use one for the other as if they were the same concept. What I see myself doing is regularly budgeting features by playing with the sophistication knobs on each feature. This is yet another way to keep your scope negotiable.

When we talk about estimation and its evils, I guess you must be aware of Student Syndrome.

So think about the real value of estimates and the effort spent on it. Is it really worth all the pain?

Does this mean we should embrace uncertainty? Yes and no. To some extent you can’t really predict the future, so go with the flow. But on the other hand, you set up short (really short) feedback loops to correct your direction and hence keep the chaos in check.

    Licensed under
Creative Commons License