Different Units of Estimation

A while ago the structure of our project team changed and it became distributed: several very senior developers in another country joined us to assist in implementing certain features crucial to the project.

This was a reasonable decision because the new developers had deep knowledge of the underlying technology, and that knowledge could now be spread across the whole team. Since they were consultants brought in by the customer, they were also located close to the customer and could communicate with them often. This benefited us as well: with their help we could get answers to our questions quickly.

However, when we started estimating our first sprint together, we soon found that team members had different expectations about how much scope could be delivered per day.

In our former collocated team, we used perfect engineering hours to estimate backlog items. Perfect hours are hours of net time spent on a task, without interruptions for meetings, breaks, assistance to other team members, or anything else. The average number of perfect hours a team can complete during a sprint is a metric that is highly specific to each team, and it only becomes relatively stable after the team has done 3-5 sprints together.

The new guys, however, used real (calendar) hours. To illustrate our predicament: the new developers assumed they would be able to implement tasks estimated at 6 real hours within one business day, while our team's technical lead, who used perfect hours as the unit of measure, knew from experience that our team members would not be able to implement tasks estimated at more than 3 perfect hours in one day.

You can see how different team members using different units of measure, with different expectations about velocity, could be confusing. While the new developers used real hours and anticipated a relatively low percentage of their time being spent in meetings, our developers used perfect hours, which let them reserve time not only for communicating with one another, but also for some refactoring and unit test implementation.
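
To make the gap concrete, here is a small sketch of the arithmetic. The hours-per-day figures are the ones quoted above; the helper function and the sample task are purely illustrative:

```python
# Rough illustration of how the same nominal estimate maps onto the calendar
# under the two unit conventions. The capacity figures are the assumptions
# quoted above; the sample task is made up.

REAL_HOURS_PER_DAY = 6      # consultants: 6 real hours of task work per business day
PERFECT_HOURS_PER_DAY = 3   # our team: 3 perfect (net) hours per business day

def business_days(estimate_hours: float, capacity_per_day: float) -> float:
    """Business days needed to finish a task, with estimate and capacity in the same unit."""
    return estimate_hours / capacity_per_day

task = 12  # "12 hours" on paper
print(business_days(task, REAL_HOURS_PER_DAY))     # 2.0 days for the consultants
print(business_days(task, PERFECT_HOURS_PER_DAY))  # 4.0 days for our team
```

The same nominal "12 hours" therefore maps to two business days in one plan and four in the other.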

When management was presented with our estimates, their first reaction was that our team looked very slow in comparison to the new guys, and that we needed to find a way to increase our team's velocity to match their 6 hours/day level. We had to explain the difference between the estimation units we used. Eventually we were given the green light to continue with the existing estimates.

To avoid confusion, we organized several virtual sub-teams within our Scrum team:

  • Dev team A (senior developers from our team and the consultant who joined us before the sprint)
  • Dev team B (other developers from our team)
  • Testing team

We also had to reorganize the sprint backlog in order to share it between the two development teams, while the testing team verified the implementation of features developed by both. During sprint planning, each development team provided estimates for its part of the scope using its own estimation units.
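
One way to picture the resulting backlog (a hypothetical sketch, not our actual tooling; the item names and numbers are invented) is as a single list of items tagged with the owning sub-team and the unit that team estimates in, so that totals are only ever rolled up per team:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    team: str        # "Dev team A", "Dev team B", or "Testing team"
    estimate: float
    unit: str        # "real hours" for Dev team A, "perfect hours" for the rest

sprint_backlog = [
    BacklogItem("Feature X: core integration", "Dev team A",   12, "real hours"),
    BacklogItem("Feature Y: order module",     "Dev team B",    6, "perfect hours"),
    BacklogItem("Verify features X and Y",     "Testing team",  4, "perfect hours"),
]

# Because the units differ, a single grand total would be meaningless;
# sums are only reported per team, in that team's own unit.
for team in ("Dev team A", "Dev team B", "Testing team"):
    total = sum(item.estimate for item in sprint_backlog if item.team == team)
    unit = next(item.unit for item in sprint_backlog if item.team == team)
    print(f"{team}: {total} {unit}")
```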

In order to track sprint progress, we used three sprint burn-up charts, one for each team. The testing team's burn-up chart was used to track overall sprint progress, because testers had to verify tasks implemented by both development teams.

(To avoid any misunderstanding: "Actual scope" in these charts means the amount of backlog the team committed to deliver during the sprint.)
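
For readers who have not worked with burn-up charts, the data behind each one is simple: a "scope" series showing what the team has committed to, and a cumulative "completed" series climbing towards it. A minimal sketch with made-up numbers, in whichever unit the team estimates in:

```python
# Burn-up data for one team over a ten-day sprint; all numbers are invented.
# The scope series is a list rather than a constant because committed scope
# can change mid-sprint, as described below.

scope_per_day     = [100, 100, 100, 100, 100, 115, 115, 115, 115, 115]  # items added on day 6
completed_per_day = [0,   12,  25,  33,  47,  60,  72,  86,  98, 112]   # cumulative completed work

for day, (scope, done) in enumerate(zip(scope_per_day, completed_per_day), start=1):
    print(f"Day {day:2}: completed {done:3} of {scope} ({done / scope:.0%})")
```

Plotting the two series for each team gives the three charts we used.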

In reality, our sprint backlog wasn't actually fixed. In the middle of the sprint we realized that Dev team A would not be able to complete even half of their portion of the scope during the sprint. At the same time, Dev team B (First Line's developers) was running faster than anticipated, so a few additional items were added to the backlog to utilize their capacity fully.

A common way such charts get misused by management is to succumb to the temptation to compare team velocities. While the desire to get a handle on the overall progress of a project involving several teams is understandable, such comparisons are counterproductive and can lead to many problems, as Mike Cohn explains brilliantly in this post. Instead of comparing velocities across teams, we used the charts to monitor each team's workload. Having this information enabled us to make adjustments early in the sprint.

This time around we were able to complete the sprint and release the product on time and under budget. Not everything went swimmingly, though, and a number of negative takeaways came up in the sprint retrospective; we'll cover some of those in our next posts.
