December 11, 2014

Counting Bugs

Written by CC Pace Technology Team

A couple of years ago I started hearing more and more debate about whether the work to fix bugs that got past the customer’s acceptance testing should be counted towards a team’s velocity or not.  I like a good debate as much as the next person, so I joined in with the opinion that velocity is a measure of work done, not value added, so it should be counted with some contextual wiggle room.  (I’ll say more about my reasoning in a moment.)  Since then, though, I’ve come to believe that putting too much energy into debating this question is just a way to avoid the more important, and honestly more difficult, question: What can we do to prevent the bugs from being introduced in the first place?

Let’s start with the debate itself.  There’s a school of thought positing that velocity is a measure of value rather than “just” a measure of effort.  That seems to confuse value with cost, though.  The developers’ estimate for a story is a reasonable representation of the cost of the story, but those estimates specifically do not and should not take the potential value of the story into account.  A customer/product owner cannot prioritize their backlog on cost alone; they must have their own measure of the value of each story and use that in conjunction with the cost information to maximize the value of their project.  As such, there is a specific economic value to fixing a bug[1] and a specific cost, both of which should be considered in prioritizing the fix.  In the context of contracting with an external firm to build software to order, I completely agree that I shouldn’t have to pay directly to fix a regression, but I cannot help but pay the opportunity cost of fixing it.  I should therefore get an estimate for fixing the bug and be able to prioritize it as I see fit.  I’d also expect to have the scope of my project expanded to include the estimates for fixing the bug.
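One way to picture this: once both a cost estimate and the customer’s own value estimate are on the table, a bug fix can compete for priority on the same terms as any story.  Here’s a minimal sketch in Python; the field names and numbers are invented for illustration, not taken from any real backlog.

```python
# Hypothetical backlog: "value" is the product owner's measure,
# "cost" is the developers' estimate. Both are made-up numbers.
backlog = [
    {"item": "New report export", "value": 8, "cost": 5},
    {"item": "Fix regression in login", "value": 13, "cost": 2},
    {"item": "Cosmetic UI tweak", "value": 1, "cost": 1},
]

# Highest value-to-cost ratio first; the bug fix is prioritized
# exactly like any other piece of work.
prioritized = sorted(backlog, key=lambda s: s["value"] / s["cost"], reverse=True)

for s in prioritized:
    print(f'{s["item"]}: value/cost = {s["value"] / s["cost"]:.1f}')
```

The point isn’t the particular formula (value divided by cost is just one common heuristic); it’s that the fix lands in the queue with an explicit cost attached rather than being treated as free.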

The situation is worse when we switch to the context of internal development: Not only do I have to pay the opportunity cost, but I cannot avoid paying the full economic cost of fixing the bug, and I also create a more hostile relationship with the team.  I could yell at the team and try to make them work overtime, but do I really expect to improve quality by doing so?  And don’t I court the danger of pushing the developers so far that they start to use their actual work hours less effectively than they could?  (See Tom DeMarco’s book Slack for more on this.)  Granted, I do want to track how much effort is being spent fixing bugs, but why shouldn’t I just track this directly instead of pretending I’m not paying for the bugs and creating a hostile relationship with my developers?
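Tracking bug-fix effort directly is as simple as tagging each completed item with its type and summing separately.  A minimal sketch, again with invented data:

```python
# Hypothetical iteration log: every completed item counts toward
# velocity, but bug-fix effort is also visible on its own.
completed = [
    {"type": "story", "points": 5},
    {"type": "story", "points": 3},
    {"type": "bugfix", "points": 2},
]

velocity = sum(item["points"] for item in completed)
bugfix_effort = sum(item["points"] for item in completed if item["type"] == "bugfix")

print(f"Velocity: {velocity}, spent on bug fixes: {bugfix_effort}")
```

If the bug-fix share of velocity creeps up iteration over iteration, that’s the signal to investigate, which is exactly what hiding the fixes from velocity would obscure.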

So, I do have a preference for counting bug fixes in a team’s velocity, but I’ve also come to the conclusion that you’ve probably got a more fundamental issue if you’re getting enough bugs that you want to fight over how to track them instead of trying to understand and eliminate the source of the bugs.  Unfortunately, the sources of bugs are legion:  Maybe the problem is that the system is too complex and the developers can’t figure out how changing one part of it will affect other parts.  (This was a severe problem for me recently while trying to enhance a legacy system.  Because of a lot of duplicated code and hidden coupling, fixing bugs became like playing Whack-a-Mole.)  Maybe the feedback from the acceptance tests is too slow or inconsistent, as often happens when people rely on manual acceptance testing instead of finding a way to automate their acceptance tests.  Maybe the acceptance criteria are vague and the developers and testers interpret them differently than the customer.  Maybe all the testing effort is fine and deployment errors or environmental issues cause the system to behave in unexpected ways.

Alas, there isn’t a general-purpose answer beyond getting the team to really reflect on the problem and finding the people who are sincerely interested in trying to make a great product.

[1] To be clear, by “bug” I mean a regression in a feature that the customer has already accepted.  A story not being completed because it doesn’t meet the acceptance criteria is a different issue.
