The following question on LinkedIn caught my eye as something that is often assumed in project management and estimation efforts.

Testing hours as a function of development hours.

Does it make sense to say that the number of testing hours in a project should be a function of the number of development hours, such as
TestingHours = X% * DevHours? If so, what should X% be? What considerations would make it lower or higher? Are there industry standards around this?

My Response:

In a word - no.

There are just too many variables at play, some of which include:

  • type of testing (functional, regression, performance / stress /load, usability, automated / manual, UI / API, back end / white box, browser / multi-platform compatibility)
  • strength of your requirements definition process / artifacts and likelihood of disagreement between the business and the developers which QA must arbitrate. Also, how soon QA is engaged in the project life cycle.
  • strength of the bug list triage and management process and health of communications between all involved. Related issue: is any part of the project outsourced / off-shored?
  • system complexity
  • maturity of system (getting version 1 through QA may take more effort than getting version 2 out depending on the level of innovation between versions)
  • strength of development unit and integration testing (manual or automated)
  • quality risk assumption comfort level / industry quality requirements (medical device / financial services / flight control would be examples with high quality requirements)
  • project time line
  • time line compression - the more a project time line is compressed from its natural length (overall or in any of the phases before QA), the greater the quality risk. This is a paradox, though, as the forces that tend to compress a project time line are usually unforgiving of long QA cycles. You can bend a time line, but eventually the project will break.
  • number and type of users / diversity of their activities with the system (related to system complexity)
  • time to market as a strategic need to break new ground; if so, I would rather reduce the feature set than compromise on quality
  • QA build frequency - test concurrently as development proceeds rather than wait for the final build. Some rework will be required but this is well worth the many benefits.
  • headcount ratio between development and QA
  • seniority of staff in BA, development and QA
  • development and QA tools
  • UAT / beta / release candidate process factors
  • prototype activities / early access releases.
  • project manager strength (natural ability, experience, and empowerment to minimize scope creep) and attention level. The same applies to the software architect / development manager and the business lead if they are not the PM. Take the product of this factor across all three roles, then apply a communications frequency and health factor to that.

Having said that, I expect that most teams arrive at a testing-hours percentage of development hours that works for them and can be applied as a rule of thumb as new projects are conceived. Take the last successful project of similar size and complexity as a baseline.

I've never thought of it before, but I wonder if you could apply the concepts of XP story points and velocity as an estimation tool for future work. This would fit with the statements above about the factors that are unique to your environment, and it would require measurement before use. You would write stories for testing (separate from development stories) and measure velocity. I would not look for a correlation between developer story size and QA story size, or between developer velocity and QA velocity; although such correlations may appear to emerge, I fear they could be deceptive.
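To make the velocity idea concrete, here is a minimal sketch of how the measurement could work. All numbers and function names are hypothetical, invented for illustration: the team would record QA story points completed per iteration, compute an average velocity, and use it to project how many iterations the remaining testing stories will take.

```python
import math

def average_velocity(points_per_iteration):
    """Mean QA story points completed per iteration (measured, not assumed)."""
    return sum(points_per_iteration) / len(points_per_iteration)

def iterations_needed(remaining_points, velocity):
    """Whole iterations required to finish the remaining testing stories
    at the measured QA velocity."""
    return math.ceil(remaining_points / velocity)

# Hypothetical example: QA completed 8, 10, and 12 points over the last
# three iterations, and 45 points of testing stories remain.
qa_velocity = average_velocity([8, 10, 12])    # 10.0 points per iteration
print(iterations_needed(45, qa_velocity))      # prints 5
```

Note that the velocity here is measured from the QA team's own completed testing stories, consistent with the point above: no ratio to developer velocity is assumed.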

What do you think? Respond through comments or to the LinkedIn Question here.
