Why do we test?

First, let me clarify the question.  I am not asking why any individual started a career in the testing industry, and I am not asking why anyone works at all; obviously we all need to feed our families.  I am looking for the root of what it is about testing that causes management to give us a paycheck at regular intervals.  What value does testing add to the companies we work for that has caused our industry to come into existence?  What problem does testing solve?

You would be amazed at the reactions this basic question generates.  I have impressed managers, developers, stakeholders, and testing experts with it.  I can’t explain why it is so impressive; it should be a basic question we ask and understand before starting any project.  The point of this post, however, is not to explore the origin of the question.  I want to share what I have heard people say in response to it.

This question has been ever-present on my mind for some time now.  While at StarWest I was able to put it to a broad spectrum of testers.  Their responses fall into four basic groups.

To run tests so we can watch them pass.
As laughable as this is, I have heard from several people who fall into this group.  Julian Harty commented that he has observed sections of the industry using testing as a scapegoat: if the product fails in any way, management has the testers to blame for not finding all the problems.  Julian described this as a cheap insurance policy; just blame the testers.  I have personally seen this at one local company.  Every time a significant defect leaks to production, a member of their QA team is fired and life goes on.  This is obviously a terrible reason to exist as a tester.

Testing as an end in itself.
I also heard a lot of testers describe testing as the goal.  These are the testers who love to have tons of useless automation and thousands upon thousands of rigorously documented test cases.  Several people told me that the goal of testing is simply to run as many tests as possible, as often as possible.

When I tried to dig into why these tests needed to be run, little useful explanation was offered.  “We test to make sure all the tests pass” is the mantra of this philosophy.  It seems these testers like nothing more than to see their screens covered in little green check marks as all of their tests pass with flying colors, without ever thinking about why the tests are running in the first place.

To build confidence in our product.
This is probably the most idealistic answer I have heard thus far.  This group defines testing as a process that produces accurate information about the product, which management can use to make well-informed decisions.  With a high level of confidence in the product, management can more easily decide when to ship.  With a low level of confidence, there may be cause to delay release until confidence improves that the product acts as advertised.

While I definitely see value in knowing what a product will do, all that really accomplishes is helping everyone involved sleep a little better at night.  I don’t see this generating enough value on its own to justify the costs of testing, but it is definitely one of the tools we use to reach our real goal.

To prevent harm to our brand and our customers.
James Bach told me, “I test because my clients are worried that something they don’t know about the product will hurt them.”

I think this gives us a definition that can be applied across the board to find the reasons we test.  Our job as testers is to identify the potential sources of harm and the potential victims.  Different products and companies define harm differently: the harm that can be caused by a solitaire program on your personal computer is significantly different from the harm that can be caused by the software running the dialysis machine your daughter or grandmother is attached to.  Some examples of harm include:

  • time wasted on repeating work
  • damaged reputation
  • harm to a customer relying on the product to perform
  • physical harm to a customer
  • lawsuits caused by defects
  • loss of revenue or sales

I personally work in the private sector at a for-profit company.  The goal of our company is to make money, and harm is defined as anything that gets in the way of that goal.  My team exists to protect the company from losing money due to bugs in our software.  In other contexts money may not matter.  In ours it does, a lot.

My question for you: do you know why your testing department exists in your organization?  If you don’t, then please find out.  If you do, then make sure you are leveraging that knowledge to make your testing team better.  If you have any reasons that are not described above, please leave a comment and let me know.  I would love to learn about the portions of the testing industry I have yet to experience.

5 Responses to this post.

  1. Posted by Tim Valenta on 08.10.10 at 9:05 am

    I’ve got an interesting dilemma at my current job, where I’m simply the only technical employee. It adds a new depth to my personal integrity to see whether or not I test adequately or intelligently.

    An advantage to being the only technical employee, with a boss far too ignorant of the technical problems, is that I can’t afford to invent useless tests that serve no real purpose. The flip side is that I’m usually not testing enough, so I litter my code with TODO notes about non-robust areas that need to be improved. It’s hard to hit that perfect balance: testing thoroughly without testing inefficiently, while finding the time to test at all!

    It seems like the cake is either made of bad ingredients (bad tests), or it’s under-baked (good tests, but not enough). The latter is arguably better, incomplete as it is.

    In my current capacity, I see testing as an ingredient for code that works, which is my duty to my colleagues. If I’m hit by a bus tomorrow and they hire a new programmer, my code needs to do exactly what it was designed to do, and it needs to not do anything it was not designed to do. The result is that my code gains integrity from the promise of functionality that testing provides. If the code can be trusted to do exactly what it was intended to do, with no special-case surprises, other coders can more easily trust it for reuse.
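
    To make that concrete, here is a minimal sketch of what I mean, in Python. The normalize_username helper and its rules are invented purely for illustration:

        import unittest

        def normalize_username(raw):
            """Hypothetical helper: trim whitespace and lowercase a username."""
            if raw is None or not raw.strip():
                raise ValueError("username must not be blank")
            return raw.strip().lower()

        class NormalizeUsernameTests(unittest.TestCase):
            def test_does_what_it_was_designed_to_do(self):
                # The promised behavior, pinned down so the next programmer can trust it.
                self.assertEqual(normalize_username("  Wade "), "wade")

            def test_does_not_do_what_it_was_not_designed_to_do(self):
                # No special-case surprises: blank input is an error, not an empty string.
                with self.assertRaises(ValueError):
                    normalize_username("   ")

        if __name__ == "__main__":
            unittest.main()

    Running a file like this with python -m unittest turns the promise of functionality into something the next programmer can verify for themselves.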

    Good topic.

  2. Posted by Jeff Brown on 08.10.10 at 9:05 am

    Tim makes a very good point. It is hard to draw an appropriate line between “bad ingredients” and “under-baking,” as he put it. Personally, I feel that (with exceptions) a code cake cannot actually get ‘over-baked’; the code cannot REALLY be tested too much. Code cakes never burn; they just sit in the oven as they asymptotically approach PERFECTLY well-done quality. I feel there is always a different case or angle or combination of variables that would create another unique test case, always another edge case that one in a million users ends up falling into. After a while, however, you encroach on the law of diminishing returns. Identifying that you have tested ‘enough’ cases and situations is extremely difficult, and as soon as you ACTUALLY draw that line, inevitably an edge case bubbles up that turns out to be extremely harmful in whichever sense of ‘harm’ your company happens to use. I guess that is Murphy’s Law.

    However, with that said, I have a testing philosophy of my own that likely differs from that of the developer whose code I am testing. I always want to find something broken in whatever I am testing. The developer is likely crossing his fingers, just hoping the code compiles … and then actually makes it through one simple test … all the while trying to be gentle with it so it does not freak out and blow up in his face. I, on the other hand, am trying to break it … somehow secretly hoping it will fail so I can say “HA! I gotcha! I found your little flaw!” Not because I actually hope the code will not work, but because a test seems more meaningful to me when it finds something wrong. This is a little different from what others seem to believe, especially the ones Wade said held the opinion that tests are just there to show that everything passes. I feel a test was almost a waste if it did not find something worth fixing.
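
    In that spirit, here is a small sketch of what “trying to break it” can look like. The parse_age function and its rules are made up for the example:

        import unittest

        def parse_age(text):
            """Hypothetical function under test: parse a person's age from text."""
            value = int(text)
            if value < 0 or value > 150:
                raise ValueError("age out of range")
            return value

        class TryToBreakIt(unittest.TestCase):
            def test_hostile_inputs_are_rejected(self):
                # Each input is an attempt to make parse_age misbehave,
                # not a hope of watching another green check mark appear.
                for hostile in ["-1", "151", "twelve", "", "1e2"]:
                    with self.subTest(hostile=hostile):
                        with self.assertRaises(ValueError):
                            parse_age(hostile)

            def test_whitespace_sneaks_through(self):
                # The kind of gotcha this mindset turns up: int() quietly strips
                # whitespace, so " 42 " is accepted. Intended behavior, or a flaw?
                self.assertEqual(parse_age(" 42 "), 42)

        if __name__ == "__main__":
            unittest.main()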

    Part of me is sad when I cannot find anything wrong with code, but that sadness is replaced by a sense of completion: I feel the code is ready for live customers, and I am confident they will not break it (or it will not break them). When they do find something broken, I feel a little guilty that I had not thought of the particular case some customer managed to think of.

    Just my own thoughts. I like discussing things of this nature.

  3. Posted by Tim Valenta on 08.10.10 at 9:05 am

    It looks like there are clearly different stages of involvement, as far as testing teams go. For a product that I race to completion, my tests are haphazard at best, and I have that gut-twisting feeling as we put something live that doesn’t deserve a “production” status. Then there are the long-standing products and services which are simply assigned a team to monitor, like bumper rails on a bowling lane. It is naturally easier for the latter to hammer away, while the former is super prone to what Jeff described.

    There’s a lot to be implied when the developer is the tester 🙂 It is sometimes an entirely different ball game.

    If you’re willing to read a few pages, I’d recommend looking at the Python testing philosophy, which has always been something I’ve tried to follow, even before I knew it existed in writing: http://diveintopython.org/unit_testing/index.html. There are six pages to follow; it gets good about half-way through.

  4. Posted by Tim Valenta on 08.10.10 at 9:05 am

    Hmm… I think I meant to post this link instead:

    http://diveintopython.org/unit_testing/stage_1.html

  5. […] This post was mentioned on Twitter by JeremySloan, Wade Wachs. Wade Wachs said: Did I miss any good reasons for testing? My post on why we test – http://tinyurl.com/3x4j7au […]