Paradoxes as a testing exercise?

I had a friend ask an interesting question.  I wanted to share my initial response to his question as an exercise for anyone else who wants to take the same challenge.  I see several flaws in my logic that could be cleaned up, but I like the rawness of my initial thought process.  Please share your own responses in the comments.  I would love to hear some feedback.

This question leads me to some very interesting thoughts on how we can use dilemmas like this to teach new testers to ask questions and dig a bit deeper.  I like this idea; I will have to use it.

My conversation with a friend:

Friend: q for you…. what would happen if Pinocchio said “My nose will grow now”?

wadewachs: it would not grow
that is different from saying “my nose will grow right this second”
there is also the possibility that at the time he is saying it he is a real boy
in which case it wouldn’t matter whether or not he is telling a lie
so we have to assume the puppet Pinocchio as opposed to the real boy Pinocchio


wadewachs: you then also have to define the scenario in which his nose grows
from the story, we understand it to be when he tells a lie
but what is a lie?

Friend: i guess that depends on intent

wadewachs: is it any mis-truth, or is it only when there is intent to mis-guide
so, if you look at his intentions, when he makes the comment ‘my nose will grow’ is he planning on lying in the future
if so, he believes it will grow, and could be argued that is not a lie
however, if he intends to tell the truth for the remainder of the time that he is an enchanted puppet, and he understands that lying is what causes his nose to grow, and he understands lying requires intent to do so, then he creates a bit of a paradox for himself, by lying about his intention not to lie
in which case the nose would grow because of the intent not to lie
did that answer your question?
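The intent-based definition of lying that emerges from the conversation can be sketched as a toy predicate.  This is just an illustration; the function name and the two-condition model of a lie are my own framing, not anything formal:

```python
# Toy model of the Pinocchio dilemma, using the intent-based definition
# of a lie from the conversation above: a lie requires both a falsehood
# and the intent to mislead.

def nose_grows(statement_is_false: bool, intends_to_mislead: bool) -> bool:
    """The nose grows only when Pinocchio tells a lie: a false
    statement made with intent to mislead."""
    return statement_is_false and intends_to_mislead

# Case 1: he sincerely believes his nose will grow (no intent to mislead).
# Not a lie, so the nose stays put.
print(nose_grows(statement_is_false=True, intends_to_mislead=False))  # False

# Case 2: he knows the nose will not grow and says it anyway.
# That is a lie, so the nose grows, making the statement true after all.
print(nose_grows(statement_is_false=True, intends_to_mislead=True))   # True
```

The paradox lives in case 2: the act of lying makes the claim come true, which retroactively means no falsehood was told.  The model can hold either definition, but not both at once, which is exactly why the dilemma makes a good exercise in pinning down assumptions.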

Stakeholders wanting to see the green? (checkmarks, that is)

Michael Bolton made an interesting post a month ago titled Gaming the Tests, where he explores a situation in which we are asked to provide incomplete or inaccurate information.  I would suggest reading the scenario he creates, since this post discusses a possible approach to handling that situation.

Jason Strobush commented via Twitter about my previous post creating a situation similar to what Michael talks about in his post:

@WadeWachs Ah, but what if it is MANAGEMENT that likes to see the pretty, green, meaningless checkmarks?

Fast-forward two weeks to this morning, when I was making my way through my daily RSS feeds and came across the following quote:

Trying to be a first-rate reporter on the average American newspaper is like trying to play Bach’s ‘St. Matthew’s Passion’ on a ukulele.

That quote is known as Bagdikian’s Observation.  Ben Bagdikian is a professor of investigative journalism, author, former editor, and expert in his field.  Reading a bit about Bagdikian, I have been thinking that the role of investigative reporter is very similar to that of a tester.  An investigative reporter digs into society to find the defects that will cause harm to the general public.  A definition from Hugo de Burgh (via Wikipedia) that I particularly like says, “An investigative journalist is a man or woman whose profession it is to discover the truth and to identify lapses from it in whatever media may be available.”

Is that not the same thing testers do?  We find the differences between the way software is expected to work and the way it actually works.  Those differences are merely ‘lapses in truth’ that need to be identified, which are then reported through our available media, typically a bug report.  Investigative reporter … bug report … tester = reporter … QED.

I digress.  The point I want to make here is: what do you do if your stakeholders are asking for bad information?  What if all they want to see is a page full of meaningless green checkmarks without any real testing going on?

A bad tester would simply produce whatever information management wants, regardless of accuracy.  Test results (if run at all) would be falsified to make management happy.

A mediocre tester would likely run through as many test cases as possible, perhaps even intelligently picking features that are known to be working, to give management the information they are looking for, help their team look better, and keep the project moving forward.  (OK, “mediocre” is a relative term.)

A better tester would know which tests are more important than others, make informed decisions about which areas of the software are likely to be broken, test those early in the testing time, and provide feedback to the devs so the problems can be fixed while still producing the magical green checkmarks that management so desperately wants.
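One sketch of how that “better tester” ordering might work: score each area by how likely it is to be broken and how much a failure there would hurt, then run the riskiest tests first.  The feature names and numbers below are made up for illustration:

```python
# Hypothetical risk-based test ordering: test the areas most likely to
# harbor important problems first, so developers get useful feedback early.

tests = [
    {"name": "checkout flow",   "failure_likelihood": 0.7, "impact": 9},
    {"name": "login",           "failure_likelihood": 0.2, "impact": 8},
    {"name": "help page links", "failure_likelihood": 0.4, "impact": 2},
]

# A simple risk score: chance of failure times the cost of that failure.
for t in tests:
    t["risk"] = t["failure_likelihood"] * t["impact"]

# Run the highest-risk tests first.
ordered = sorted(tests, key=lambda t: t["risk"], reverse=True)
print([t["name"] for t in ordered])
```

The scoring scheme is the simplest possible choice; the point is only that the ordering is an informed decision rather than an arbitrary march through a test-case list.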

Great testers, however, know better than all of that.  If we want to be first-rate testers and improve our craft, we need to look for a higher method of dealing with issues like this.  There are lots of aspects that go into being a great tester, and I won’t go into all of them right now, but for the purposes of this post I will define great testing as identifying the truth about a piece of software, and reporting that truth accurately.

This is where Bagdikian’s Observation applies to us.  We can’t exist as first-rate testers in situations where great testing is not expected or possible.  The most intelligent tester will never have the ability to shine when only allowed to produce meaningless information for management that doesn’t care.  Bagdikian, in an interview with PBS about the similar compromising positions journalists get stuck in, made this comment: “I know a lot of journalists, I’ve taught them for a while … what happens to some of the best people … is that when things like that happen, they in effect say I don’t want to be in this business anymore, and they leave.”  Leaving the field is not the only option, but how many great testers are we losing to bad situations?

I would like to explore two other options for what great testers can do in situations like this.  The first is to change management’s perception of what testers do.  The way this happens is through open, honest communication.  Don’t be afraid of management.  Don’t beat around the bush.  Don’t tell management one thing and then do another.  Work with them to define a reasonable set of expectations for what testing can do, then (if you come to a consensus) do it.  If you have to patch some bad promises made in the past (be it by yourself or someone else), then start now, move forward, and make progress in the right direction.  I don’t care how powerless you feel or where you fit in the corporate structure: if you want to be a great tester, then create an environment where you can do so.

In my current company, I started in the call center, the bottom of the company.  After a couple of open and honest conversations with our CEO, and a year of working my tail off, I was sitting in weekly meetings with department heads defining the direction of the company.  I often felt like a fish out of water; I was a grunt worker coming up out of the trenches to talk company specifics with the people who ran it.  I held my own, however, voiced my opinions, gained the confidence of those around me, and within a few months I too became a department head.  I now manage our QA department.  There is more to the story, but a lot of it comes down to not being afraid to talk to management and being able to have that open and honest communication with them.  You can create change for the better.

Now, I don’t know the political climate of every organization out there.  I’m sure there are some people who get stuck in situations that they truly can’t change.  In these situations you have the option to settle at one of the levels mentioned previously, or you can go find a place where you can be truly great.  That may mean leaving your current company and finding a place where you can grow and find your own greatness.  James Bach throws around a couple of numbers related to this topic: 90% of the testing positions out there may be suited for mediocre testers, places where potential is stifled and there is no room for greatness.  That still leaves 10% of all testing positions where great testers can truly move forward, better the craft, and better themselves.  James is happy working in that 10%, and I am confident that there is plenty of room for more great testers in that job market.

If you want to be great, then don’t settle for a mediocre position.  Push yourself, build your name and reputation, and refuse to compromise your integrity.  I don’t know the whole situation around Ben Simo’s recent employment situation, but from what I have read on Twitter, it sounds like he was a great tester stuck in a mediocre position.  In a tweet a couple weeks ago Ben commented that the decision to leave his previous employer was one of the best he has ever made.

Now in case my boss is reading this, I am very happy in my current position and I know I have plenty of room to reach towards greatness.  But what about your current position?

Bagdikian’s Observation

Trying to be a first-rate reporter on the average American newspaper is like trying to play Bach’s ‘St. Matthew’s Passion’ on a ukulele.

In my post this week I talk about testers and investigative reporters having very similar responsibilities.  If a tester is stuck in an average or mediocre company, it will be very difficult for him or her to push forward and become a great tester.

My high school band director put this in another way:

Once you lick the lollipop of mediocrity, you will suck for the rest of your life.

I have ordered Ben Bagdikian’s book The New Media Monopoly to see if he has any other insights on the flow of information that can be applied to testing. I’ll let everyone know if I find anything.

Context-Driven Testing

Context-Driven Testing (CDT) is one of the concepts that I am drawing on heavily in pulling my team forward.  To me, CDT simply means thinking about everything you are testing and the way you are testing it, and making sure that it is pertinent to your current situation.  James Bach and Cem Kaner have put this much more eloquently in their writing on the subject, and you can read their full definition on their site.  The seven basic principles they claim make up this approach are listed below.

The Seven Basic Principles of the Context-Driven School

1.    The value of any practice depends on its context.

2.    There are good practices in context, but there are no best practices.

3.    People, working together, are the most important part of any project’s context.

4.    Projects unfold over time in ways that are often not predictable.

5.    The product is a solution. If the problem isn’t solved, the product doesn’t work.

6.    Good software testing is a challenging intellectual process.

7.    Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

The Shifting Schema

I mentioned in my first post that there would be more to come about how my perceptions of “real” testing have changed.  This is that promised post.

As I said in “Serious Schema Shift”, my uneducated guess at what formal testing was included a plethora of documentation, a heavily formalized structure for tests and testers, and more planning than actual testing.  Looking back on this perception, I believe this schema was rooted in my natural tendency toward heavy amounts of documentation and process.  I enjoy defining processes, sorting things into groups, and bringing order to chaos, and therefore often try to create formal structure in situations.

As a young kid, one of my favorite pastimes was to go to my grandma’s house and help her sort through the possessions she had amassed over the years.  We would spend days defining and implementing organization systems to cut the clutter in her house, and I loved it.  Later, in high school, my friends and I spent many weekends creating new versions of board games such as Risk and Monopoly, complete with loads of new rules that needed to be defined.  Even to this day, I am such a stickler for rules and documentation that my wife has affectionately dubbed me the ‘Rule Nazi’ when playing games with friends.  All of that to make the point that I often see the world in terms of rules and regulations, and that perspective defined a good deal of what I expected of the testing industry.

One of the mantras I have heard repeated in my current company is “just enough process and documentation.”  The theory is that the time we would spend defining every detail of every product and process would be a waste of money when we could actually be implementing and monetizing our products.  I have struggled against this mentality over the last couple of years, but the mistakes that have slipped through because of the lack of planning have been much cheaper than the amount of time it would have taken to meticulously plan all of the projects that launched without problems.

Due to this climate, our QA department has been run in a similar way.  We have documented and defined as little as possible, but as much as necessary.  While this worked well for the company as a whole, the QA team was struggling to succeed.  I was in one meeting where I was told that my department was the most broken in the entire company.  Something needed to change, and I thought the answer was more formalized testing procedures that forced the QA team to push forward and be better.

To this end, I made my way to the StarWest conference hoping to find the tools and processes I would need to fix my seemingly broken department.  As I planned my class schedule for the conference I looked for the speakers that would tell me about how to develop a successful test plan and how to define strategies that would pull my team up to the next level.  The first day of the conference I wanted to attend Janet Gregory’s tutorial ‘Planning Your Agile Testing: A Practical Guide’, but at registration her tutorial was sold out, so I settled for my second pick, James Bach’s ‘Critical Thinking for Testers’.

Before I came along, my team had been built by hiring individuals with strong critical thinking skills, the driving principle being that testing is more of a mindset than a skill set.  With this being the history of my team, I was anxious to see what James had to say about using these critical thinking skills in testing.  The tutorial lasted all day, and I left with a few nuggets of information, but I hadn’t really heard any incredibly new concepts that felt applicable to my team.  I enjoyed listening to James, and the day was entertaining, but I didn’t feel any closer to having the test plans I was looking for.

The next day I spent in a full-day tutorial titled “Successful Test Automation”.  I was introduced to exactly what I had come to learn, but it felt all wrong.  The presentation was all about documentation, metrics, formulas, and red tape in relation to test automation.  This was exactly the stuff I was looking for, but it was all nonsense.  The metrics were measuring flawed information, the formulas were using the bad metrics, and the whole crowd of testers was listening and agreeing with the presenter.  I couldn’t believe it!  I brought up my concerns about the flaws I saw and was immediately shot down, not only by the presenter but by several of the attendees in the class.

I remembered hearing James Bach say the day before that he no longer attended other presenters’ classes because they often couldn’t handle the comments he would make.  I felt the same way: I was in the class trying to learn, but when I brought up my concerns I felt shunned by the entire class.  Many of the ideas and principles James had taught suddenly seemed much more alive and real than they had the day before.  His concepts had mostly seemed like common sense at first, but I could now see that the sense he taught was not as common as I had once thought.

At the lunch break I brought up some of my concerns about the flaws I was seeing in what was being taught.  The one individual at the table who was attending the same tutorial disagreed with every point I brought up.  It seemed to me that the tutorial was teaching testing as a means to documentation, as opposed to documentation as a means of assisting testing.  The conversation at the lunch table didn’t give me much hope that I would find anyone who really understood what I was trying to say.

In the interest of keeping this post a readable length, I will break more about this session into another post at a later time.  The outcome of my experience was that I realized my testing team was doing a pretty good job.  Yes, we have a lot of room for improvement.  However, the answer does not lie in endless piles of documented test cases and paperwork.  What we need is to cut through a lot of the mindless regression checking we are currently doing so my team can step out and use the critical-thinking minds we hired them for to do some intelligent testing.  James Bach has some good processes for developing intelligent testers and giving them the tools they need to be effective.  We are implementing some of these tools, and I am already getting positive feedback from the management above me.  I am glad I was able to adjust my model of what good testing is.  Now I feel like my team can move forward.

Why do we test?

Why do we test?

First, let me clarify this question.  I am not asking why any individual started a career in the testing industry.  I am not looking for why anyone works; obviously we all need to feed our families.  I am looking for the root of what it is about testing that causes management to give us a paycheck at any regular interval.  What value does testing add to the companies we work with that has caused our industry to come into existence?  What problem does testing solve?

You would be amazed at the reactions this basic question generates.  I have impressed managers, developers, stakeholders, and testing experts with this simple question.  I can’t explain why it is impressive; this should be a basic question we ask and understand before starting any project.  The point of this post, however, is not to explore the origin of the question.  I want to share what I have heard people say in response to it.

This question has been ever present on my mind for some time now.  While at StarWest I was able to ask a broad spectrum of testers this very question.  I have broken their responses into four basic groups that seem to have emerged.

To run tests so we can watch them pass.
As laughable as this is, I have heard from several people who fall into this group.  Julian Harty made the comment that he has observed sections of the industry using testing as a scapegoat.  If the product fails in any way, management then has the testers to blame for not finding all the problems.  Julian described this as a cheap insurance policy: just blame the testers.  I personally have seen this in one local company.  Every time a significant product defect leaks to production, a member of their QA team is fired and life goes on.  This is obviously a terrible reason to exist as a tester.

Testing as an end in itself
I also heard a lot of testers describe testing as the goal.  These are the testers that love to have tons of useless automation and thousands upon thousands of rigorously documented test cases.  I was told by several people that the goal of testing is simply to run as many tests as we possibly can as often as possible.

When I tried to dig for more information as to why these tests needed to be run there was little useful information that could be provided.  “We test to make sure all the tests pass” is the mantra of this philosophy.  It seems that these testers like nothing more than to see their screens covered in little green check marks as all of their tests pass with flying colors without ever thinking about why the tests are actually running.

To build confidence in our product.
This is probably the most idealistic thought that I have heard thus far.  This group defines testing as a process that produces accurate information and confidence about the product that management can use to make well informed decisions.  With a high level of confidence in the product, management has an easier decision to know when to ship.  If there is a low level of confidence in the product, there may be cause to delay release in order to improve confidence that the product acts as advertised.

While I definitely see value in knowing what a product will do, all that really accomplishes is to help everyone involved sleep a little better at night.  I don’t see this generating enough value to justify the costs of testing, but this is definitely one of the tools we use to reach our goal.

To prevent harm to our brand and customer.
James Bach told me, “I test because my clients are worried that something they don’t know about the product will hurt them.”

I think this gives us a definition that can be applied across the board to find the reasons we test.  Our job as testers is to identify the potential sources of harm and the victims thereof.  Different products and companies define harm differently.  The harm that can be caused by a solitaire program on your personal computer is significantly different from the harm that can be caused by the software running the dialysis machine your daughter or grandmother is attached to.  Some examples of harm include:

  • time wasted on repeating work
  • damaged reputation
  • harm caused to a customer relying on the product to perform
  • real physical harm to customer
  • potential lawsuits caused by defects
  • loss of revenue/sales

I personally work in the private sector at a for-profit company.  The goal of our company is to make money, and harm is defined as anything that gets in the way of that goal.  My team exists to protect the company from losing money due to bugs in our software.  In other contexts money may not matter.  In ours it does, a lot.

My question for you: do you know why your testing department exists in your organization?  If you don’t, then please find out.  If you do, then make sure you are leveraging that knowledge to make your testing team better.  If you have any reasons not described above, please leave a comment and let me know.  I would love to learn about other portions of the testing industry I have yet to experience.

Serious Schema Shift

Two weeks ago, if you had asked me what the biggest need of my testing team was, I would have responded as I imagine many other people in my position would.  With the young and (in the world of software testing) inexperienced team that I manage, I would have told you that we needed to be trained in QA best practices, with more documentation and automation.  Ultimately I would have talked circles around defining the red tape I thought we needed to be a “real” QA team.

So, to find out how other companies defined their red tape, I convinced Bluehost to send me to the StarWest conference.  I showed up prepared to have my head packed full of facts and techniques to wrap my team in just enough red tape that we looked like a “real” QA department.  Testing processes, automation tools, documentation, and pre-packaged, one-size-fits-all solutions were exactly what I was looking for.  I knew I was going to have to think about how these tools would fit our organization, but I knew someone out there had to have some snake oil, and I was ready to take it.

Now, let me explain a bit about my current team.  While they are young and relatively inexperienced in the ways of software testing, they are all highly skilled.  In our organization we have chosen employees with strong analytic and critical thinking skills for our small department, our thought being that it takes a critical eye to identify problems in software.  Most of the team were already active bug reporters while performing their other duties in the company.  We are a young team, but this is not a team built of random people off the street.  We are a skilled team that is ready to “take it to the next level”.

So, I went looking for this red tape, and guess what I found?  Exactly that!  I spent an entire day learning about the metrics and tools for measuring ROI and how to make bloated statements that grossly inflate the value of useless automated tests.  I heard all about the need to have more documented tests that run more often, so our ROI is higher, so we can disprove our incompetence to incompetent management (because no manager worth their salt would actually believe any of the stuff I was hearing).

Now, at this point in the conference I was quite concerned.  It seemed that the object of my desire was not what I had hoped it would be.  I was hoping for real, useful information about how to set up an intelligent test plan and use it to increase the value of my team, not just how to make statements that help us appear more valuable.  I raised my concern about some of these metrics with the class and was immediately shut down, not just by the presenter but by my peers in the class.  Maybe this really is what these people think testing is about.  Maybe I came to the wrong conference.

I have more to say about this specific class, but that will have to come later.  For this post I would like to explore the decision I had to make at this point in the conference.  I could either let this useless information smother my brain in a fog of numbers and useless tests, or I could stand up for myself and challenge the information being fed to me.  Though you may not know it, the very fact that you are reading this post is evidence that I chose the latter.

My schema for “real” software testing was that of rigid, measurable, repeatable, test cases that were rigorously documented and covered in red tape.  I had yet to experience anything good or bad about that, so my schema persisted into this discussion at StarWest.  Here I got some additional data to attempt to sync with my schema, and I realized this was not what I wanted to implement with my team.  My schema of “real” software testing was changing.

Ultimately, I realized that our approach of hiring intelligent, critical thinkers and cognitively thinking about each test we run is a good start along the road to developing a Context-Driven Testing team.  I did learn that we have some significant steps in progressing along that road, but my schema for “real” testing now has plenty of room for the type of thinking we were already doing.

(There will be more about these schema changes in later posts.)

I hope you will join me as I share the story of how our team matures.  I hope you will be able to learn from our successes and failures.  I also want to hear from everyone that thinks I am going about this all wrong.  Through your comments and discussion we can all push ourselves to be better testers.


Schema (plural schemata or schemas) is the term psychology uses to describe the categories we create that help us understand the world around us.  These are the models we use to describe the world.

One can have a schema that describes the category of dogs as four-legged furry animals.  One could have a schema that describes unicorns as mythical creatures that fart rainbows and prance across meadows of clouds and cotton candy.  Our schemata are defined by our past experiences and cognitions, and are used to interpret the experiences and cognitions we will have in the future.