The Shifting Schema

I mentioned in my first post that there would be more to come about how my perceptions of “real” testing have changed.  This is that promised post.

As I said in “Serious Schema Shift”, my uneducated guess was that formal testing included a plethora of documentation, a heavily formalized structure for tests and testers, and more planning than actual testing.  Looking back on this perception, I believe this schema was rooted in my natural tendency toward heavy amounts of documentation and process.  I enjoy defining processes, sorting things into groups, and bringing order to chaos, and so I often try to impose formal structure on the situations I find myself in.

As a young kid, one of my favorite pastimes was going to my grandma’s house to help her sort through the possessions she had amassed over the years.  We would spend days defining and implementing organization systems to cut the clutter in her house, and I loved it.  Later, in high school, my friends and I spent many weekends creating new versions of board games such as Risk and Monopoly, complete with loads of new rules that needed to be defined.  Even to this day, I am such a stickler for rules and documentation that my wife has affectionately dubbed me the ‘Rule Nazi’ when playing games with friends.  All of that is to say that I often see the world in terms of rules and regulations, and that perspective defined a good deal of what I expected of the testing industry.

One of the mantras I have heard repeated in my current company is that of “just enough process and documentation.”  The theory is that the time we would spend defining every detail of every product and process would be better spent actually implementing and monetizing our products.  I have struggled against this mentality over the last couple of years, but the mistakes that slipped through due to the lack of planning have cost far less than the time it would have taken to meticulously plan all of the projects that launched without a problem.

Due to this climate, our QA department has been run in a similar way.  We have documented and defined as little as possible, but as much as necessary.  While this approach worked well for the company as a whole, the QA team struggled with it.  I was in one meeting where I was told that my department was the most broken in the entire company.  Something needed to change, and I thought the answer was more formalized testing procedures that would force the QA team to push forward and be better.

To this end, I made my way to the StarWest conference hoping to find the tools and processes I would need to fix my seemingly broken department.  As I planned my class schedule for the conference, I looked for speakers who would tell me how to develop a successful test plan and how to define strategies that would pull my team up to the next level.  On the first day of the conference I wanted to attend Janet Gregory’s tutorial ‘Planning Your Agile Testing: A Practical Guide’, but by registration time her tutorial was sold out, so I settled for my second pick, James Bach’s ‘Critical Thinking for Testers’.

Before I came along, my team had been hired for their critical thinking skills, the driving principle being that testing is more of a mindset than a skill set.  Given that history, I was anxious to see what James had to say about using these critical thinking skills in testing.  The tutorial lasted all day, and I left with a few nuggets of information, but I hadn’t really heard any incredibly new concepts that felt applicable to my team.  I enjoyed listening to James, and the day was entertaining, but I didn’t feel any closer to having the test plans I was looking for.

The next day I spent in a full-day tutorial titled “Successful Test Automation”.  I was introduced to exactly what I had come to learn, but it felt all wrong.  The presentation was all about documentation, metrics, formulas, and red tape in relation to test automation.  This was exactly the stuff I was looking for, but it was all nonsense.  The metrics were measuring flawed information, the formulas were using the bad metrics, and the whole crowd of testers was listening and agreeing with the presenter.  I couldn’t believe it!  I brought up my concerns about the flaws I saw and was immediately shot down, not only by the presenter, but by several of the attendees in the class.

I remembered hearing James Bach say the day before that he no longer attended other presenters’ classes because they often couldn’t handle the comments he would make.  I felt the same way: I was in the class trying to learn, but when I brought up my concerns I felt shunned by the entire class.  Many of the ideas and principles James had taught suddenly seemed so much more alive and real than they had the day before.  His concepts had mostly seemed like common sense at first, but I could now see that the sense he taught was not as common as I had once thought.

At the lunch break I brought up some of my concerns about the flaws in what was being taught.  The one individual at the table who was attending the same tutorial disagreed with every point I raised.  It seemed to me that the tutorial was teaching testing as a means to documentation, as opposed to documentation as a means of assisting testing.  The conversation at the lunch table didn’t give me much hope that I would find anyone who really understood what I was trying to say.

In the interest of keeping this post a readable length, I will save more about this session for another post.  The outcome of my experience was that I realized my testing team was doing a pretty good job.  Yes, we have a lot of room for improvement.  However, the answer does not lie in endless piles of documented test cases and paperwork.  What we need is to cut through much of the mindless regression checking we are currently doing so my team can step out and use the critical-thinking minds we hired them for to do some intelligent testing.  James Bach has some good processes for developing intelligent testers and giving them the tools they need to be effective.  We are implementing some of these tools, and I am already getting positive feedback from the management above me.  I am glad I was able to adjust my model of what good testing is.  Now I feel like my team can move forward.

Serious Schema Shift

Two weeks ago, if you had asked me what I thought the biggest need of my testing team was, I would have responded as I imagine many other people in my position would.  With the young and inexperienced (in the world of software testing) team that I manage, I would have told you that we needed to be trained in QA best practices, with more documentation and automation.  Ultimately, I would have talked circles around defining the red tape I thought we needed to be a “real” QA team.

So, to find out how other companies defined their red tape, I convinced Bluehost to send me to the StarWest conference.  I showed up prepared to have my head packed full of facts and techniques to wrap my team in just enough red tape that we looked like a “real” QA department.  Testing processes, automation tools, documentation, pre-packaged one-size-fits-all solutions: that was exactly what I was looking for.  I knew I would have to think about how these tools would fit our organization, but I also knew someone out there had to be selling snake oil, and I was ready to take it.

Now, let me explain a bit about my current team.  While they are young and relatively inexperienced in the ways of software testing, they are all highly skilled.  In our organization we have chosen employees with strong analytic and critical thinking skills for our small department, our thinking being that it takes a critical eye to identify problems in software.  Most of the team were already active bug reporters while performing their other duties in the company.  We are a young team, but this is not a team built of random people off the street.  We are a skilled team that is ready to “take it to the next level”.

So, I went looking for this red tape, and guess what I found?  Exactly that!  I spent an entire day learning about the metrics and tools for measuring ROI and how to make bloated statements that grossly inflated the value of useless automated tests.  I heard all about the need to have more documented tests running more often, so that our ROI would look higher and we could disprove our incompetence to incompetent management (because no manager worth their salt would actually believe any of the stuff I was hearing).

Now, at this point in the conference I was quite concerned.  It seemed that the object of my desire was not what I had hoped it would be.  I had been hoping for genuinely useful information about how to set up an intelligent test plan and use it to increase the value of my team, not just how to make statements that make us appear more valuable.  I raised my concerns about some of these metrics with the class and was immediately shut down, not just by the presenter, but by my peers in the class.  Maybe this really is what these people think testing is about.  Maybe I came to the wrong conference.

I have more to say about this specific class, but that will have to come later.  For this post I would like to explore the decision I had to make at this point in the conference.  I could either let this useless information smother my brain in a fog of numbers and useless tests, or I could stand up for myself and challenge the information that was being fed to me.  Though you may not know it, the very fact that you are reading this post is evidence that I chose the latter.

My schema for “real” software testing was one of rigid, measurable, repeatable test cases that were rigorously documented and covered in red tape.  I had yet to experience anything good or bad about that approach, so my schema persisted into this discussion at StarWest.  Here I got some additional data to try to sync with my schema, and I realized this was not what I wanted to implement with my team.  My schema of “real” software testing was changing.

Ultimately, I realized that our approach of hiring intelligent critical thinkers and thinking carefully about each test we run is a good start along the road to developing a Context-Driven Testing team.  I did learn that we have some significant steps left to take along that road, but my schema for “real” testing now has plenty of room for the type of thinking we were already doing.

(There will be more about these schema changes in later posts.)

I hope you will join me as I share the story of how our team matures, and that you will be able to learn from our successes and failures.  I also want to hear from everyone who thinks I am going about this all wrong.  Through your comments and discussion we can all push ourselves to be better testers.