Quality Assurance vs. Software Testing

For the vast majority of my time in the Context-Driven community, I have loosely accepted many “truths” as presented. I have pushed back on some, argued with a few, and flat out rejected others. One of those “truths” is the idea that Quality Assurance is an inferior title to something more appropriate, such as Software Testing, Light Shiner, Information Gatherer, or Bug Master. Recently I have found that my agreement with this idea is looser than I thought. This post is an attempt to come to a better understanding for myself, and hopefully for others in the community.

So not long after my whole team took the RST course with Paul Holland, they decided that the name of our team should be more representative of what we actually do. It was a unanimous decision to change the name from “Quality Assurance” to “R&D Testers”. This reflected the fact that we were, first and foremost, members of the Research and Development team, and that the role we filled was testing, as opposed to “Assuring Quality”.

Great! I left our meeting that day thinking the team had really listened to some of what was taught. I thought the process of changing the name of the team would be rather simple: change a couple of e-mail aliases, have a couple of quick conversations with the R&D team leadership, and we’d be done.

So I went to start those quick conversations, and it turned out that they weren’t as quick as I thought. Before I go on, I want it to be clear that the individuals I was talking to are engaged development leaders who really care about what we do, engage in community discussions on agile and kanban topics, and genuinely have my respect. This isn’t a “bash the ignorant” type of blog post. With that framing, I brought this idea to the R&D team leadership and was met with some resistance. On my side of the conversation, I parroted the arguments I have heard from others in the community: “testers don’t do anything to assure quality; we can’t be certain (or sure) of the quality of a product.”

This was not received as well as I thought it would be. I was under the impression that this was a self-evident truth, that others in the industry were simply too ignorant of what testing actually is to understand it, and that all of the “QA” garbage that flies around is a relic of manufacturing processes applied to software. Here I was talking to people with whom I share many beliefs about software development, and they disagreed with me. The main thrust of the argument was disagreement with the notion that testers do nothing to assure the quality of a product. In this person’s opinion, on every project and team they had been on, testers were very influential in increasing product quality, and therefore the name QA wasn’t altogether misleading.

“But we don’t ‘ASSURE’ anything, impact perhaps, but not assure,” was my dutiful context-driven retort.

“Assurance doesn’t mean that the product is perfect, but QA people definitely bring a great value in improving quality,” was the response I got.

I was able to walk away from that conversation with a kind of do-whatever-you-want agreement from our team leadership, but I wasn’t satisfied. I went back to my desk to look up the definition of the word ‘assurance’ to prove my point: as testers, we don’t assure anything. It was in looking up this definition that my agreement with CDT started to get a little looser.

The definitions of ‘assurance’ all pointed back to the root word ‘assure’. Merriam-Webster offered 4 definitions of ‘assure’. I pulled each one and started detailing why it didn’t apply to what testers do (the outcome of that process can be seen here). Eventually I came to a definition of assure that stopped me, though: “to give confidence to”. For example, “The child was scared to go to the dentist, but her mother’s assuring words gave her the confidence to climb into the chair.”

This reminded me of a conversation I had with James Bach a few years ago. What first pulled me into the CDT community was that its members were the only people who seemed to agree with me about how testing is valuable. As James and I were talking he made the following comment: “I test because my clients are worried that something they don’t know about the product will hurt them.”

To me, that statement seems to agree that testing is done to build confidence in a product. At the end of testing, wrapped up in appropriate safety language and carefully crafted words, is a report about the level of confidence in a product, or at the very least information that is meant to affect some stakeholder’s confidence in the product.

I agree that the rest of the definitions of the word assurance are misleading, even a bit scary. But the idea of Quality Assurance as a process of building confidence in a product, or of gathering information for others to build that confidence, is one that I think I could get behind.

This isn’t to say that I dislike the term ‘testing’ or anything else that does a decent job of describing what a team does. What I am trying to do here is gain a better understanding of why the community is so opposed to the term “Quality Assurance”. Please let me know in the comments if you agree with how this is presented, or where I am way off.

My next post will be about the cultural impact within an organization of changing the name of a team from QA to Test. That is what this post was supposed to be, but I thought this was a better point to start the conversation.

January 9, 2013 Update

So after letting this post simmer for a few months, I have decided that taking up the fight internally to officially change the name of the team wasn’t worth it. We refer to ourselves as testers, and the rest of the development team understands that we are testers, but in terms of support, sales, marketing, etc., I didn’t find any payoff in changing the team name. Heck, I don’t even have the energy or time at this point to write another full post about why I feel that way, which is why I am updating this post rather than writing a new one. I wanted to cover another topic in my next post, but didn’t want to leave this one unsettled.

It’s good to be the tester! (HTC DROID DNA Review)

Sometimes, it can be good to be the tester.  And by good I mean really good.  By virtue of my love for testing, and HTC smartphones, I got the opportunity to get my hands on a pre-release version of the DROID DNA, the new flagship ultra-awesome 5-inch Android smartphone from HTC.  Woo!

The Changing Face of Test Management

Another week, another podcast. I have been very lucky to have the opportunity, many times, to join Matt Heusser, Michael Larsen, and others on the weekly This Week in Software Testing podcast sponsored by Software Test Professionals. This week was a good one.

If you remember back to my post on writing conference reports, in my report from the KWSQA conference I mentioned that as our team made progress towards more agile (small ‘a’) methodologies, the testers and developers needed to move closer and closer together. As the testing and development teams have merged, we have gone from two distinct teams to one. This is great and has had a significant impact on the quality of the software we are producing (as I mentioned in my presentation at CAST 2012 last month); however, it puts me (the QA Manager) and the Dev Manager in an interesting position, as we now have one team with two managers.

Others in the industry are having similar problems, and this week’s podcast is a bit of our conversation on this topic. Go ahead, give it a listen.

Part 1 – http://www.softwaretestpro.com/Item/5690/TWiST-117-The-Changing-Face-of-QA-Management-Part-I/podcast

Part 2 – http://www.softwaretestpro.com/Item/5700/TWiST-118-The-Changing-Face-of-QA-Management-Part-II/podcast

I taught myself a new word…I’m an autodidact!

For those of you that missed it, Test Coach Camp was a blast.  2 days of non-stop discussion with the best and brightest minds in the space of test coaching, and I got to go!

There were tons of great discussions, exercises, and lessons learned at TCC, but one of my favorites was a discussion I was able to facilitate on the topic of autodidacticism. We approached the topic from the angle that the best way to teach testing is to empower people to teach themselves about testing, but how do you get people to do that?

Luckily, Michael Larsen pulled out his handy dandy portable recording studio and was able to catch the whole conversation and post it out to the interwebs. Thanks to Software Test Professionals for hosting the recording. The links are below:

Part 1 – http://www.softwaretestpro.com/Item/5613/TWiST-107-%E2%80%93-Autodidacts-Unite-Part-I/podcast

Part 2 – http://www.softwaretestpro.com/Item/5618/TWiST-108-Autodidacts-Unite-Part-II/podcast

How I Write a Conference Report

A while ago I was able to attend the KWSQA Targeting Quality conference in Waterloo, Ontario. After a great time learning, connecting with old friends, and meeting new ones, I eventually had to go back to the office. When I got there I was expected to produce an experience report to justify the trip, as I am sure many of you have had to do in the past.

For the purposes of this post, I would like to share a couple of tricks I employed to produce what I would consider a decent experience report.

Focus on Value

In my case, the company covered the bill for the trip and the conference. Though it wasn’t a huge investment, I wanted to make sure it was a worthy one. I learned lots of things at the conference, and it is important to include those tidbits of knowledge in the report to show what was learned that could be leveraged for the company.

Focus on Solutions

I have seen quite a few reports from others (I have even been guilty of it in the past) that just rewrite the class descriptions in report form and call it good (i.e., I learned x in class a and y in class b). This covers my first point a bit, but just listing random facts and topics that you learned about doesn’t show the application of that knowledge. Based on all of the knowledge you gain, look for ways to apply it to problems currently facing your company.

Implement Solutions/Value

Once you have this knowledge and some way in which to apply it, the next step I would consider in writing a great experience report is to actually implement the ideas in the report.  If the experience report is just some document that gets filed into the nether regions of the company storage banks, where is the value in that?

Allow the lessons learned to extend beyond the conference and off the page of the experience report, and actually work to implement what you learned. I was able to do so with what I learned at KWSQA, and doing so made the experience (and the experience report) much more valuable.

Below is the text of my experience report from KWSQA (sanitized a bit for safety reasons) as an example of these suggestions in practice:

Targeting Quality 2012 Conference Attendance Report

-Wade Wachs-

After spending a couple days at the Targeting Quality 2012 conference sponsored by KWSQA, I came back to the office with a few items that I feel would benefit the culture and outcomes of the development and QA teams in our company.  Those items are listed and explained below.

Reduce/Remove any Us vs. Them culture

This is one of the biggest actionable items I came away with from the conference. It applies in several dimensions that our company is already taking action to address.

Dev vs. QA

I think we have managed a pretty decent relationship between the development team and the testers in our company, but we have consistently thought of these as two separate teams.  One of the big things that I heard at the conference was the idea of considering the testers as part of the development team.

Paul Carvalho talked about this in terms of the fact that Scrum recognizes only three roles: Product Owner, Scrum Master, and Product Developer. That is not to say that only those who write code count as developers, but rather that all members of the team who are not managing or defining the requirements should be working to build a quality product. I had several conversations with Paul and others suggesting that a cultural shift to include the testing role in the team of developers could have a significant impact by tightening the feedback loop between code creation and testing.

We have already made significant steps in the last couple weeks to work towards a goal of integrating the code writers and testers better.  Conversations are in the works to continue this integration further.

Office 1 vs. Office 2

Selena Delsie made a comment that I really liked, along the lines that having a small team practicing agile within a larger, more waterfall organization is typical, but that greater benefits can be realized if the whole organization works together in a more agile manner. This really hit home for me, as I have felt that Office 1 has been going more and more agile while Office 2 is still struggling to understand how we do things. I wrote in my notebook in Selena’s session, “The WHOLE company needs to BE agile, not just development DO agile.”

After conversations with an internal employee last week, I think we are taking some good steps in this direction with the inception of monthly blackout dates and taking the time to all meet together as a company and discuss what we are all doing. I am cautiously optimistic that these meetings could have a significant positive impact on the quality of the software we are producing as we shorten the feedback loops between those of us producing the software and those teaching how to use it.

The Software Testing Ice Cream Cone

Paul Carvalho, in his tutorial about pitfalls in agile organizations, talked about the balance of manual testing and automated testing. Based on concepts from Brian Marick (one of the Agile Manifesto signatories) and a couple of others, there needs to be a push to have manual testers doing business-facing testing that critiques the product, and spending as little time as possible on base functionality and regression checking. The testing effort can be drawn as a pyramid with unit tests at the bottom, integration and then functional tests on top of that, and manual exploratory testing depicted as a cloud on top of the pyramid, supported by the bottom three layers.

However, in many organizations (ours included) the actual testing effort is an inverted pyramid, with very little automated unit and integration testing, a little automated functional testing, and lots of cloud-shaped manual testing, which ends up looking like an ice cream cone. I have already talked with Steve about turning that ice cream cone around by adding additional effort in unit testing and better-supported automation. This goal is in the process of being implemented via the talent reviews with QA and developers.

Effective Metrics

There was a great keynote from Paul Holland where he gave a few techniques for effectively providing metrics to management while maintaining the integrity of the narrative. The concepts I would like to investigate more and implement are:

– Provide metrics along with narrative to provide the full story behind the metrics.  This narrative can contain any of the potential pitfalls or dangerous conclusions from the metrics or other qualitative information not captured in numbers.

– Use a dashboard to provide a better picture of testing activities.

– Make more effective use of sticky-note boards for managing the testing effort and displaying the work being done.

I was also party to a couple of side discussions on this topic at the conference. I hope these conversations will be helpful as we move forward with our goal to identify useful performance measures and provide that information up the management chain.

All in all it was a very enjoyable conference. The intangibles were many, but they include an increased passion for continuing to push forward, a feeling that the company values me as an employee enough to invest the funds to send me to training, and an increased connection to the testing community through relationships that will sustain me in the future. I truly appreciate the investment and would like to attend further conferences in the future as we get a better handle on this current list of improvements.

Are there any decent testing metrics?

Last week I was asked by my company to define some decent metrics to start tracking for my team.  I have been thinking on this for a while, and the report I ended up with is one that seemed very appropriate for a blog post.

I read a tweet the other day from Janet Gregory:

“I like visible metrics that are aimed at measuring a problem that we are trying to solve. Once the problem is solved, stop.”  (original tweet)

That resonates with me. So I question: what problems are we actually trying to solve by collecting metrics? The way these metrics were framed by $person, we are looking for numbers to give corporate in order to keep them from forcing us to gather metrics that may not be applicable in our scenario. $person also mentioned justification of new hires and additional resources.

I appreciate the heads up and the suggestion to get some metrics in place before we are forced to do so. I don’t believe that 2 or 3 numbers will provide any real insight into our team, especially at the corporate level. I also lack an understanding of what would satisfy their need for data. This doesn’t seem like an actual problem that I know how to solve.

As far as justification for additional resources, I understand ‘resources’ to include hiring new people into the team as well as funding for training and improvement of the current team. Through the page coverage metrics below, we will be able to show the page coverage possible with our current team. These metrics will not show any inefficiencies in time usage, but it seems that with the coming of $newtool we will get a complete suite of metrics that can track that. $testerperson will also be tracking time spent on helpdesk tasks to help facilitate hiring a full-time helpdesk tech.

The problem I chose to address is one that is authentic to me and my team. The coverage metrics listed below are ones that I think will assist in managing my team’s testing effort. As each release date draws nearer, these coverage metrics will provide insight into areas of the application that may need further testing. As mentioned below, coverage metrics are fallible; this is only one tool to aid in testing and in telling the story of our testing to others in the company.

Metrics for QA

Hot fixes required to fix critical bugs in production

This is the most important metric for tracking the efficacy of QA as far as I am concerned. Hot fixes are pushed to resolve critical issues clients face. This metric should be as close to 0 as possible for any given period of time. While we have not been tracking this explicitly, we have noticed a recent decrease. I think a brief description of each of these incidents should also be recorded.
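
To show how lightweight this tracking could stay, here is a minimal sketch in Python. We have no tooling for this yet, so the dates, descriptions, and structure are purely illustrative; it just tallies hot fixes per month and keeps the brief incident note alongside each count:

```python
from collections import defaultdict
from datetime import date

# Hypothetical sketch: the dates and descriptions below are made up for illustration.
hot_fixes = [
    # (date the hot fix shipped, short description of the critical issue)
    (date(2012, 10, 3), "Login failures for a subset of accounts"),
    (date(2012, 11, 19), "Report export producing corrupted files"),
]

# Tally hot fixes per month, keeping the brief incident notes with the counts.
per_month = defaultdict(list)
for shipped, description in hot_fixes:
    per_month[(shipped.year, shipped.month)].append(description)

for (year, month), incidents in sorted(per_month.items()):
    print(f"{year}-{month:02d}: {len(incidents)} hot fix(es)")
    for description in incidents:
        print(f"  - {description}")
```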

Time Spent on Support Tasks

One of the problems we are facing right now is the lack of a full-time support employee. $testerperson and Wade (the two main QA members who work with customers) will track their time spent on support-related tasks to assist in hiring a full-time employee.

Page Coverage in Automation

This week we will be adding a tool to $app that will allow us to track all page views by user. We will then be able to compare this against the current count of pages in the app and come up with a decent metric for % coverage. This is only a coverage metric for how many pages are viewed; it is not branch coverage, code coverage, logic coverage, or any other sort of coverage someone may think of, but it will give us some insight into what portion of the app is being tested by automation.
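
For what it’s worth, once the page-view log and the page inventory exist, the percentage itself is a trivial calculation. Here is a minimal sketch in Python; the function name and page paths are hypothetical and are not part of $app or $newtool:

```python
# Hypothetical sketch: percentage of known pages viewed at least once.
# The page inventory and the viewed-pages set are illustrative placeholders.
def page_coverage(pages_viewed: set, all_pages: set) -> float:
    """Return the percentage of pages in the inventory that were viewed."""
    if not all_pages:
        return 0.0
    covered = pages_viewed & all_pages  # ignore views of pages not in the inventory
    return 100.0 * len(covered) / len(all_pages)

all_pages = {"/login", "/dashboard", "/reports", "/admin/users", "/settings"}
automation_views = {"/login", "/dashboard", "/reports"}
print(f"Automation page coverage: {page_coverage(automation_views, all_pages):.1f}%")
```

The same calculation would apply to the manual coverage numbers below; only the source of the viewed-page set changes.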

Page Coverage in Manual Testing

With the same tracking tool that will allow us to track automation’s progress, we will also be able to track manual coverage. This will be useful for tracking coverage at each release cycle.

These are the metrics I plan to track for my team in the coming months. As I mentioned above, the page coverage metric is fallible, but I think in my context we will be able to get some useful information from it. I don’t necessarily want this metric to reach 100% coverage, as there are pages in our app that are not actively used by clients, or even available for their use.

Once I get a feel for how these metrics start to illuminate our process, the metrics or our processes might change. Until then, this is what we will use.