Quality Assurance vs. Software Testing

For the vast majority of my time in the Context-Driven community, I have loosely accepted many “truths” as presented. I have pushed back on some, argued with a few, and flat-out rejected others. One of those “truths” is the idea that Quality Assurance is an inferior title to something more appropriate, such as Software Tester, Light Shiner, Information Gatherer, or Bug Master. Recently I have found that my agreement with this idea has loosened. This post is an attempt to come to a better understanding for myself, and hopefully for others in the community.

So not long after my whole team took the RST course with Paul Holland, they decided that the team’s name should be more representative of what we actually do. It was a unanimous decision to change the name from “Quality Assurance” to “R&D Testers”. The new name reflected the fact that we were, first and foremost, members of the Research and Development team, and that the role we filled was testing, as opposed to “assuring quality”.

Great! I left our meeting that day thinking the team really listened to some of what was taught. I thought the process of changing the team’s name would be rather simple: change a couple of e-mail aliases, have a couple of quick conversations with the R&D team leadership, and we’d be done.

So I went to start those quick conversations, and it turned out that they weren’t as quick as I thought. Before I go on, I want it to be clear that the individuals I was talking to are engaged development leaders who really care about what we do, take part in community discussions on agile and kanban topics, and have my respect. This isn’t a “bash the ignorant” type of blog post. With that in mind, I brought this idea to the R&D team leadership and was met with some resistance. On my side of the conversation, I parroted the arguments I have heard from others in the community: “testers don’t do anything to assure quality; we can’t be certain (or sure) of the quality of a product.”

This was not received as well as I thought it would be. I was under the impression that this was a self-evident truth, that others in the industry were simply too ignorant of what testing actually is to understand it, and that all of this “QA” garbage flying around is a relic of manufacturing processes applied to software. Here I was talking to people with whom I share many beliefs about software development, and they disagreed with me. The main thrust of the argument was disagreement with the notion that testers do nothing to assure the quality of a product. In this person’s opinion, on every project and team they had been on, testers were very influential in increasing product quality, and therefore the name QA wasn’t altogether misleading.

“But we don’t ‘ASSURE’ anything, impact perhaps, but not assure,” was my dutiful context-driven retort.

“Assurance doesn’t mean that the product is perfect, but QA people definitely bring a great value in improving quality,” was the response I got.

I was able to walk away from that conversation with a kind of do-whatever-you-want agreement from our team leadership, but I wasn’t satisfied. I went back to my desk to look up the definition of the word ‘assurance’ to prove my point: as testers, we don’t assure anything. In looking up this definition, this is where my agreement with CDT started to get a little looser.

The definitions of ‘assurance’ all pointed back to the root word ‘assure’. Merriam-Webster offered several definitions of ‘assure’. I pulled each one and started detailing why it didn’t apply to what testers do (the outcome of that process can be seen below). Eventually, though, I came to a definition of ‘assure’ that stopped me: “to give confidence to”. For example, “The child was scared to go to the dentist, but her mother’s assuring words gave her the confidence to climb into the chair.”

This reminded me of a conversation I had with James Bach a few years ago. What first pulled me into the CDT community was that they were the only people who seemed to agree with me on how testing is valuable. As James and I were talking, he made the following comment: “I test because my clients are worried that something they don’t know about the product will hurt them.”

To me, that statement seems to agree that testing is done to build confidence in a product. At the end of testing, all wrapped up in appropriate safety language and carefully crafted words, is a report about the level of confidence in a product, or at the very least information that is meant to affect some stakeholder’s confidence in it.

The rest of the definitions of the word assurance I agree are misleading, even a bit scary. But the idea of Quality Assurance being a process of building confidence in a product, or gathering information for others to build that confidence, is one that I think I could get behind.

This isn’t to say that I dislike the term ‘testing’ or anything else that does a decent job of describing what a team does. What I am trying to do here is gain a better understanding of why the community is so opposed to the term “Quality Assurance”. Please let me know in the comments if you agree with how this is presented, or where I am way off.

My next post will be about the cultural impacts in an organization of changing the name of team from QA to Test. That is what this post was supposed to be, but I thought this was a better point to start the conversation.

January 9, 2013 Update

So after letting this post simmer for a few months, I have decided that taking up the fight internally to officially change the name of the team wasn’t worth it. We refer to ourselves as testers. The rest of the development team understands that we are testers, but in terms of support, sales, marketing, etc. I didn’t find there to be any payoff to changing the team name. Heck, I don’t even have the energy/time at this point to write another full post about why I feel that way. That is why I am updating this post rather than writing a new one. I wanted to cover another topic in my next post, but didn’t want to leave this topic unsettled.


Definitions of “Assure” from Merriam-Webster:

– to make safe – The testers don’t actually do anything that makes the code/products/releases safer. We provide information about potential risks, we point out logical flaws that could cause problems, but the developers are the ones that actually fix those.

– to give confidence to –

– to make sure or certain – This gets to that perfection idea. I agree that perfection can’t be reached, so using words that imply it can seems off.

– to inform positively – This one worries me a bit, because that ‘assurance’ is not based on fact, “I assure you that we can do it”. I would rather provide information and facts that allows decision makers and other team members to make informed decisions.

– to make certain the coming or attainment of – see above

It’s good to be the tester! (HTC DROID DNA Review)

Sometimes, it can be good to be the tester.  And by good I mean really good.  By virtue of my love for testing, and HTC smartphones, I got the opportunity to get my hands on a pre-release version of the DROID DNA, the new flagship ultra-awesome 5-inch Android smartphone from HTC.  Woo!

The Changing Face of Test Management

Another week, another podcast.  I have been very lucky to have the opportunity many times to join Matt Heusser, Michael Larsen, and others on the weekly This Week in Software Testing podcast sponsored by Software Test Professionals.  This week was a good one.

If you remember back to my post on writing conference reports, in my report from the KWSQA conference I mentioned that as our team made progress towards more agile (small ‘a’) methodologies, the testers and developers needed to move closer and closer together.  As the testing and development teams have merged, we have gone from two distinct teams to one.  This is great and has had a significant impact on the quality of the software we are producing (as I mentioned in my presentation at CAST 2012 last month); however, it puts me (the QA Manager) and the Dev Manager in an interesting position, as we now have one team with two managers.

Others in the industry are having similar problems, and this week’s podcast is a bit of our conversation along this topic.  Go ahead, give it a listen.

Part 1 – http://www.softwaretestpro.com/Item/5690/TWiST-117-The-Changing-Face-of-QA-Management-Part-I/podcast

Part 2 – http://www.softwaretestpro.com/Item/5700/TWiST-118-The-Changing-Face-of-QA-Management-Part-II/podcast

I taught myself a new word…I’m an autodidact!

For those of you that missed it, Test Coach Camp was a blast.  2 days of non-stop discussion with the best and brightest minds in the space of test coaching, and I got to go!

There were tons of great discussions, exercises, and lessons learned at TCC, but one of my favorite discussions was one that I was able to facilitate on the topic of autodidacticism.  We approached the topic from the angle that the best way to teach testing is to empower people to teach themselves about testing; the question is how you get people to do that.

Luckily, Michael Larsen pulled out his handy dandy portable recording studio and was able to catch the whole conversation and post it out to the interwebs.  Thanks to Software Test Professionals for hosting the recording.  The link is below:

Part 1 – http://www.softwaretestpro.com/Item/5613/TWiST-107-%E2%80%93-Autodidacts-Unite-Part-I/podcast

Part 2 – http://www.softwaretestpro.com/Item/5618/TWiST-108-Autodidacts-Unite-Part-II/podcast

How I Write a Conference Report

A while ago I was able to attend the KWSQA Targeting Quality conference in Waterloo, Ontario.  After a great time learning, connecting with old friends, and meeting new ones, I eventually had to go back to the office.  When I got there I was expected to produce an experience report to justify the trip, as I am sure many of you have had to do in the past.

For the purposes of this post, I would like to share a couple tricks I employed to produce what I would consider a decent experience report.

Focus on Value

In my case, the company covered the bill for the trip and the conference.  Though it wasn’t a huge investment, I wanted to make sure it was a worthy one.  I learned lots of things at the conference, and it is important to include those tidbits of knowledge in the report to show what was learned that could be leveraged for the company.

Focus on Solutions

I have seen quite a few reports from others (I have even been guilty of it in the past) that just rewrite the class descriptions in report form and call it good (i.e., I learned x in class a and y in class b).  This covers my first point a bit, but just listing random facts and topics that you learned about doesn’t show the application of that knowledge.  Based on all of the knowledge you gain, look for ways to apply it to problems currently facing your company.

Implement Solutions/Value

Once you have this knowledge and some way in which to apply it, the next step I would consider in writing a great experience report is to actually implement the ideas in the report.  If the experience report is just some document that gets filed into the nether regions of the company storage banks, where is the value in that?

Allow the lessons learned to extend out of the conference, and off the page of the experience report and actually work to implement what you learned.  I was able to do so with what I learned at KWSQA and doing so made the experience (and the experience report) much more valuable.

Below is the text of my experience report from KWSQA (sanitized a bit for safety reasons) for an example of these suggestions in practice:

Targeting Quality 2012 Conference Attendance Report

-Wade Wachs-

After spending a couple days at the Targeting Quality 2012 conference sponsored by KWSQA, I came back to the office with a few items that I feel would benefit the culture and outcomes of the development and QA teams in our company.  Those items are listed and explained below.

Reduce/Remove any Us vs. Them culture

This is one of the biggest actionable items I came away with from the conference.  This applies in several dimensions that our company is already taking actions to accomplish.

Dev vs. QA

I think we have managed a pretty decent relationship between the development team and the testers in our company, but we have consistently thought of these as two separate teams.  One of the big things that I heard at the conference was the idea of considering the testers as part of the development team.

Paul Carvalho talked about this in terms of the fact that Scrum recognizes only three roles: Product Owner, Scrum Master, and Product Developer.  That is not to say that only those who write code count as developers, but that all members of the team who are not managing or defining the requirements should be working to build a quality product.  I had several conversations with Paul and others suggesting that a cultural shift to include the testing role in the team of developers could have a significant impact by tightening the feedback loop between code creation and testing.

We have already made significant steps in the last couple weeks to work towards a goal of integrating the code writers and testers better.  Conversations are in the works to continue this integration further.

Office 1 vs. Office 2

Selena Delesie made a comment that I really liked, along the lines that having a small team that practices agile in a larger, more waterfall organization is typical, but greater benefits can be realized if the whole organization works together in a more agile manner.  This really hit home for me, as I have felt that Office 1 has been going more and more agile while Office 2 is still struggling with understanding how we do things.  I wrote in my notebook in Selena’s session, “The WHOLE company needs to BE agile, not just development DO agile.”

After conversations with an internal employee last week, I think we are taking some good steps in this direction with the inception of monthly blackout dates and taking the time to all meet together as a company and discuss what we are all doing.  I am cautiously optimistic that these meetings could have a significant positive impact on the quality of the software we are producing as we reduce the feedback loops between those of us producing the software and those teaching how to use the software.

The Software Testing Ice Cream Cone

Paul Carvalho, in his tutorial about pitfalls in agile organizations, talked about the balance of manual and automated testing.  Based on some concepts from Brian Marick (one of the Agile Manifesto signatories) and a couple of others, there needs to be a push to have manual testers doing business-facing testing that critiques the product, spending as little time as possible on base functionality and regression checking.  The testing effort can be drawn as a pyramid with unit tests at the bottom, integration and then functional tests on top of that, and manual exploratory testing depicted as a cloud on top of the pyramid, supported by the three layers below.

However, in many organizations (ours included) the actual testing effort is an inverted pyramid with very little automated unit and integration testing, a little automated functional testing, and lots of cloud-shaped manual testing, which ends up looking like an ice cream cone.  I have already talked with Steve about turning that ice cream cone around by adding some additional effort in unit testing and better-supported automation.  This goal is in the process of being implemented via the talent reviews with QA and developers.
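To make the pyramid’s base layer concrete, here is a minimal sketch of the kind of fast, isolated unit check that should dominate that bottom layer (Python is assumed here, and the function under test is invented purely for illustration, not taken from our actual product):

```python
# Hypothetical example of the pyramid's base layer: a fast, isolated unit
# check.  The function under test, apply_discount, is invented for
# illustration.

def apply_discount(price, percent):
    """Apply a percentage discount to a price, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Checks like these run in milliseconds, so they can run on every build.
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
```

Hundreds of cheap checks like this at the bottom of the pyramid are what free the manual testers to spend their time in the exploratory cloud at the top.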

Effective Metrics

There was a great keynote from Paul Holland where he gave a few techniques on how to effectively provide metrics to management while maintaining integrity of the narrative.  The few concepts that I would like to investigate more and implement are:

– Provide metrics along with narrative to provide the full story behind the metrics.  This narrative can contain any of the potential pitfalls or dangerous conclusions from the metrics or other qualitative information not captured in numbers.

– Use a dashboard to provide a better picture of testing activities.

– Make more effective use of sticky-note boards for managing the testing effort and displaying the work that is being done.

I also was party to a couple side discussions along this topic at the conference.  I hope these conversations will be helpful in moving forward in our goal to identify useful performance measures and provide that information up the management chain.

All in all it was a very enjoyable conference.  The intangibles of the conference were many, but include an increased passion in continuing to push forward, a feeling that the company values me as an employee enough to invest the funds to send me to training, and an increased connection to the testing community to further relationships that will be sustaining in the future.  I truly appreciate the investment and would like to attend further conferences in the future as we get a better handle on this current list of improvements.

Are there any decent testing metrics?

Last week I was asked by my company to define some decent metrics to start tracking for my team.  I have been thinking on this for a while, and the report I ended up with is one that seemed very appropriate for a blog post.

I read a tweet the other day from Janet Gregory:

“I like visible metrics that are aimed at measuring a problem that we are trying to solve. Once the problem is solved, stop.”  (original tweet)

That resonates with me. So what problems are we actually trying to solve by collecting metrics? The way these metrics were framed by $person, we are looking for numbers to give corporate, to keep them from forcing us to gather metrics that may not be applicable in our scenario. $person also mentioned justification of new hires and additional resources.

I appreciate the heads up and the suggestion to get some metrics in place before we are forced to do so.  I don’t believe that 2 or 3 numbers will provide any real insight into our team, especially at the corporate level. I also lack an understanding of what would satisfy their need for data. This doesn’t seem like an actual problem that I know how to solve.

As far as justification for additional resources goes, I understand ‘resources’ to include hiring new people into the team, as well as funding for training and improvement of the current team. Through the page coverage metrics below, we will be able to show the page coverage possible with our current team. These metrics will not show any inefficiencies in time usage, but it seems that with the coming of $newtool we will get a complete suite of metrics that can track that. $testerperson will also be tracking time spent on helpdesk tasks to help facilitate hiring a full-time helpdesk tech.

The problem I chose to address is one that is authentic to myself and my team.  The coverage metrics listed below are ones that I think will assist in managing the testing effort of my team.  As each release date draws nearer, these coverage metrics will provide insight into areas of the application that may need further testing. As mentioned below, coverage metrics are fallible, and this is only one tool to aid in testing, and showing the story of our testing to others in the company.

Metrics for QA

Hot fixes required to fix critical bugs in production

This is the most important metric for tracking the efficacy of QA as far as I am concerned. Hot fixes are pushed to resolve critical issues clients face. This metric should be as close to 0 as possible for any given period of time. While we have not been tracking this explicitly, we have noticed a decrease in this recently. I think a brief description of each of these incidents should also be recorded.

Time Spent on Support Tasks

One of the problems we are facing right now is the lack of a full time support employee. $testerperson and Wade (the two main QA members that work with customers) will track their time spent on support related tasks to assist in hiring a full time employee.

Page Coverage in Automation

This week we will be adding a tool to $app that will allow us to track all page views by user. We will then be able to compare this against current count of pages in the app and come up with a decent metric for % coverage. This will only be a coverage metric for how many pages are viewed, it is not branch coverage, code coverage, logic coverage, or any other sort of coverage that someone may think of, but this will give us some insight into what portion of the app is being tested by automation.
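As a rough sketch of the arithmetic behind this metric (a hypothetical Python illustration; the page names and visit data are invented, and in practice they would come from the tracking tool’s logs):

```python
# Hypothetical sketch of the page-coverage calculation described above.
# Page names and visit logs are invented for illustration.

def page_coverage(pages_visited, all_pages):
    """Percentage of known app pages that were visited at least once."""
    visited = set(pages_visited) & set(all_pages)  # ignore unknown/stale URLs
    return 100.0 * len(visited) / len(all_pages)

all_pages = ["login", "dashboard", "reports", "settings", "admin"]
automation_hits = ["login", "dashboard", "login", "reports"]  # tool's logs
print(page_coverage(automation_hits, all_pages))  # 60.0
```

Note this says nothing about what was done on each page; like any coverage number, it only tells us which portions of the app the automation touched at all.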

Page Coverage in Manual Testing

With the same tracking tool that will allow us to track automation’s progress, we will also be able to track manual coverage. This will be useful for tracking coverage at each release cycle.

These are the metrics I plan to track for my team in the coming months.  As I mentioned above, the page coverage metric is fallible, but I think in my context we will be able to extract some useful information from it.  I don’t necessarily want this metric to reach 100% coverage, as there are pages that currently exist in our app that are not actively used by clients, or even available for use by them.

Once I get a feel for how these metrics start to illuminate our process, the metrics or our processes might change.  Until then, this is what we will use.

Where does innovation come from?

I have been involved in a rather interesting discussion this week with an individual I will refer to as Mr. Higherup, he’s one of the higher-ups here at the company (I refer to him in this way only because I haven’t asked his permission to use his name in this blog post).  There was a plea for more innovation to come from our company, and that sparked a debate between the two of us on ways to structure the company in such a way to allow for innovation to occur.

As I reflect on this topic, many of the arguments we have made can easily be applied to testing.  Exploratory Testing, to me, is a very innovative process.  Allowing your last step to influence the next requires a steady stream of ideas about where the next step should be.  Below is my initial response to the request for the company to be more innovative:

Mr. Higherup,

I see a few cultural obstacles that need to be broken down for ideas to really flow out of this company.

The software we write exists to solve problems. In the case of [the division of the company in which I work], Mr. So-and-so and his companions noticed a problem in the justice sector, and they came up with an idea for a solution to solve the problem. This idea came from people that were very intimately connected to the problem that needed to be solved. I imagine that many of the ideas that this company currently monetizes have roots from ideas of people connected to different industries.

The first obstacle I see is that we don’t know what problems exist out there. A large portion of [this division], and by abstraction [the company as a whole], is made up of technical people that create and implement these ideas, but have no connection to the industries or problems we are solving. The members of the company that do have that connection to the industry in general don’t understand how the technology could solve the problems they are facing as well as the technical side of the company.

Steven Johnson talks about the coffee shop being the idea center of the Enlightenment because of the exchange of information and ideas that happen in that space leading to new and exciting ideas. I believe that the ideas you seek will breed in an environment where an understanding of existing problems mix with an understanding of technology solutions. The challenge is to create that environment.

I don’t believe that replacing [our e-mail system] or implementing corporate social networking will create that environment.

The next obstacle in creating these ideas is having time available to devote to creating them. We don’t necessarily need to go the direction of Google’s 20% time or Atlassian’s model of quarterly idea days, but we need time to be able to look up from the day-to-day noise and talk about the problems that we could be solving. This isn’t going to happen passively, or by writing a couple hundred words on a blog. We all have competing demands on our time, and if ideas are important to the company, then time needs to be dedicated to creating them.

I will offer one brief anecdote on this topic. One of our developers at [this division] just got back from a conference where he had the ability to sit and talk with people in the industry about some of the problems they are facing. He came back today with an idea that could be the killer app in the justice sector. He had time to mix his ideas with others with very positive results.

I will close with one last aspect that I see. For people to want to invest their ideas into the company, many people want to feel like the company is invested in them. [In this environment this is especially important].  Missteps in this area can lead to significant morale problems that will stomp out any desire to invest in a company that is not invested in its employees.

The take away message, it’s one thing to ask for ideas, it’s another to create an environment where ideas can be created. Hopefully this feedback is useful.


The main points I made in this initial response are that employees have basic needs to be innovative, namely an understanding of a problem, time and resources to focus on that problem, and a desire to allocate time and resources to solve that problem.  I believe these same needs apply to good software testing (for some definition of good).

Understanding the Problem

I had the opportunity last month to spend some time in Toronto at the Toronto Workshop on Software Testing (TWST).  At TWST we had some great conversations about identifying stakeholders and our ethical responsibility to them as software testing professionals.  We started a mindmap on a flipchart to identify all of the potential stakeholders of a project and very quickly filled an entire page with probably 50+ individuals and groups, and ideas were still flowing from the group as we moved on.  Each of these stakeholders represented another person our projects can impact, and each represents a potentially different problem set that we face as we develop and test software.  The trick is to identify the key stakeholders in each specific project and understand what problems they have that need to be solved; you can then use that knowledge to guide your testing on that project.  New projects have new stakeholders and new problems that should be guiding testing.

Requirements and other documentation can be used heuristically to develop a rudimentary understanding of a problem while testing.  I often find that brief conversations with customers, managers, executives, marketing, support, and other important stakeholders can shed a lot of light on a project.

Most of my posts on this blog have covered this topic, and even though I have learned a lot on the topic since those initial posts, I will not cover it in much more detail in this post.  However, there is one important stakeholder that I have never before mentioned on this blog, that is yourself.  For me, it is critical to understand how specific projects and tasks fit into and affect my goals.  Earlier this year I packed my family up and moved across the country to start a new position in part because I had a firm understanding of what my goals were and the projects I was working on at the time did not fit into the goals I had set for myself.  I believe that we as individuals are the most important stakeholders in any project and we need to understand that.

Time and Resources to Focus on the Problem

Within certain frames and contexts, especially in business and software projects, time and resources are limited.  In these contexts we have to make decisions on where we allocate the resources and time that are available.  Often these are allocated based on our understanding of the problem.  In testing, resources are allocated to the portions of the software where someone understands there to be the most significant risks to solving the problem we are facing.

The problem with this method of allocation is that it is only as good as our understanding of the actual problem.  With a perfect understanding of the problem, we can attack the most important items in order until the resources are spent, at which point we call the project ‘done’ and move on to the next.  This perfect understanding is only possible in the most trivial of circumstances, however.  In the more real-world and complex cases, our understanding is often flawed.  Going back to the map we identified at TWST, understanding the full problem set of those 50+ stakeholders is incredibly complex.  Given that situation, we can either choose to dedicate resources to developing a better understanding of the problem (which will likely leave fewer resources and an understanding that is still flawed), or find a way to allocate resources that compensates for an imperfect understanding.

In my team, we use requirements and a brief regression checklist to drive most of our testing each sprint.  These are the things we have identified as the areas of the software that present the highest risks to reaching the goals of the company.  However, I try to make sure that we have at least a few hours, if not a full day, at the end of each sprint, and a day or two at the beginning, to focus on areas that the testers feel need some attention.  I rely on the intuition and expertise of the individual testers to define what is tested during this time.  This time has helped us identify and track down quite a few important bugs that would not have been found by only focusing on our flawed understanding of what certain stakeholders care about.

Desire to Allocate Resources

Understanding a problem and having resources available to solve it don’t matter in the slightest if you don’t care enough to actually do anything about it.  In my letter to Mr. Higherup, I mentioned the responsibility of the company to manage itself in a way that promotes the employees’ desire to help the company grow.  This same dynamic exists in testing.

This is a really interesting point that I haven’t really heard much about in this context, but I feel is incredibly important.  I have read quite a bit lately on motivation and I think that it has such a large effect on our products that it should be talked about far more than it is.

In the TWiST podcast that just came out today I was part of a conversation with Matt Heusser, Michael Larsen, and Jon Bach on the topic of motivation.  In this conversation Jon gave an interesting bit of insight into how relationships affect his motivation to allocate his currently limited resources.  He says, “When I look at my todo list, it’s not a matter of, ‘oh look at all of this stuff to do’.  I think about the people that I’m serving.  In fact that often helps me to remember my todo list!  To just think about the people that I work with and go, ‘Oh. OK.  Probably isn’t the highest priority, but it is most motivating to me right now to get back to Thomas about something I told him I would get back to him about.’  I still have another 2 weeks but right now I feel like ‘hey he helped me out today.  I owe him, I want to do that now.’  That motivates me.  That’s part of respect and reputation I suppose, but it’s really a sense of being a part of what I think is cool.”

Right before that Jon defined cool as inspiring.  What this means to me is that he is, and by abstraction many of us are (including myself), inspired to do work based on relationships.  How does this relate to testing?  Several ways.

I often see the world from a Test Manager bias, because I function as a Test Manager.  The way that I see this connecting to testing is that I have a responsibility to create a relationship with my team.  Cem Kaner has stated, “Managers get work done through other people.”  There are many different styles of management, but the way that I choose to manage my team is by creating a relationship of mutual respect and leveraging that respect to get the job done.  In the time since we recorded TWiST, I have seen vast improvements in the team of testers I work with because I finally took a more proactive approach to investing in the relationships I have with my team.  I now understand them much better, they understand me, and we have a stronger relationship of mutual respect.

This respect has led to some great improvements on my team.  I see more initiative.  Based on the very positive response from customers and management over the last few months, our testing has improved.  I have more trust in my team to get a job done when asked.  My team is doing a great job at the work I am asking them to do, and I believe that is a direct benefit of building relationships, and therefore the desire to accomplish the work we are here to do.

I don’t yet know what the outcome of my conversation with Mr. Higherup will be.  His mind could be opened and significant changes could come, or he may not even respond again.  I do know that in this process I made some great connections and understand my world a little better.  Now that you ahve read this I hope you understand your world better as well.

Exploratory Testing

James Bach states, “The plainest definition of exploratory testing is test design and test execution at the same time.”  I had a very interesting conversation this week where an individual made the following comment:


Paradox – A statement or proposition that, despite sound (or apparently sound) reasoning from acceptable premises, leads to a conclusion that seems senseless, logically unacceptable, or self-contradictory