You Don’t Have Test Cases, Think Again

NOTICE:
We at the QABlog are always looking to share ideas and information from as many angles of the Testing Community as we can.

Even if at times we do not subscribe to all the points and interpretations, we believe there is value in listening to all sides of the dialogue and letting it help us think about where we stand on the different issues being discussed in our community.

We invite anyone who wants to offer their own opinion on this or any other topic to get in touch with us with their ideas and articles. As long as we believe an article is written in good faith and provides valid arguments for its views, we will be happy to publish it and share it with the world.

Let communication make us smarter, and let productive arguments seed the ideas that will grow the next generation of open-minded testing professionals!


*The following is a guest post by Robin F. Goldsmith, JD, Go Pro Management, Inc. The opinions stated in this post are his own. 


 

You Don’t Have Test Cases, Think Again

Recently I’ve been aware of folks from the Exploratory Testing community claiming they don’t have test cases. I’m not sure whether it’s ignorance or arrogance, or yet another example of their trying to gain acceptance of alternative facts that help aggrandize them. Regardless, a number of the supposed gurus’ followers have drunk the Kool-Aid, mindlessly mouthing this and other phrases as if they’d been delivered by a burning bush.

What Could They Be Thinking?

The notion of not having test cases seems to stem from two mistaken presumptions:

1. A test case must be written.
2. The writing must be in a certain format, specifically a script with a set of steps and lots of keystroke-level procedural detail describing how to execute each step.

Exploratory Testing originated with the argument that the more time one spends writing test cases, the less of one’s limited test time is left for actually executing tests. That much is true. The conclusions Exploratory claims flow from it are not so true, because they rest on the false presumptions that the only alternative to Exploratory is such mind-numbingly tedious scripts and that Exploratory is the only alternative to such excessive busywork.

Exploratory’s solution is to go to the opposite extreme and not write down any test plans, designs, or cases to guide execution, thereby enabling the tester to spend all available time executing tests. Eliminating paperwork is understandably appealing to testers, who generally find executing tests more interesting and fun than documenting them, especially when extensive documentation seems to provide little actual value.

Since Exploratory tends not to write down anything prior to execution, and especially not such laborious test scripts, one can understand why many Exploratory testers probably sincerely believe they don’t have test cases. Moreover, somehow along the way, Exploratory gurus have managed to get many folks even beyond their immediate followers to buy into their claim that Exploratory tests also are better tests.

But, In Fact…

If you execute a test, you are executing, and therefore have, a test case, regardless of whether it is written and irrespective of its format. As my top-tip-of-the-year “What Is a Test Case?” article explains, at its essence a test case consists of inputs and/or conditions and expected results.

Inputs include data and actions. Conditions already exist and thus technically are not inputs, although some implicitly lump them with inputs; and often simulating/creating necessary conditions can be the most challenging part of executing a test case.
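To make that concrete, here is a minimal sketch (in Python, with hypothetical names) of the essentials every executed test reduces to, whether or not anyone writes them down:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class TestCase:
    """The essence of a test case: inputs and/or conditions plus expected results."""
    inputs: dict[str, Any]                                     # data and actions supplied
    conditions: dict[str, Any] = field(default_factory=dict)   # pre-existing state to set up
    expected: Any = None                                       # determined before execution

def run(case: TestCase, product: Callable[..., Any]) -> bool:
    """Execute the product against the case and judge pass/fail."""
    # Simulating/creating the conditions is often the hardest part;
    # this sketch simply assumes they have already been established.
    actual = product(**case.inputs)
    return actual == case.expected

# Even an unwritten, ad-hoc exploratory probe has this shape in the tester's head:
probe = TestCase(inputs={"amount": -5}, expected="rejected")
```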

Exploratory folks often claim they don’t have expected results; but of course they’re being disingenuous. Expected results are essential to delivering value from testing, since expected results provide the basis for the test’s determination of whether the actual results indicate that the product under test works appropriately.

Effective testing defines expected results independently of and preferably prior to obtaining actual results. Folks fool themselves when they attempt to figure out after-the-fact whether an actual result is correct—in other words, whether it’s what should have been expected. Seeing the actual result without an expected result to compare it to reduces test effectiveness by biasing one to believe the expected result must be whatever the actual result was.
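In code terms, the discipline looks something like this (a sketch; compute_total is a hypothetical stand-in for the product under test):

```python
def compute_total(amount: float) -> float:
    """Stand-in for the product under test (hypothetical)."""
    return amount * 1.07

# Disciplined: the expected result is derived from the spec before the product runs.
expected = 100 * 1.07            # spec: a $100 order carries 7% tax
actual = compute_total(100)
assert actual == expected

# Self-deceiving: run first, then adopt whatever comes back as "expected".
actual = compute_total(100)
expected = actual                # "that looks about right" -- bias, not a comparison
```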

Exploratory gurus have further muddied the expected results Kool-Aid by trying to appropriate the long-standing term “testing,” claiming a false distinction whereby non-Exploratory folks engage in a lesser activity dubbed “checking.” According to this con, checking has expected results that can be compared mechanically to actual results. In contrast, relying on the Exploratory tester’s brilliance to guess expected results after-the-fact is supposedly a virtue that differentiates Exploratory as superior and true “testing.”

Better Tests?

Most tests’ actual and expected results can be compared precisely—what Exploratory calls “checking.” Despite Exploratory’s wishes, that doesn’t make the test any less of a test. Sometimes, though, comparison does involve judgment to weigh various forms of uncertainty. That makes it a harder test but not necessarily a better test. In fact, it will be a poorer test if the tester’s attitudes actually interfere with reliably determining whether actual results are what should have been expected.
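For example, a numeric result may be compared exactly or within a stated tolerance; the latter requires judgment, but both are still tests (a sketch, assuming a hypothetical measure() function):

```python
import math

def measure() -> float:
    """Stand-in for obtaining the actual result (hypothetical)."""
    return 98.2

expected = 98.6                  # determined before execution

actual = measure()

# Precise comparison -- what Exploratory dismisses as mere "checking":
exact_pass = actual == expected                                # False

# Judgment-laden comparison -- the tolerance (1%) is itself decided
# before seeing the actual result, not rationalized afterward:
tolerant_pass = math.isclose(actual, expected, rel_tol=0.01)   # True
```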

I fully recognize that Exploratory tests often find issues that traditional tests, especially heavily procedurally scripted ones, miss. That means Exploratory, like any different technique, is likely to reveal some issues other techniques miss. Likewise, well-designed non-Exploratory tests may detect issues that Exploratory misses. What can’t be told from this single data point is whether Exploratory tests in fact are testing the most important things, how much of importance they’re missing, how much value actually is in the different issues Exploratory does detect, and how much better the non-Exploratory tests could have been. Above all, it does not necessarily mean Exploratory tests are better than any others.

In fact, one can argue Exploratory tests actually are inherently poorer because they are reactive. That is, in my experience Exploratory testing focuses almost entirely on executing programs, largely reacting to the program to see how it works and trying out things suggested by the operating program’s context. That means Exploratory tests come at the end, after the program has been developed, when detected defects are hardest and most expensive to fix.

Moreover, reacting to what has been built easily misses issues of what should have been built. That’s especially important because about two-thirds of errors are in the design, which Exploratory’s testing at the end cannot help detect in time to prevent them from producing defects in the code. It’s certainly possible an Exploratory tester does get involved earlier. However, since the essence of Exploratory is dynamic execution, I think one would be hard-pressed to call static review of requirements and designs “Exploratory.” Nor would Exploratory testers seem to do it differently from other folks.

Furthermore, some Exploratory gurus assiduously disdain requirements; so they’re very unlikely to get involved with intermediate development deliverables prior to executable code. On the other hand, I do focus on up-front deliverables. In fact, one of the biggest-name Exploratory gurus once disrupted my “21 Ways to Test Requirements Adequacy” seminar by ranting about how bad requirements-based testing is. Clearly he didn’t understand the context.

Testing’s creativity, challenge, and value are in identifying an appropriate set of test cases that together must be demonstrated to give confidence something works. Part of that identification involves selecting suitable inputs and/or conditions, part of it involves correctly determining expected results, and part of it involves figuring out and then doing what is necessary to effectively and efficiently execute the tests.

Effective testers write things so they don’t forget and so they can share, reuse, and continually improve their tests based on additional information, including from using Exploratory tests as a supplementary rather than sole technique.

My Proactive Testing™ methodology economically enlists these and other powerful special ways to more reliably identify truly better important tests that conventional and Exploratory testing commonly overlook. Moreover, Proactive Testing™ can prevent many issues, especially large showstoppers that Exploratory can’t address well, by detecting them in the design so they don’t occur in the code. And, Proactive Testing™ captures content in low-overhead written formats that facilitate remembering, review, refinement, and reuse.


About the Author

Robin F. Goldsmith, JD helps organizations get the right results right. President of Go Pro Management, Inc., a Needham, MA consultancy he co-founded in 1982, he works directly with and trains professionals in requirements, software acquisition, project management, process improvement, metrics, ROI, quality, and testing.

Previously he was a developer, systems programmer/DBA/QA, and project leader with the City of Cleveland, leading financial institutions, and a “Big 4” consulting firm.

Author of the Proactive Testing™ risk-based methodology for delivering better software quicker and cheaper, numerous articles, the Artech House book Discovering REAL Business Requirements for Software Project Success, and the forthcoming book Cut Creep—Put Business Back in Business Analysis to Discover REAL Business Requirements for Agile, ATDD, and Other Project Success, he is also a frequent featured speaker at leading professional conferences. He was formerly International Vice President of the Association for Systems Management and Executive Editor of the Journal of Systems Management. He was Founding Chairman of the New England Center for Organizational Effectiveness. He belongs to the Boston SPIN and served on the SEPG’95 Planning and Program Committees. He is past President and current Vice President of the Software Quality Group of New England (SQGNE).

Mr. Goldsmith chaired the attendance-record-setting BOSCON 2000 and 2001, ASQ Boston Section’s Annual Quality Conferences, and was a member of the working groups for the IEEE Software Test Documentation Std. 829-2008 and IEEE Std. 730-2014 Software Quality Assurance revisions, the latter of which was influenced by his Proactive Software Quality Assurance (SQA)™ methodology. He is a member of the Advisory Boards for the International Institute for Software Testing (IIST) and the International Institute for Software Process (IISP). He is a requirements and testing subject expert for TechTarget’s SearchSoftwareQuality.com and an International Institute of Business Analysis (IIBA) Business Analysis Body of Knowledge (BABOK v2) reviewer and subject expert.

He holds the following degrees: Kenyon College, A.B. with Honors in Psychology; Pennsylvania State University, M.S. in Psychology; Suffolk University, J.D.; Boston University, LL.M. in Tax Law. Mr. Goldsmith is a member of the Massachusetts Bar and licensed to practice law in Massachusetts.

www.gopromanagement.com
robin@gopromanagement.com

 
