Your Test Cases Are Slowing You Down

One of my first QA jobs was at a company that made software for creating mobile applications.  It was a very complex application, with so many features that it was often hard to keep track of them all.  Shortly before I started working there, the company had adopted a test tracking system to keep track of all the possible manual tests the team might want to run.  This amounted to thousands of test cases.

Many of the test cases weren’t written well, leaving those of us who were new to the team confused about how to execute them.  The solution to this problem was to assign everyone the task of revising the tests as they were run.  This helped a bit, but slowed us down tremendously.  Adding to the slowdown was the fact that every time we had a software release, our manager had to comb through all the tests and decide which ones should be run.  Then there was the added confusion of deciding which mobile devices should be used for each test.

We were trying to transition to an Agile development style, but the number of test cases and the amount of overhead needed to select, run, and update the tests meant that we just couldn’t adapt to the pace of Agile testing.

You might be thinking at this point, “Why didn’t they automate their testing?”  Keep in mind that this was back when mobile test automation was in its infancy.  One of our team members had developed a prototype for an automated test framework, but we didn’t have the resources to implement it because we were so busy trying to keep up with our gigantic manual test case library.

Even when you have a robust set of automated tests in place, you’ll still want to do some manual testing.  Having a pair of eyes and hands on an application is a great way to discover odd behavior that you’ll want to investigate further.  But trying to maintain a vast library of manual tests is so time-consuming that you may find that you don’t have time to do anything else!

In my opinion, the easiest and most efficient way to keep a record of which manual tests should be executed is to use simple spreadsheets.  If I were to go back in time to that mobile app company, I would toss out the test case management system and set up some spreadsheets.  I would have one smoke-test spreadsheet, and one regression-test spreadsheet for each major feature of the application.  Each time a new feature was added, I’d create a test plan on a spreadsheet, and once the feature was released, I’d either add a few test cases to a regression-test spreadsheet (if the feature was minor), or I’d adapt my test plan into a new regression-test spreadsheet for that feature.

This is probably a bit hard to imagine, so I’ll illustrate with an example.  Let’s say we have a mobile application called OrganizeIt!  Its major features are a To-Do List and a Calendar.  Currently the smoke test for the app looks like this:

| Test | iOS phone | iOS tablet | Android phone | Android tablet |
|------|-----------|------------|---------------|----------------|
| Log in with incorrect credentials | | | | |
| Log in with correct credentials | | | | |
| Add an event | | | | |
| Edit an event | | | | |
| Delete an event | | | | |
| Add a To-Do item | | | | |
| Edit a To-Do item | | | | |
| Complete a To-Do item | | | | |
| Mark a complete item as incomplete | | | | |
| Delete a To-Do item | | | | |
| Log out | | | | |
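If you’d rather not build a sheet like this by hand, it’s easy to generate.  Here’s a minimal sketch in Python that writes the smoke-test matrix above as a CSV file, with an empty cell per device for the tester to fill in (the filename is illustrative):

```python
import csv

# Device columns and test rows, taken from the smoke test above.
DEVICES = ["iOS phone", "iOS tablet", "Android phone", "Android tablet"]
SMOKE_TESTS = [
    "Log in with incorrect credentials",
    "Log in with correct credentials",
    "Add an event",
    "Edit an event",
    "Delete an event",
    "Add a To-Do item",
    "Edit a To-Do item",
    "Complete a To-Do item",
    "Mark a complete item as incomplete",
    "Delete a To-Do item",
    "Log out",
]

def write_smoke_sheet(path):
    """Write an empty smoke-test matrix: one row per test, one column per device."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Test"] + DEVICES)
        for test in SMOKE_TESTS:
            writer.writerow([test] + [""] * len(DEVICES))

write_smoke_sheet("smoke_test.csv")
```

The resulting CSV opens directly in Google Sheets or Excel, ready for check marks.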

And then we also have regression tests for the major features: Login, Calendar, and To-Do List.  Here’s an example of what the regression test for the To-Do List might look like:

| Test | Expected result |
|------|-----------------|
| Add an item to the list with too many characters | Error message |
| Add an item to the list with invalid characters | Error message |
| Add a blank item to the list | Error message |
| Add an item to the list with a correct number of valid characters | Item is added |
| Close and reopen the application | Item still exists |
| Edit the item with too many characters | Error message, and original item still exists |
| Edit the item with invalid characters | Error message, and original item still exists |
| Edit the item so it is blank | Error message, and original item still exists |
| Mark an item as completed | Item appears checked off |
| Close and reopen the application | Item still appears checked off |
| Mark a completed item as completed again | No change |
| Mark a completed item as incomplete | Item appears unchecked |
| Mark an incomplete item as incomplete again | No change |
| Close and reopen the application | Item still appears unchecked |
| Delete the item | Item disappears |
| Close and reopen the application | Item is still gone |

This test would also be run on a variety of devices, but I’ve left that off the chart to make it more readable in this post.

Now let’s imagine that our developers have created a new feature for the To-Do List: items on the list can now be marked as Important, and Important items will move to the top of the list.  In the interest of simplicity, let’s not worry about the order of the items other than the fact that the Important items will be at the top of the list.  We’ll want to create a test plan for that feature, and it might look like this:

| Test | Expected result |
|------|-----------------|
| Item at the top of the list is marked Important | Item is now in bold, and remains at the top of the list |
| Close and reopen the application | The item is still in bold and at the top of the list |
| Item in the middle of the list is marked Important | Item is now in bold, and moves to the top of the list |
| Item at the bottom of the list is marked Important | Item is now in bold, and moves to the top of the list |
| Close and reopen the application | All Important items are still in bold and at the top of the list |
| Every item in the list is marked Important | All items are in bold |
| Close and reopen the application | All items are still in bold |
| Item at the top of the list is marked as normal | The item returns to plain text, and moves below the Important items |
| Close and reopen the application | The item is still in plain text, and below the Important items |
| Item in the middle of the Important list is marked as normal | The item returns to plain text, and moves below the Important items |
| Item at the bottom of the Important list is marked as normal | The item returns to plain text, and is below the Important items |
| Close and reopen the application | All Important items are still in bold, and normal items are still in plain text |
| Delete an Important item | Item is deleted |
| Close and reopen the application | Item is still gone |
| Add an item and mark it as Important | The item is added as Important, at the top of the list |
| Add an item and mark it as normal | The item is added as normal, at the bottom of the list |
| Close and reopen the application | The added items appear correctly in the list |
| Mark an Important item as completed | The item is checked, and remains in bold and at the top of the list |
| Close and reopen the application | The item remains checked, in bold, and at the top of the list |
| Mark an Important completed item as incomplete | The item is unchecked, and remains in bold and at the top of the list |

We would again test this on a variety of devices, but I’ve left that off the chart to save space.
Once the feature is released, we won’t need to test it as extensively, unless there’s some change to the feature.  So we can add a few test cases to our To-Do List regression test, like this:

| Test | Expected result |
|------|-----------------|
| Add an item to the list with too many characters | Error message |
| Add an item to the list with invalid characters | Error message |
| Add a blank item to the list | Error message |
| Add an item to the list with a correct number of valid characters | Item is added |
| Close and reopen the application | Item still exists |
| Add an Important item to the list (new) | Item is in bold, and is added to the top of the list |
| Edit the item with too many characters | Error message, and original item still exists |
| Edit the item with invalid characters | Error message, and original item still exists |
| Edit the item so it is blank | Error message, and original item still exists |
| Mark an Important item as normal (new) | Item returns to plain text, and moves to the bottom of the list |
| Mark an item as completed | Item appears checked off |
| Mark an Important item as completed (new) | Item remains in bold text and appears checked off |
| Close and reopen the application | Item still appears checked off |
| Mark a completed item as completed again | No change |
| Mark a completed item as incomplete | Item appears unchecked |
| Mark an incomplete item as incomplete again | No change |
| Close and reopen the application | Item still appears unchecked |
| Delete the item | Item disappears |
| Close and reopen the application | Item is still gone |
| Delete an Important item (new) | Item disappears |

The new test cases are marked with “(new)” here; those markers wouldn’t be in the actual test plan.
Finally, we’d want to add one test to the smoke test to check for this new functionality:

| Test | iOS phone | iOS tablet | Android phone | Android tablet |
|------|-----------|------------|---------------|----------------|
| Log in with incorrect credentials | | | | |
| Log in with correct credentials | | | | |
| Add an event | | | | |
| Edit an event | | | | |
| Delete an event | | | | |
| Add a To-Do item | | | | |
| Add an Important To-Do item | | | | |
| Edit a To-Do item | | | | |
| Complete a To-Do item | | | | |
| Mark a complete item as incomplete | | | | |
| Delete a To-Do item | | | | |
| Log out | | | | |

With spreadsheets like these, you can see how easy it is to keep track of a huge number of tests in a small amount of space.  Adding or removing tests is also easy, because it’s just a matter of adding or removing a row in the table.
Spreadsheets like this can be shared among a team, using a product like Google Sheets or Confluence.  Each time a smoke or regression test needs to be run, the test can be copied and named with a new date or release number (for example, “1.5 Release” or “September 2019”), and the individual tests can be divided among the test team.  For example, each team member could do a complete test pass with a different mobile device.  Passing tests can be marked with a check mark or filled in green, and failing tests can be marked with an X or filled in red.
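The copy-per-release step can even be scripted.  Here’s a minimal sketch, assuming the sheets are stored as CSV files; the function name and filenames are illustrative:

```python
import shutil
from pathlib import Path

def start_test_pass(sheet, label):
    """Copy a master test sheet to a release-labelled file for this test pass.

    e.g. start_test_pass("todo_regression.csv", "1.5_Release")
    creates todo_regression_1.5_Release.csv alongside the original.
    """
    src = Path(sheet)
    dest = src.with_name(f"{src.stem}_{label}{src.suffix}")
    shutil.copyfile(src, dest)
    return dest
```

Keeping the master sheet untouched and only ever marking up the dated copies gives you a simple history of every test pass for free.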
And there you have it!  An easy-to-read, easy-to-maintain manual test case management system.  Instead of spending hours maintaining test cases, you can use that time to automate most of your tests, freeing up even more time for manual exploratory testing.
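As a taste of what that automation might look like, a few of the regression cases above (blank item, too-long item) translate directly into unit tests.  The `validate_item` helper and the 100-character limit are assumptions for illustration; a real suite would call the app’s own API:

```python
MAX_LEN = 100  # assumed character limit for a To-Do item

def validate_item(text):
    """Return an error message for invalid To-Do text, or None if it's acceptable."""
    if not text.strip():
        return "Item cannot be blank"
    if len(text) > MAX_LEN:
        return "Item is too long"
    return None

# Each manual regression row becomes one small, fast check:
def test_blank_item_rejected():
    assert validate_item("   ") is not None

def test_valid_item_accepted():
    assert validate_item("buy milk") is None

def test_too_long_item_rejected():
    assert validate_item("x" * 200) is not None
```

Run under pytest, these cover the validation rows of the regression sheet in milliseconds, leaving the spreadsheet for the checks that genuinely need human eyes.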


Source: Ministry of Testing