AUTOMATION 29 – The need for speed

This is part 29 of my automation series – for our story to date, check out the list of articles here.

Technical level: **

Just a couple of posts ago we were looking at the goal of automation.  In general terms, what I get from every group I’ve talked to is something along the lines of “speedy feedback”.  We’re trying to shorten and supercharge the feedback loop from someone committing a change to being told “the system doesn’t work”.

So what’s an ideal time for it to take?  This is important.

Obviously we don’t want it to take longer than doing it manually, otherwise what’s the point?

Different groups had different ideas about this.  Most just wanted some kind of “thumbs up” or “thumbs down” indicator on build quality.  And they wanted it quite quickly.  For others there was a rigorous series of regression tests they needed to run on the system, and they were trying to speed this up from “currently takes a couple of weeks”.

I’m going to call these two drivers “build quality” and “regression”.  They will dramatically change the timeframes we’ll be looking at.  For both drivers though, the first port of call for making things as fast as possible is to embrace the automation pyramid, and to make sure that any checks which make sense at the unit or API level are done there.
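As an illustration of pushing a check down the pyramid, here’s a minimal sketch of an API-level test, assuming pytest and the requests library.  The base URL, endpoint and response fields are hypothetical placeholders, not taken from any real system.

```python
# A sketch of an API-level check; the URL and response shape are hypothetical.
import requests

BASE_URL = "https://test.example.com/api"  # placeholder test environment


def test_customer_lookup_returns_expected_fields():
    response = requests.get(f"{BASE_URL}/customers/42", timeout=10)
    assert response.status_code == 200
    # Verifying the contract here is far cheaper than driving a browser
    # through the equivalent screen.
    assert {"id", "name", "status"}.issubset(response.json())
```

A check like this runs in a fraction of a second; the same verification through the UI would cost you page loads, waits and a whole browser session.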

Driven by “build quality”

A developer finishes a code or configuration change, and wants to commit to the test environment.  But first they have to run a series of checks to make sure what’s being deployed isn’t obviously broken.

Ideally you want these to run as fast as possible.  The ideal time I’ve often heard is about 20 minutes (the office joke is that we’ve got a cafe on our ground floor, and that’s the time it takes to have a pizza made and delivered).  Certainly anything more than 40 minutes is painful.  It leads to “I guess I’ll just kick this off before I leave and hope I’m first in tomorrow morning”. *

There are ways to go faster.  Selenium Grid allows you to run your automated tests in something like a multi-threaded environment, distributing them across machines and browsers so multiple tests execute concurrently.  Katrina Clokie wrote an article about the adoption of this at BNZ here.
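To make that concrete, here’s a minimal sketch of pointing tests at a Grid hub so they can run in parallel.  The hub URL, the page under test and the use of pytest-xdist are my own assumptions for illustration, not anything from Katrina’s article.

```python
# A sketch: the same test can run locally or against a Selenium Grid hub.
# GRID_URL and the application URL are hypothetical placeholders.
import os

import pytest
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

GRID_URL = os.environ.get("GRID_URL", "http://localhost:4444/wd/hub")


@pytest.fixture
def browser():
    options = Options()
    options.add_argument("--headless")
    # The Remote driver sends the session to whichever Grid node is free,
    # so several tests can be executing at the same time.
    driver = webdriver.Remote(command_executor=GRID_URL, options=options)
    yield driver
    driver.quit()


def test_home_page_loads(browser):
    browser.get("https://example.com")  # placeholder application URL
    assert "Example" in browser.title
```

With something like pytest-xdist installed, “pytest -n 4” would then run four of these browser sessions side by side against the Grid.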

Using technology to help you go faster is a definite must; it’s also the driving force behind using the pyramid to make sure your checks sit at the level where they can run most efficiently.  If you’re testing hundreds of business logic combinations, do it at the unit level.
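Here’s a minimal sketch of what that looks like, assuming pytest.  The discount rule and its combinations are invented purely for illustration.

```python
# A sketch of covering business-logic combinations at the unit level.
# calculate_discount is a toy stand-in for your own rules.
import pytest


def calculate_discount(customer_type: str, order_total: float) -> float:
    """Toy business rule used for illustration only."""
    if customer_type == "gold":
        return 0.15 if order_total >= 100 else 0.10
    return 0.05 if order_total >= 100 else 0.0


@pytest.mark.parametrize("customer_type, order_total, expected", [
    ("gold", 150.0, 0.15),
    ("gold", 50.0, 0.10),
    ("standard", 150.0, 0.05),
    ("standard", 50.0, 0.0),
])
def test_discount_rules(customer_type, order_total, expected):
    # Each combination costs milliseconds here, versus seconds per scenario
    # through a browser.
    assert calculate_discount(customer_type, order_total) == expected
```

Scale that table up to hundreds of rows and the suite still finishes in seconds.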

This picture, though, highlights something to be aware of …

If you’ve got an elephant on a motorbike and need to go faster, the first port of call is to buy a more powerful motorbike (faster machines, Selenium Grid).

If that’s still not working, it might be time to put the elephant on a diet.  That means looking through your build tests, taking a hard, critical look at “do we need them all?”, and potentially scaling things back.

Driven by “regression”

If the driver for your automation is regression test coverage, you’re probably aiming for a much bigger suite, with much longer runtimes.

Even though you might be looking at runs measured in days rather than well under an hour, it’s just as important to embrace the pyramid and have your checks running as efficiently as possible.

A few areas I’ve spoken with have embraced both the “build quality” and “regression” drivers.  Typically the post-build checks they run are a cut-down version of the regression test suite (hey, that means reuse), covering just the top critical behaviours.
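One way to get that reuse is to tag the critical checks and select only those for the post-build run.  A minimal sketch, assuming pytest markers; the “smoke” marker name and the stand-in functions are mine, not from anyone’s real suite.

```python
# A sketch of one suite serving both drivers via a pytest marker.
# Register the "smoke" marker in pytest.ini to avoid warnings.
import pytest


def authenticate(user: str, password: str) -> bool:
    """Stand-in for the real authentication call."""
    return password == "correct_password"


def build_reset_email(user: str) -> str:
    """Stand-in for the real email templating."""
    return f"Hi {user}, click here to reset your password."


@pytest.mark.smoke
def test_login_succeeds_with_valid_credentials():
    # Top critical behaviour: runs in the post-build check and in regression.
    assert authenticate("test_user", "correct_password") is True


def test_password_reset_email_wording():
    # Lower-risk behaviour: only exercised in the full regression run.
    assert "reset your password" in build_reset_email("test_user").lower()
```

Then “pytest -m smoke” gives you the fast post-build check, while a plain “pytest” run is the full regression sweep.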

There’s still a need to try and avoid the feeling of the elephant on the bike.  More automation means more to maintain, and you want to avoid carrying forward and servicing tests which are trivial in nature.

At the beginning of this series I talked about the test manager who’d make us run scripts which checked for every bug we’d ever found (no matter how trivial), and how painful that got over time.  When I was at Agile Testing Days in Germany, I happened to meet a fellow tester from the project in that experience report, and we talked about how painful that approach had become.  So much so that the prohibitive cost of testing like that eventually lost us future contract work with that customer (the unit manager happened to be a school friend of mine, and confirmed as much).  And for all that rerunning of those tests, we never once found a recurrence of one of those bugs to warrant the time.

[To be clear, I learned from that test manager the importance of going deeper in testing than I’d normally do.  But I also learned, in trying to engage with that test manager and the business involved, the importance of scaling testing back from quite that level when it isn’t illuminating defects in the product.]

*  When I was a developer, waiting for the build to finish drove me nuts.  If it was a short build, I might update some documentation, then write a humorous meme to my friends.  If it was longer, I’d try and remotely log in to a spare machine and trawl through the defect list to find a relatively simple bug I might be able to knock out of the park.  This might have been a bigger mistake than the comedy meme, as I’d end up switching context and forget some of what I’d done in the compile and test that was going on!