Automating To Detect Change

Much of “Test Automation” falls into the set of “Automated Execution to Detect Change”. In this post I will explain what that means and provide some examples.

Change Detection

Much of the automated execution code that I create detects change.


  • detect if an acceptance criterion is no longer met
  • detect if a functional path can no longer be followed
  • detect if some data combination can no longer be used

The CHANGE detection is implemented through assertions in the code.


    // a JUnit assertion - the value checked here is illustrative
    Assertions.assertEquals(
            "Todo Admin View",
            driver.getTitle());

Amplify, Absorb, Attenuate

I took the concepts of Amplify, Absorb, Attenuate from the writings of Stafford Beer on the Cybernetics of Management.

When our code detects change it can:

  • Amplify, make the change more obvious
  • Absorb, reduce or minimise the impact of the change
  • Attenuate, make the change less obvious or hide it altogether


The assertion below Amplifies the change. If the assertion fails, something has changed: the test throws an exception and the build fails.

    // if the title changes, the assertion throws and the build fails
    // (the value checked here is illustrative)
    Assertions.assertEquals(
            "Todo Admin View",
            driver.getTitle());


The location strategy below helps absorb the impact of change to the link text e.g. it could be:

  • “more details”
  • “see details”
  • “click for more details”
  • “see details here”
  • etc.
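For example, Selenium's By.partialLinkText matches on a substring of the link text. As a plain-Java sketch of why that absorbs the wording changes listed above (the class and helper names here are mine, not from the original post):

```java
import java.util.List;

public class PartialLinkTextSketch {

    // mirrors the idea behind By.partialLinkText:
    // locate by a stable substring rather than the full text
    static boolean matchesPartial(String linkText, String partial) {
        return linkText.contains(partial);
    }

    public static void main(String[] args) {
        List<String> variants = List.of(
                "more details",
                "see details",
                "click for more details",
                "see details here");

        // all the variants still match on "details",
        // so a partial match absorbs these wording changes
        for (String text : variants) {
            System.out.println(text + " -> " + matchesPartial(text, "details"));
        }
    }
}
```

The trade-off, discussed later in this post, is that a partial match may also absorb changes we would rather be told about.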

Other absorption strategies include:

  • self-healing scripts
  • multiple fall back location strategies
  • etc.

The point is that we allow ‘some’ change without impacting the test.

Abstraction layers can help us absorb the impact of change because they often isolate the change to a single place in the code e.g. a locator strategy for a menu item changes, but we only have to change the locator in a single ‘menu’ abstraction.
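A minimal sketch of such a ‘menu’ abstraction (the selector values and class name are illustrative, not from the original post): if the menu’s markup changes, only this one class needs editing.

```java
public class MenuLocators {

    // the single place that knows how menu items are located;
    // if the markup changes, only this constant changes
    private static final String MENU_ROOT = "#main-menu";

    // build a CSS selector for a named menu item
    public static String menuItem(String itemName) {
        return MENU_ROOT + " a[data-item='" + itemName + "']";
    }

    public static void main(String[] args) {
        System.out.println(MenuLocators.menuItem("todos"));
    }
}
```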

Retry strategies on intermittent tests can absorb issues with our synchronisation or automating approach. We might also view this as a way of attenuating the signal that our approach to automating is less effective and might be introducing risk.
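A minimal retry wrapper, sketched in plain Java (the names are mine): it absorbs intermittent failures, while each silent retry also attenuates the signal that our synchronisation might be weak.

```java
import java.util.function.Supplier;

public class Retry {

    // retry an action a fixed number of times, absorbing intermittent failures
    public static <T> T retry(Supplier<T> action, int maxAttempts) {
        RuntimeException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                // each swallowed failure attenuates a synchronisation signal
                lastFailure = e;
            }
        }
        // amplify only when every attempt has failed
        throw lastFailure;
    }

    public static void main(String[] args) {
        // an action that fails twice, then succeeds on the third attempt
        final int[] calls = {0};
        String result = retry(() -> {
            calls[0]++;
            if (calls[0] < 3) {
                throw new RuntimeException("intermittent failure");
            }
            return "passed on attempt " + calls[0];
        }, 5);
        System.out.println(result);
    }
}
```

Counting or logging the retries, rather than discarding them, would recover some of the attenuated information.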


The code below Attenuates the change, e.g. we hide the assertion if the button cannot be found.

    try{
        // the locator used here is illustrative
        WebElement button = driver.findElement(By.id("admin-view-button"));
        Assertions.assertEquals("Todo Admin View",
                button.getText());
    }catch(NoSuchElementException e){
        // do nothing - a missing button silently skips the assertion
    }

We might find attenuation code in the form of:

  • only run a test on certain browsers
  • @Ignore intermittent tests
  • @Ignore failing tests

Attenuation is less likely to be found in Automating for Change Detection, but it might be found in general process Automating, where we might report warnings instead of failing, or ignore errors because the point is to keep applying load rather than stopping the execution on failure.

Automating For Change Detection

Automating for Change Detection is not always for the purpose of Test Automation.

Consider the following @Test method which I wrote recently.

  • it is an @Test method because JUnit was a convenient framework to run the execution, not because it is ‘Test Automation’

I wrote it to help me investigate functionality in WebDriver 4, and to alert me to potential specific changes in the library.

@Test
public void checkDriversWhichHaveDevTools(){

    Map<Class, Boolean> expectedMappings = new HashMap<>();

    expectedMappings.put(ChromeDriver.class, true);
    expectedMappings.put(FirefoxDriver.class, false);
    expectedMappings.put(ChromiumDriver.class, true);
    expectedMappings.put(EdgeDriver.class, true);
    expectedMappings.put(InternetExplorerDriver.class, false);
    expectedMappings.put(OperaDriver.class, false);
    expectedMappings.put(SafariDriver.class, false);

    Map<String, Boolean> hasDevTools = new HashMap<>();

    Boolean asExpected = true;
    String unexpected = "";

    for(Map.Entry<Class, Boolean> entry : expectedMappings.entrySet()) {

        Class driverClass = entry.getKey();

        Boolean hasDevToolInterface =
                HasDevTools.class.isAssignableFrom(driverClass);

        hasDevTools.put(driverClass.getSimpleName(), hasDevToolInterface);

        if(hasDevToolInterface != entry.getValue()){
            asExpected = false;
            unexpected += driverClass.getSimpleName() + " | ";
        }
    }

    for(Map.Entry<String, Boolean> mapping : hasDevTools.entrySet()){
        String does = mapping.getValue() ?
                     "implements" : "does not implement";
        System.out.println(String.format("- %s (%b) [%s hasDevTools]",
                mapping.getKey(), mapping.getValue(), does));
    }

    Assertions.assertTrue(asExpected, unexpected);
}

This @Test checks which drivers in WebDriver 4 support the HasDevTools interface, and it fails if any of the drivers change to support, or stop supporting, the interface, e.g. if SafariDriver starts to support the dev tools interface then this test would fail and alert me to the change.

The ‘failure’ is not ‘bad’. The failure is an amplification of a change; it is additional information that I can then use to expand the scope of tooling options available to me.

This is not a condition that I want to use for ‘acceptance’ or ‘evaluating the quality’ of a product, so it doesn’t really fit into a “Test Automation” strategy.


  • the @Test above generates a report, which is designed for me to read:

    - OperaDriver (false) [does not implement hasDevTools]
    - EdgeDriver (true) [implements hasDevTools]
    - InternetExplorerDriver (false) [does not implement hasDevTools]
    - FirefoxDriver (false) [does not implement hasDevTools]
    - ChromiumDriver (true) [implements hasDevTools]
    - ChromeDriver (true) [implements hasDevTools]
    - SafariDriver (false) [does not implement hasDevTools]

  • I chose to amplify changes by asserting on an expected state, because I wanted my attention to be drawn to the report when it contained new information.
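The isAssignableFrom pattern in the @Test above can be experimented with using only JDK classes; in this sketch java.util.RandomAccess stands in for the HasDevTools interface (the expected mapping and class name are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.RandomAccess;

public class InterfaceChangeCheck {

    // returns the names of any classes whose interface support
    // differs from the expected mapping; empty means "as expected"
    public static String unexpectedImplementations() {
        Map<Class<?>, Boolean> expected = new HashMap<>();
        expected.put(ArrayList.class, true);   // implements RandomAccess
        expected.put(LinkedList.class, false); // does not

        String unexpected = "";
        for (Map.Entry<Class<?>, Boolean> entry : expected.entrySet()) {
            boolean implementsIt =
                    RandomAccess.class.isAssignableFrom(entry.getKey());
            if (implementsIt != entry.getValue()) {
                unexpected += entry.getKey().getSimpleName() + " | ";
            }
        }
        return unexpected;
    }

    public static void main(String[] args) {
        System.out.println("unexpected: [" + unexpectedImplementations() + "]");
    }
}
```

If a future JDK changed which collections implement RandomAccess, this check would amplify that change in exactly the way the WebDriver @Test does.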

Other Amplification and Absorption Approaches

Many build and release tools Automate a process and Amplify changes, but not in the context of “Test Automation”.

  • if the code does not compile then we want to amplify this and stop the build process
  • if a step in the release process fails then we want to amplify this and stop the build; we might also want to Absorb the failure by automating a rollback process if the deployment had made changes
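The amplify-then-absorb shape of a release step might be sketched like this (the deploy and rollback actions are illustrative stand-ins, not a real release tool):

```java
public class ReleaseStep {

    // records what happened, so the example is checkable
    static final StringBuilder log = new StringBuilder();

    static void deploy(boolean failing) {
        log.append("deploy;");
        if (failing) {
            throw new RuntimeException("deployment step failed");
        }
    }

    static void rollback() {
        // absorb: undo the changes the deployment made
        log.append("rollback;");
    }

    public static void release(boolean deploymentFails) {
        try {
            deploy(deploymentFails);
        } catch (RuntimeException e) {
            rollback();
            // amplify: the build still stops on the failure
            throw e;
        }
    }

    public static void main(String[] args) {
        try {
            release(true);
        } catch (RuntimeException e) {
            System.out.println("release failed, log: " + log);
        }
    }
}
```

The rollback absorbs the impact on the environment, but rethrowing still amplifies the failure to the build.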

Attenuation is often less obvious when automating.

Attenuation often appears when we haven’t thought thoroughly about the need to Amplify or Absorb change.

  • silent failures in release scripts are often ‘bad’ and put the release and environment at risk

Change Detection for Testing

When we automate for the purpose of asserting on conditions, we find it easy to review the code to see if we are detecting the ‘changes’ we want.

We do this by reviewing the assertions in the code and checking that they match the scope of conditions for which we want to be informed of change.

When reviewing the code we can also review in terms of:

  • What else could we Amplify?
    • what changes are not detected? And is that important?
  • Could we make it easier to Absorb the impact of change?
    • if we find that we are changing @Test code frequently, but not the assertions in the test code, then we could probably do a better job creating abstractions that help absorb the impact of the change.
  • What are we Absorbing?
    • ensure that we are absorbing issues and changes appropriately, e.g. by using partialLinkText I may be making it easier for a @Test to execute, but I may be absorbing changes that are important; I may be accidentally Attenuating information.

Attenuation and Absorption

Attenuation and Absorption often overlap. We need to take care to ensure that important changes are not hidden by that intersection.

In the test that follows you can see that:

  • I have a try catch block, because the test was failing on IE
  • I wanted to Absorb the impact of running the test in a build that was cross browser
  • But I didn’t want to Attenuate away the information that IE might have changed to support the functionality
  • So I added code to Amplify a change, if IE did support the functionality being demonstrated
@Test
public void driverGetTitleWithCSSAbsoluteFromRoot(){

    WebDriver driver;
    String pageTitle = "Welcome to the Find By Playground";
    driver = Driver.get("" +
            ""); // the page url was elided from this snippet

    // try catch block added for IE which
    // does not like starting css at html
    try{
        WebElement element;

        element = driver.findElement(
                      By.cssSelector("html > head > title"));

        // the actual check in the middle of the test
        Assertions.assertEquals(pageTitle, driver.getTitle());

        if(Driver.currentDriver == Driver.BrowserName.IE){
            throw new RuntimeException(
                         "IE now allows CSS starting at html");
        }

    }catch(NoSuchElementException e){
        if(Driver.currentDriver != Driver.BrowserName.IE){
            throw new RuntimeException(
                "Expected only IE to fail on CSS starting at html");
        }
    }
}
In the above code, ‘Driver’ is a factory class I use which provides me with the current driver, and it knows what type to use from a property, so I can check what type of browser I’m using, e.g. IE, Firefox etc.

The code expects the test to fail under IE, for a condition unrelated to the assert, so I wrap the body of the test in a try/catch block. And the try/catch block is there to alert me if the behaviour of the test changes.

If the catch block is entered then the code throws an exception when the browser is not IE, because it only expects IE to enter the catch block, so if any other browser does then it is a change in behaviour of Selenium WebDriver that I want to be informed about (hence the exception to fail the test).

If the try block carries on past the assert, then the code throws an exception when the browser is IE, to alert me that the behaviour in IE has changed. In the version of IE that the test was written against, IE would always enter the catch block.

The actual test code itself is the assert in the middle, which would still ‘fail’ or ‘pass’ for different browsers.

Really this was a workaround for a limitation in the IE driver at the time, and the try/catch block was there to alert me when the IE Driver changed to allow access to elements in the <head> of the page using CSS selectors. (Which it now does, by the way).

This approach resulted in more complicated code than creating separate tests for the specific conditions. But it absorbed the more complicated changes to the suites and test execution that separate tests might have created.
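The workaround-alerting pattern generalises beyond this one test. A plain-Java sketch under my own names (the BrowserName enum and the UnsupportedOperationException standing in for the IE failure are illustrative):

```java
import java.util.function.Supplier;

public class WorkaroundAlert {

    enum BrowserName { IE, FIREFOX, CHROME }

    // wrap an operation we expect to fail only on one browser,
    // and amplify any change to that expectation
    public static <T> T expectFailureOnlyOn(BrowserName failing,
                                            BrowserName current,
                                            Supplier<T> operation) {
        try {
            T result = operation.get();
            if (current == failing) {
                // the workaround is no longer needed - amplify that
                throw new IllegalStateException(
                        current + " no longer fails: remove the workaround");
            }
            return result;
        } catch (UnsupportedOperationException e) {
            if (current != failing) {
                // a browser we did not expect to fail has changed - amplify
                throw new IllegalStateException(
                        "Expected only " + failing + " to fail, but "
                        + current + " did");
            }
            // absorb the known failure on the expected browser
            return null;
        }
    }

    public static void main(String[] args) {
        // the operation succeeds on CHROME, so the result is returned
        String title = expectFailureOnlyOn(BrowserName.IE, BrowserName.CHROME,
                () -> "Find By Playground");
        System.out.println(title);
    }
}
```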

I share more examples of this approach in “Coding workarounds to be alerted when they are not needed”.

Automating To Detect Change

In summary:

  • Traditional test automation is usually coded to Amplify detected changes.
  • “Automating to Detect Change” does not only apply to Test Automation.
  • Much of our automating should Amplify ‘changes’ or ‘unexpected conditions’.
  • Remaining aware of the additional concepts of “Absorb” and “Attenuate” can boost the effectiveness of a review process to identify if we lack coverage of amplification, or if we are ‘hiding’ information through absorption or attenuation.
  • Automating for results and information goes beyond the domain of Testing.
  • We can incorporate automating in our Test Processes in ways that augment our process, rather than attempting to justify all automating in terms of asserting on condition coverage or condition checking.

Source: EvilTester
