As much as I have an aversion to mission statements, born of years working in organisations where everybody had to have one in order to satisfy some standard or other, my team and I agreed on the following “purpose” for our testing:
“To enable informed decision making by discovering and sharing timely and relevant information about the value of solutions, or threats to that value.”
This is our purpose, this is our cause. We test to discover things, things that are useful, things that help our stakeholders make better decisions. Yes, it’s generic. But it’s a starting point. It’s helping us to trigger new ways of thinking through testing problems.
Perhaps more importantly, it gives us a useful lens through which to challenge ourselves. When someone suggests that we act in a particular way, we can look to our purpose and ask: “does this do anything to help us achieve our purpose? Does it take us in the right direction?”
Steve Jobs had something interesting to say about this: “People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas that there are… innovation is saying no to a thousand things”. In something like testing, which can never be complete, and in which every decision is a trade-off, that kind of focus is critical: we need to be able to say “no” to otherwise reasonable-sounding suggestions and to sillier ones alike. For examining such suggestions, and explaining why we choose to say no, our purpose is a wonderful tool.
This kind of thinking is also making its way down to individual projects. I’ve noticed cases where testers have started to think in terms of the major “exam questions” that they need to answer, and the standard of evidence they need for their stakeholders and regulators. I’ve started hearing testers talking to other team members about what their projects are trying to achieve, and what they might need to know to help them. Test strategy is starting to look less like a bunch of logistics and more like a mandate to go discover. Gradually, setting and refining information objectives for testing seems to be becoming part of the way we work. Not everywhere, but hopefully enough to catch.
I will add a note of caution here. When setting information goals for a project, it is easy to think in confirmatory and binary terms: “does the product do X?”, “can we load the data?”. But some of the most interesting questions to ask are neither confirmatory nor binary: “what happens when?”, “how long does this take?”, “how many users can it handle before it goes BOOM?”. We should avoid closing ourselves off to such questions. Here’s a tip: if you reframe your testing objectives as questions, try to make sure they are not all closed questions. Open questions are important.
In addition to our purpose, my team and I agreed a set of eight principles: context, discovery, integration, accountability, transparency, information value, lean and learning.
I’m not going to go into the detail of these here. I suspect much of the value lies less in the concepts, or the particular wording, than in how we got there: after spending many long hours working these out, sweating the semantics, they’re highly personal to those testing with me. The process of developing and agreeing them as a group was insanely valuable: forget the shallow glossary of terms the other guys peddle, this is a real common language. We understand each other when one of us says “transparency” or “lean”, because we’ve invested time getting to the bottom of how those labels matter to us, and we continue to invest time in sharing those meanings with those we work with.
Principles can be values, and they can be heuristics that help guide our thinking. They are not prescriptive or detailed, indeed, they can often be open to interpretation, or contradict one another (such tensions may even be a useful indicator that you’re pitching principles at the right level). This means that the user is forced to THINK when applying them, and this encourages the use of judgment.
The alternative is rules: simple mechanistic formulae and explicit instructions. Fill out this template, use this technique, check this box. Principles are different. Principles are pivotal in empowering the tester. What we’re trying to do is regulate the testing system rather than simply control it, and principles have a pedigree in regulation.
In 2007 the UK’s Financial Services Authority published a treatise on “Principles Based Regulation” that described a trend away from rules in the regulation of the financial services industry in the UK. In this, they described their rationale:
- Large sets of detailed rules are a significant burden on industry
- No set of rules is able to address changing circumstances; worse, rules can delay or even prevent innovation. They tend to be retrospective, i.e. they solve yesterday’s problems rather than today’s or tomorrow’s
- Detailed rules can divert attention towards adhering to the letter rather than the purpose of regulation, i.e. they encourage rule-following behaviour: compliance at the expense of doing what’s right.
The FSA aren’t alone. You see this in a number of domains: regulation, law, accounting and audit. In a 2006 paper called “Principles not rules: a question of judgment”*, ICAS, the Institute of Chartered Accountants of Scotland, aired views similar to those of the FSA:
- When using rules, one’s objectives can become lost in a quest for compliance
- In contrast to principles, rules discourage the use of judgment and deskill professionals.
So, do you want testers burdened by a large body of rules (dare I say it, a standard?) describing how they should behave, deskilled and reduced to simply taking and obeying orders? Or do you want skilled testers who can think for themselves, applying professional judgment to choose, adapt or even innovate testing practices? If the latter, then I suggest you want to be thinking in terms of principles rather than rules.
Before moving on, it is worth mentioning that the SEC (2003) point out that principles-based regulation does not equal principles-only regulation, and indeed the FSA saw rules and principles as coexisting: the trick to a robust regulatory framework is in finding the right balance. In our framework, we place great emphasis on principles, but do maintain a handful of rules: for example, concerning the use of production data in testing. There are laws, after all.
Perhaps one of the more interesting aspects of my role this year has been finding this balance for some of the firm’s largest programmes and change initiatives. In each case we started out with long wish lists of rules, driven by a desire for consistency, yet, when we considered the legitimate variation between projects, we ended up agreeing principles instead, supported by a bare minimum set of rules. To avoid disempowering people, a light touch is required.
We believe that testing should be organised at the level at which delivery is performed, because the people closest to a context are those best suited to make the right decisions about what practices are needed. As a result, we do not specify any particular practices in this layer of our framework: we avoid dictating testing practices to projects, we do not push standards, we do not have a testing process document, we do not have templates. Teams, who own their own testing, are free to create, adopt or adapt such things, based on their own needs.
That is not to say that it’s a free for all. I have set a clear expectation that delivery teams are accountable for the quality of their own testing and must remain transparent in what they do. This is critical: empowerment can only thrive when there is trust.
And this is the elephant in the room. Many large enterprises are built on a foundation of mistrust: we manage projects through command and control because there is no trust; we demand suppliers comply with standards because there is no trust; we maintain elaborate sets of rules because there is no trust. To change things, we need trust; and trust is dependent on accountability and transparency.
We acknowledge that testing is a service, and that we are accountable to our stakeholders for the quality of our testing. We define the quality of our testing in terms of information value: whether the information we provide is useful, i.e. timely, relevant and consumable. We recognise that information is of no value if not shared, so we must provide transparency into what we discover and, to warrant those findings, into the extent, progress and limitations of our testing.
The alert amongst you may have noticed that these are three of our principles: accountability, transparency and information value. The prerequisites for trust and empowerment are firmly rooted in our framework.
Figuring out what kinds of information people need is hard. Evaluating software is hard. Sharing what you find in a way that is accessible to stakeholders is hard.
Nothing about testing is easy. It is hard enough without constraining ourselves unnecessarily with inflexible rules! But we also need to acknowledge that, when you’ve been living under a regime of command and control for a while, it can be hard to empower yourself.
This is where our testing community comes in. To break the rules-based culture, we need to create an environment where people are comfortable sharing ideas and challenging one another. We need an environment where people can ask for help and support one another. If our people are going to gain confidence and grow into the role of empowered testers, then we need to make sure that there is a support network for them. You’d be foolish to learn a trapeze act without a safety net, and I need to make sure that those testing in my corner of the organization have one. It’s early days, and this is an area where I intend to make significant investment in the coming year.
This is proving to be a fascinating journey. It’s a journey of respect, respecting people enough to give them a chance to rise to the challenge of becoming excellent testers, freeing them from the tradition of command and control that has so constrained their work. The empowerment paradox suggests that we cannot directly empower others, but by removing these obstacles, perhaps we can create the conditions for them to empower themselves.
*My thanks to James Christie for discovering and sharing this.
Source: exploring bug uncertainty
Principles Not Rules, part 3