All roads lead to exploratory testing

At the European Testing Conference (ETC) 2018, I was inspired to jot down a flowchart about my experience and thought processes from working with customers or teams who want manual test scripts executed.

It reflects my experience of how the conversation / project progresses, and it also serves as a guideline for having that conversation with someone who doesn’t know much about software testing. It had been in my head for a long time, and ETC encouraged it to come out.

Benefits of exploratory testing

When I’m faced with something to test – be it a feature in a software application or a collection of features in a release – my general preference is weighted strongly towards exploratory testing. When someone who doesn’t know a great deal about testing wants me or my team to do testing for them, I would love to educate them on why exploratory testing could be a strong part of the test strategy. My reasons for the right arrow leading directly to exploratory testing can be summarised as:

  • My experience shows that exploratory testing uncovers relevant quality information more quickly and with less overhead.
  • Exploratory testing lets us identify problems we couldn’t foresee (unknown unknowns). Test cases and test scripts let us ask software very specific questions – but only questions we know to ask. We can’t write a test for something we don’t know to test for. And it’s usually the risks that we weren’t aware of that end up biting us.
  • If I am writing a test script with a high level of detail (test data, expected results, individual steps), then I would prefer to write it in a way that a machine can understand. Basically, I want to automate it and have Jubula or Selenium or <tool of choice> execute it for me.
  • Exploratory testing is infinitely more fun and engaging than following a test script. That makes it less likely that the tester will miss out on important information (either through sheer boredom and repetition, or through inattentional blindness – when we focus on one thing and don’t notice anything else, even if it’s very odd and salient). The customer might not necessarily care how happy and enthusiastic testers are, but I do, because they are my team! Also, I’d like to argue that happier testers will do better testing.
  • Exploratory testing is lightweight. It lets us get to the actual testing in a short amount of time, and there are no long, detailed test scripts to maintain.
  • Exploratory testing can easily grow and adapt with a team. It can be combined with checklists (for example, a list of “familiar problems” that are always worth looking into – or, as I like to call it, “the list of things we usually manage to break even though we’re so sure we haven’t affected them”). New test ideas can quickly be incorporated. Through pair testing and workshops, multiple team members and stakeholders can join in. We do exploratory testing days with customer involvement a lot in Bredex projects, and most of our developers have some understanding and training in it, with some being particularly good, practicing testers.
  • It doesn’t require detailed specifications, documents or expected results. These artefacts can be useful for testing, but they are only one of the potential sources of information – and my experience shows that they are often incomplete or out of date, if they have been documented at all.
  • And finally, exploratory testing is where good testers really shine. It is a structured and systematic activity that involves working, thinking and evaluating at multiple levels. It uses all the cool stuff about test design we can use to write test scripts (like equivalence class analysis, boundary testing, …). Models of the software are created in the tester’s mind, and compared and contrasted to knowledge about the product under test, other products, experience, expectations and a host of other know-how gained by the tester thus far. It is simultaneously incredibly natural and a skill that requires explicit practice.
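To make the test design techniques mentioned above a little more concrete, here is a minimal sketch of equivalence class analysis and boundary testing. The validator itself (`is_valid_age`, accepting ages 0–120) is entirely hypothetical, purely for illustration – the same thinking applies whether the checks end up in an automated script or in an exploratory tester’s head.

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical validator: accept ages from 0 to 120 inclusive."""
    return 0 <= age <= 120

# Equivalence class analysis: one representative value per class
# is usually enough, because all members should behave the same.
assert is_valid_age(35) is True      # valid class
assert is_valid_age(-5) is False     # invalid class: below range
assert is_valid_age(200) is False    # invalid class: above range

# Boundary testing: defects tend to cluster at the edges of a range,
# so we probe each boundary and its immediate neighbours.
assert is_valid_age(0) is True       # lower boundary
assert is_valid_age(-1) is False     # just below lower boundary
assert is_valid_age(120) is True     # upper boundary
assert is_valid_age(121) is False    # just above upper boundary
```

An exploratory tester applies exactly this analysis on the fly: spotting the ranges in the software under test and probing their edges, without needing a script to tell them to.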

If you’re as excited about testing as I am, then you’re already sold, right? It sounds like a great way of finding out information with the incredibly skilled tester who will be helping the team.

Taking the other route

It turns out that talking about all the reasons why exploratory testing is great doesn’t always help, though. As frustrating as that is, I’ve realised that the project always ends up involving some (if not a majority of) exploratory testing anyway, which leads us to the questions on the left-hand side of the flowchart. Ideally, this conversation takes place quickly, but it can also be a set of stages where the tester has to find out the information for themselves.

  1. Are there test scripts?

The actual conversation usually goes like this:

Alice: Alright, since I couldn’t convince you how great exploratory testing is, we’ll do it the traditional way and use the test scripts. Where are they saved?

Amanda: …

This conversation can lead to a branch not shown on the flowchart, which is the “then write the test scripts before you execute them” answer. Fortunately (and I’m smiling ironically as I write this), most projects that don’t have a continuous embedded tester present have left the testing so late that it’s plain that writing the test scripts would take more time than we have – and that’s before they’ve even been executed. So not having any test scripts takes us straight to exploratory testing.

  2. Are they up to date?

I’m not usually lucky enough to get off that easily. There are often some test scripts, buried deep in the intestines of some giant tool or hidden on a drive in an Excel file. It’s then usually clear from the “last modified” date how much they’ve been adapted and kept up to date with changes in the software. This is actually a big win for my cause of using exploratory testing, for two reasons:

  1. It means they have already seen how unlikely it is that they will actually maintain test scripts once they have been created.
  2. I can use the title or aim of the test cases as a starting point for what might need to be tested, and demonstrate the technique for the team using something that they have already identified as needing to be tested.
  3. Can they all be done in time?

Assuming the test scripts are maintained and up to date, is it possible to execute them all (or all the relevant ones, assuming we can identify them) in the time we have – either before a release, or within the context of a short development phase? If not, then we’re better off moving to exploratory testing, which will let us quickly gain an overview of the test object and show us which areas need the limited time we have.

  4. Are you going to feel confident after executing this identified set of tests?

I’m going to be honest: although the “I have never reached this point” comment comes after this question on the flowchart, I’ve never actually reached this stage either. But I can theorise that this would be the next logical question. If there are indeed maintained test scripts that can be executed in the time frame we have, my suspicion is that they do not cover enough of the risks we want to address with testing. Even if we think that the set of tests is sufficient (again, for me a very theoretical idea), the inherent danger of missing unknown risks is still there. So even in this case, I would advocate strongly for some exploratory testing.

Exploratory testing wins

And there we have it. In one way or another, as explained above, exploratory testing becomes part of the test strategy for a project. That doesn’t necessarily mean it is the only approach; I have worked on projects that had a mix of exploratory tests, automation, checklists and even small sets of test scripts. What I have never encountered, though, is a project where exploratory testing wasn’t useful, helpful or necessary – either because of its intrinsic value or because we’ve gone down the left side of the flowchart. As I showed at the European Testing Conference in my demo talk, exploratory testing is a skill that uses a great deal of techniques formed by testers while actually testing. It is worth having people in your team (often in a tester role) who have this necessary skill and work on improving it. A good exploratory tester is a fantastic bonus to a team.

Author: Alex Schladebeck

Alex is the head of Software Quality and Test Consulting at BREDEX GmbH. Her passion is communicating with people in IT projects, most specifically about quality. She works with customers, testers, users and developers to help them improve quality in their projects and processes. Her main areas of interest are quality in agile projects, communication, training testers and test automation.

You’ll usually find Alex talking about quality and how it affects the whole development process. She’s also a frequent speaker at conferences where she likes to share her project experiences and learn from other practitioners.

Web profiles

@alex_schl, www.bredex.de, www.schladebeck.de

Comments on “All roads lead to exploratory testing”

  1. I love where your flowchart led you, thanks for the article! You explain the advantages of exploratory testing succinctly.

    I’d add one more benefit from my own experience. Over the years I’ve often had to get the whole delivery team, including programmers, engaged in testing. Years ago, I did write manual test scripts for them. That was tedious and boring for everyone.

    Later, I distilled those into bullet points, but then the non-testers would do a sketchy job and miss problems.

    When I taught them how to write charters and do exploratory testing, suddenly they found it was much more interesting, and they discovered issues and missing features and got a lot of value from it. If you want to engage your whole team in testing (which I think everyone should), teach them ET skills.

    1. That’s a fantastic point, thank you! The next time we meet we should definitely have a chat about how to teach people to design good charters – it’s something I know I still struggle with.
