Hiccupps - James Thomas

Mum's the Word

Tue, 11/29/2016 - 08:18

A few weeks ago I put out an appeal for resources for testers who are pulled into live support situations:
"Looking for blogs, books, videos or other advice for testers pulled into real-time customer support, e.g. helping diagnose issues #testing" — James Thomas (@qahiccupps) October 28, 2016

One suggestion I received was The Mom Test by Rob Fitzpatrick, a book intended to help entrepreneurs or sales folk to efficiently validate ideas by engagement with an appropriate target market segment. And perhaps that doesn't sound directly relevant to testers?

But it's front-loaded with advice for framing information-gathering questions in a way which attempts not to bias the answers ("This book is specifically about how to properly talk to customers and learn from them"). And that might be just what a tester in a support conversation needs, right?

The conceit of the name, I'm pleased to say, is not that mums are stupid and have to be talked down to. Rather, the insight is that "Your mom will lie to you the most (just ‘cuz she loves you)" but, in fact, if you frame your questions the wrong way, pretty much anyone will lie to you and the result of your conversation will be non-data, non-committal, and non-actionable. So, if you can find ways to ask your mum questions that she finds it easy to be truthful about, the same techniques should work with others.

The content is readable, and seems reasonable, and feels like real life informed it. The advice is - hurrah! - not in the form of some arbitrary number of magic steps to enlightenment, but examples, summarised as rules of thumb. Here are a few of the latter that I found relevant to customer support engagements, with a bit of commentary:
  • Opinions are worthless ... go for data instead
  • You're shooting blind until you understand their goals ... or their idea of what the problem is
  • Watching someone do a task will show you where the problems and inefficiencies really are, not where the customer thinks they are ... again, understand the real problem, gather real data
  • People want to help you. Give them an excuse to do so ... offer opportunities for the customer to talk; and then listen to them
  • The more you’re talking, the worse you’re doing ... again, listen

These are useful, general, heuristics for talking to anyone about a problem and can be applied with internal stakeholders at your leisure as well as with customers when the clock is ticking. (But simply remembering Weinberg's definition of a problem and the Relative Rule has served me well, too.)
Given the nature of the book, you'll need to pick out the advice that's relevant to you - hiding your ideas so as not to seem like you're needily asking for validation is less often useful to a tester, in my experience - but as someone who hasn't been much involved in sales engagements I found the rest interesting background too.
Categories: Blogs

Cambridge Lean Coffee

Thu, 11/24/2016 - 07:23

This month's Lean Coffee was hosted by Abcam. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

Suggest techniques for identifying and managing risk on an integration project.
  • Consider the risk in your product, risk in third-party products, risk in the integration
  • Consider what kinds of risk your stakeholders care about; and to whom (e.g. risk to the bottom line, customer data, sales, team morale ...)
  • ... your risk-assessment and mitigation strategies may be different for each
  • Consider mitigating risk in your own product, or in those you are integrating with
  • Consider hazards and harms
  • Hazards are things that pose some kind of risk (objects and behaviours, e.g. a delete button, or corruption of a database)
  • Harms are the effects those hazards might have (e.g. deleting unexpected content, and serving incomplete results)
  • Consider probabilities and impacts of each harm, to provide a way to compare them
  • Advocate for the resources that you think you need 
  • ... and explain what you won't (be able to) do without them
  • Take a bigger view than a single tester alone can provide
  • ... perhaps something like the Three Amigos (and other stakeholders)
  • Consider what you can do in future to mitigate these kinds of risks earlier
  • Categorise the issues you've found already; they are evidence for areas of the product that may be riskier
  • ... or might show that your test strategy is biased
  • Remember that the stuff you don't know you don't know is a potential risk too: should you ask for time to investigate that?
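One way to make that probability-and-impact comparison concrete (a sketch of my own, with invented harms and scores, not something from the discussion) is to give each harm a numeric score and rank by the product:

```python
# Toy risk ranking: score each harm as probability x impact so that
# otherwise-incomparable harms can be ranked on one scale.
# All names and numbers here are invented for illustration.
harms = {
    "deleting unexpected content": {"probability": 0.1, "impact": 9},
    "serving incomplete results": {"probability": 0.5, "impact": 4},
    "slow response under load": {"probability": 0.8, "impact": 2},
}

def risk_score(harm):
    # Expected loss, crudely: likelihood times severity.
    return harm["probability"] * harm["impact"]

# Highest-risk harms first.
ranked = sorted(harms, key=lambda name: risk_score(harms[name]), reverse=True)
```

A single number hides a lot - severity scales are subjective and probabilities are guesses - so a ranking like this is a conversation starter with stakeholders rather than an answer.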

Didn't get time to discuss some of my own interests: How-abouts and What-ifs, and Not Sure About Uncertainty.

Can templates be used to generate tests?
  • Some programming languages have templates for generating code 
  • ... can the same idea apply to tests?
  • The aim is to code tests faster; there is a lot of boilerplate code (in the project being discussed)
  • How would a template know what the inputs and expectations are?
  • Automation is checking rather than testing
  • Consider data-driven testing and QuickCheck
  • Consider asking for testability in the product to make writing test code easier (if you are spending time reverse-engineering the product in order to test it)
  • ... e.g. ask for consistent Ids of objects in and across web pages
  • Could this (perceived) problem be alleviated by factoring out the boilerplate code?
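To make the data-driven suggestion concrete, here's a minimal sketch (my own illustration, with an invented system under test) of factoring repeated boilerplate into one table-driven check, so that adding a case means adding a row rather than a new test method:

```python
import unittest

# Each row is (input, expected). The "system under test" here is just
# str.upper(), standing in for whatever the real project exercises.
CASES = [
    ("hello", "HELLO"),
    ("Mixed", "MIXED"),
    ("", ""),
]

class UppercaseTests(unittest.TestCase):
    def test_cases(self):
        for text, expected in CASES:
            # subTest reports each failing row separately.
            with self.subTest(text=text):
                self.assertEqual(text.upper(), expected)
```

This only addresses the boilerplate, though; as raised in the discussion, a template cannot know what the inputs and expectations should be - someone still has to supply the table.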

How can the coverage of manual and automated testing be compared?
  • Code coverage tools could, in principle, give some idea of coverage
  • ... but they have known drawbacks
  • ... and it might be hard to tie particular tester activity to particular paths through the code to understand where overlap exists
  • Tagging test cases with e.g. story identifiers can help to track where coverage has been added (but not what the coverage is)
  • What do we really mean by coverage?
  • What's the purpose of the exercise? To retire manual tests?
  • One participant is trying to switch to test automation for regression testing
  • ... but finding it hard to have confidence in the automation
  • ... because of the things that testers can naturally see around whatever they are looking at, that the automation does not give
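One crude way to attempt the comparison (a sketch of my own, with invented data, along the lines that per-activity line-coverage recording makes possible) is to treat each activity's coverage as a set of executed lines per file and diff the sets:

```python
# Lines executed per file, as a coverage tool might report them, for a
# manual session and an automated run. All data here is invented.
manual = {"app.py": {1, 2, 3, 5, 8}, "db.py": {1, 4}}
automated = {"app.py": {1, 2, 3, 4}, "db.py": {1}}

def coverage_only_in(a, b):
    """Per file, the lines covered by activity a but not by activity b."""
    return {f: a.get(f, set()) - b.get(f, set()) for f in a}

only_manual = coverage_only_in(manual, automated)
only_automated = coverage_only_in(automated, manual)
```

As the group noted, line coverage says nothing about the data, oracles or observations involved, so identical line sets would not mean the two activities tested the same things.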

What are the pros and cons of being the sole tester on a project?
  • Chance to take responsibility, build experience ... but can be challenging if the tester is not ready for that
  • Chance to make processes etc. that work for you ... but perhaps there are efficiencies in sharing process too
  • Chance to own your work ... but miss out on other perspectives
  • Chance to express yourself ... but can feel lonely
  • Could try all testers on all projects (e.g. to help when people are on holiday or sick)
  • ... but this is potentially expensive and people complain about being thinly sliced
  • Could try sharing testing across the project team (if an issue is that there's insufficient resource for the testing planned)
  • Could set up sharing structures, e.g. team standup, peer reviews/debriefs, or pair testing across projects

What do (these) testers want from a test manager?
  • Clear product strategy
  • As much certainty as possible
  • Allow and encourage learning
  • Allow and encourage contact with testers from outside the organisation
  • Recognition that testers are different and have different needs
  • Be approachable
  • Give advice based on experience
  • Work with the tester 
  • ... e.g. coaching, debriefing, pointing out potential efficiency, productivity, testing improvements
  • Show appreciation
  • Must have been a tester

A Mess of Fun

Tue, 11/22/2016 - 09:11

In The Dots I referenced How To Make Sense of Any Mess by Abby Covert. It's a book about information architecture for non-information architects, one lesson per page, each page easily digestible on its own, each page informed by the context on either side.

As a tester, I find that there's a lot here that intersects with the way I've come to view the world and how it works and how I work with and within it. I thought it would be interesting to take a slice through the book by noting down phrases and sentences that I found thought-provoking as I went.

So, what's below is information from the book, selected and arranged by one reader, and so it is also information about that reader.

Mess: a situation where the interactions between people and information are confusing or full of difficulties. (p. 169)

Messes are made of information and people. (p.11)

Information is whatever is conveyed or represented by a particular arrangement or sequence of things. (p. 19)

The difference between information, data, and content is tricky, but the important point is that the absence of content or data can be just as informing as the presence. (p. 21)

Intent is language.  (p. 32)

Think about nouns and verbs. (p. 98)

Think about relationships between nouns and verbs. (p. 99)

I once spent three days defining the word "customer". (p. 88)

We create objects like maps, diagrams, prototypes, and lists to share what we understand and perceive. Objects allow us to compare our mental models with each other. (p. 57)

People use aesthetic cues to determine how legitimate, trustworthy, and useful information is.  (p. 64)

Ambiguous instructions can weaken our structures and their trustworthiness. (p. 131)

Be careful not to fall in love with your plans or ideas. Instead, fall in love with the effects you can have when you communicate clearly. (p. 102)

Why, what and how are deeply interrelated. (p. 43)

We make places. (p. 86)

No matter what you're making, your users will find spaces between places. (p. 87)

We listen to our users and our guts. There is no one right way. There is only your way. (p. 101)

Murk: What alternative truths or opinions exist about what you're making or trying to achieve? (p. 113)

Uncertainty comes up in almost every project. But you can only learn from those moments if you don't give up. (p. 118)

One tiny decision leads to another, and another. (p. 85)

Perfection isn't possible, but progress is. (p. 148)

The Dots

Sun, 11/20/2016 - 08:28

One of the questions that we asked ourselves at CEWT 3 was what we were going to do with the things we'd discovered during the workshop. How would, could, should we attempt to share any insights we'd had, and with who?

One of the answers I gave was that Karo and I would present our talks at Team Eating, the regular Linguamatics brown-bag lunch get-together. And this week we did that, to an audience of testers and non-testers from across the company. The talks were well-received and the questions and comments were interesting.

One of them came from Rog, our UX Specialist. I presented a slide which showed how testing, for me, is not linear or strictly hierarchical, and it doesn't necessarily proceed in a planned way from start to finish, and it can involve people and objects and information outside of the software itself. Testing can be gloriously messy, I probably said:

His comment was (considerably paraphrased) that that's how design feels to him. We spoke for a while afterwards and he showed me this, the squiggle of design:

I saw his squiggle and raised him a ring, showing images from a blog post I wrote earlier this year. In Put a Ring on It I described how I attempt to deal (in testing, and in management) with an analog of the left-hand end of that squiggle, by constraining enough uncertainty that I can treat what remains as atomic and proceed without needing to consider it further, at that time, so that I can shift right:

He reminded me that, perhaps a year earlier, we'd spoken about information architecture and that this was relevant to the discussion we were having right there and then. He lent me a book, How to Make Sense of Any Mess by Abby Covert.

The book discusses information-based approaches to understanding a problem, working out what kinds of changes might exist and be acceptable, choosing a route to achieving a change, monitoring progress towards it, and adapting to whatever happens along the way. I started reading it that evening and came immediately across something that resonated strongly with me:
Intent is Language: Intent is the effect we want to have on something ... The words we choose matter. They represent the ideas we want to bring into the world ... For example, if we say we want to make sustainable, eco-centered design solutions, we can't rely on thick, glossy paper catalogs to help us reach new customers. By choosing those words we completely changed our options.

Covert goes on to suggest that for our designs we list two sets of adjectives: those that describe properties we want and those that describe properties we don't want. The second list should not be simple negative versions of the first and the aim should be that a neutral observer should not be able to tell which is the desired set. In this way, we can attempt to capture our intent in language in a way which can be shared with others and hopefully result in a shared vision of a shared goal.

Later in the book, she suggests some structures for managing the information that is intrinsic to any mess-resolution project. Here I saw a link to another book that I'm reading at the moment, one that I borrowed from Sime, another colleague at Linguamatics: Beautiful Evidence by Edward Tufte.

This book considers ways to improve the presentation of evidence, of information, by removing anti-patterns, by promoting clarity, by exploiting aspects of the human perceptual system. It does this in order to provide increased opportunity for greater data density, enhanced contextual information about the data, the provision of comparative data, and ultimately more useful interpretation of the data presented.

Covert's high-level information structures are useful tools for organisation of thoughts and, in one phrase - "keep it tidy" - with one brief page of prose to accompany it, she opens a door into Tufte's more detailed world.

I had begun to reflect on these things while speaking to another couple of my colleagues and noted that I continue to see value returned to me by reading around testing and related areas. The value is not necessarily immediate, but I perceive that, for example, it adds depth to my analyses, it allows me to make connections that I otherwise would not, it helps me to avoid dead ends by giving a direction that might otherwise not have been obvious.

I was a long way into my career (hindsight now shows me) before I realised that reading of this kind was something that I could be doing regularly rather than only when I had a particular problem to solve. I now read reasonably widely, and also listen to a variety of podcasts while I'm walking to work and doing chores.

And so it was interesting to me that yesterday, with all of the above fresh in my mind, while I was raking up the leaves in our back garden, a recently-downloaded episode of You Are Not So Smart with James Burke came on. In his intro, David McRaney says this, reflecting Burke's own words from a television series made in the 1970s, called Connections:
Innovation took place in the spaces between disciplines, when people outside of intellectual and professional silos, unrestrained by categorical and linear views, synthesized the work of people still trapped in those institutions ...

Innovation, yes, and testing.

Edit: after reading this post, Sime pointed out Jon Bach's graphical representation of his exploratory testing, which bears a striking surface resemblance to the squiggle of design:


Something of Note

Thu, 11/17/2016 - 23:16
The Cambridge Tester meetup last week was a workshop on note-taking for testers by Neil Younger and Karo Stoltzenburg. An initial presentation, which included brief introductions to techniques and tools that facilitate note-taking in various ways (Cornell, mind map, Rapid Reporter, SBTM), was followed by a testing exercise in which we were encouraged to try taking notes in a way we hadn't used before. (I tried the Cornell method.)

What I particularly look for in meetups is information, inspiration, and the stimulation of ideas. And I wasn't disappointed in this one. Here are some assorted thoughts.

I wonder how much of my note-taking is me and how much is me in my context?
  • ... and how much I would change were I to move somewhere else, or do a different job at Linguamatics
  • ... given that I already know that I have evolved note-taking to suit particular tasks over time
  • ... further, I already know that I use different note-taking approaches in different contexts. But why? Can I explore that more deeply?

Is this blog post notes?
  • ... what is a note?
  • ... perhaps this is an article? It doesn't feel like a formal report, although perhaps it could turn into one
  • ... but it's more than simple aides-mémoire
  • ... but it's not exactly full sentences 
  • ... but it started as notes. Then I iterated on them and they become a draft, of sorts
  • ... but how? Why? According to who?
  • ... and when do notes turn into something else?
  • ... and when should notes turn into something else?

By writing up my notes for this post I have remembered other things that aren't in my notes
  • ... and thought things that I didn't think at the time
  • ... and, a week later, after discussing the evening with Karo, I've had more thoughts (and taken notes of them)

I showed my notes from CEWT 3 to one of the other participants at the event
  • ... and I realised that my written notes are very wordy compared to others'
  • ... and that I layer on top of them with emphasis, connections, sub-thoughts, new ideas etc

What axes of comparison make sense when considering alternative note-taking techniques?
  • ... what do they give over pen and paper? (which scores on ubiquity and familiarity and flexibility)
  • ... what do they give over a simple use of words? (perhaps transcription of "everything" is a baseline?)
  • ... what about shorthand? (is simple compression a form of note taking?)
  • ... is voice a medium for notes? Some people use voice recorders
  • ... sketchnoting is richer in some respects, but more time-consuming

What advantages might there be of constraining note-taking?
  • ... Rapid Reporter appears to be a line-by-line tool, with no editing of earlier material
  • ... the tooling around SBTM enforces a very strict syntax
  • ... the concentration on structure over text of mind maps

How might contextual factors affect note-taking?
  • ... writing on graph paper vs lined paper vs plain paper; coloured vs white
  • ... one pen vs many different pens; different colour pens
  • ... a blank page vs a divided page (e.g. Cornell)
  • ... a blank page vs a page populated with e.g. Venn diagram, hierarchical structure, shapes, pie charts
  • ... scrap paper vs a Moleskine
  • ... pencil vs fountain pen vs crayon vs biro

Time allocation during note-taking
  • ... what kinds of techniques/advice are there for deciding how to apportion time to note-taking vs listening/observing?
  • ... are different kinds of notes appropriate when listening to a talk vs watching an event vs interacting with something (I do those differently)

What makes sense to put into notes?
  • ... verbatim quotes?
  • ... feelings?
  • ... questions?
  • ... suggestions?
  • ... connections?
  • ... emotions?
  • ... notes about the notes?
  • ...
  • ... what doesn't make sense, if anything? Could it ever make sense?

I am especially inspired to see whether I can distil any conventions from my own note-taking. I have particular contexts in which I make notes on paper - meetups are one - and those where I make notes straight onto the computer - 1-1 with my team, for instance, but also when testing. I make notes differently on the computer in those two scenarios.

I have written before about how I favour plain text for note-taking on the computer and I have established conventions that suit me for that. I wonder whether any conventions are present in more than one of the approaches that I use?

Good thought, I'll just note that down.

The Anatomy of a Definition of Testing

Sun, 11/13/2016 - 07:43

At CEWT 3 I offered a definition of testing up for discussion. This is it:
Testing is the pursuit of actual or potential incongruity

As I said there, I was trying to capture something of the openness, the expansiveness of what testing is for me: there is no specific technique; it is not limited to the software; it doesn't have to be linear; there don't need to be requirements or expectations; the same actions can contribute to multiple paths of investigation at the same time; it can apply at many levels and those levels can be distinct or overlapping in space and time.
And these are a selection of the comments and questions that it prompted before, during and after the event, loosely grouped:

Helicopter view
  • it is sufficiently open that people could buy into it, and read into it, particularly non-testers.
  • it's accurate and to the point.
  • it has the feel of Weinberg's definition of a problem. 
  • it sounds profound but I'm not sure whether there is any depth.
  • it seems very close to the regular notion of targeting information/unknowns.
  • can not testing be part of this idea of testing?
  • how does the notion of tacit testing (from CEWT 3 discussion) fit in?
  • Kaner talks about balancing freedom and responsibility in testing. Is that covered here?
  • the definition doesn't talk about risk.
Practical utility
  • it couldn't be used to help someone new to testing decide what to do when testing.
  • I could imagine putting this onto a sticky and trying to align my actions with it.
  • what do you mean by pursuit?
  • incongruity is too complex a word.
  • what other words could replace testing in the definition and it still hold?
  • when I see "or" I wonder about whether it's exclusive (in the Boolean sense).

In this post I'm going to talk about just the words. I spent a good deal of time choosing my words - and that in itself is a warning sign. If I have to graft to find words whose senses are subtly tuned to achieve just the interpretation that I want, then I worry that others will easily have a different interpretation.

And, despite this being a definition of testing for me, it's interesting to observe how often I appeal to my feelings and desires in the description below. Could the degree of personal investment compromise the possibility of it having general appeal or utility, I wonder.

"pursuit"

Other definitions use words like search, explore, evaluate, investigate, find out, ... I was particularly keen to find a verb that captured two aspects of testing for me: finding out what is there, and digging into what has been found.

What I like about pursuit is that it permits (at least to me) both, and additionally conveys a sense of chasing something which might be elusive, itinerant, latent or otherwise hard to find. Oxford Dictionaries has these definitions, amongst others, of pursue:
  • follow or chase (someone or something)
  • continue to investigate or explore (an idea or argument)

These map onto my two needs in ways that other verbs don't:
  • search: feels more about the former and less about the latter.
  • investigate: feels more natural when there's a thing to investigate.
  • explore: could possibly do duty for me (and it's popular in testing definitions) but exploratory testing can be perceived as cutting out other kinds of testing and I don't want that interpretation.
  • evaluate: needs data; pursuit can gather data.
  • find out: feels like it has finality in it. To reflect the fact that testing is unlikely to be complete I'd want to say something like "Testing is the attempt to find out about actual or potential incongruity"

"incongruity"

As one of the criticisms of my definition points out, this word is not part of most people's standard lexicon. Oxford Dictionaries says that it means this:

Not in harmony or keeping with the surroundings or other aspects of something.

I like it because it permits nuance in the degree to which something needs to be out of place: it could be completely wrong, or just feel a bit odd in its context. But the price I pay for the nuance is the lack of common currency. On balance I accepted this in order to keep the definition short.

"actual or potential"

I felt unhappy with a definition that didn't include this, such as:

Testing is the pursuit of incongruity

because I wanted testing's possibilities to include suggesting that there might be a problem. If the definition of incongruity I am using permitted possible disharmony then I'd have been happier with this shorter variant.

I have subsequently realised that I am, to some extent, reflecting a testing/checking distinction here too: a check with fixed expectations can find actual incongruity while testing could in addition find potential incongruity.

However, the entire definition is, for me, in the context of the relative rule - so any incongruities of any kind are tied to a context, person, time - and also the need to govern the actions in the pursuit by some notions of what is important to the people who are important to whatever is being tested.

But, even given that, I still find it hard to accept the definition without potential. Perhaps because it flags the lack of certainty inherent in much testing.

Edit: Olekssii Burdin wrote his own definition of testing after reading this.


Sat, 11/12/2016 - 10:55

CEWT is the Cambridge Exploratory Workshop on Testing, a peer discussion event on ideas in and around software testing. The third CEWT, held a week or so ago, had the topic Why do we Test, and What is Testing Anyway?  With six speakers and 12 participants in total, there was scope for a variety of viewpoints and perspectives to be voiced - and we heard them - but I'll pull out just three particular themes in my reflection on the event.

Who

Lee Hawkins viewed testing through the eyes of different players in the wider software development industry, and suggested aspects of what testing could be to them. For tools vendors or commercial conference organisers, testing is an activity from which money can be made; for financial officers, testing is an expense, something to be balanced against its return and other costs; for some managers and developers and even testers, testing is something to be automated and forgotten.

James Coombes also considered a range of actors, but he was reporting on how each of them - at his work - contribute to an overall testing effort: the developer, tester, security expert, technical author, support operator, integration tester, manager and customer. Each person's primary contribution in this approach is their expertise, their domain knowledge, their different emphasis, their different lenses through which to view the product.

In discussion, we noted that the co-ordination of this kind of activity is non-trivial and, to some extent, unofficial and outside of standard process. Personal relationships and the particular individuals concerned can easily make or break it.

There was also some debate about whether customers are testing a product when they use it. It's certainly the case that they may find issues, but should we regard testing as inherent in "normal use"? Does testing require intent on the part of the "tester"?

Why

Karo Stoltzenburg focussed on an individual tester's reasons for testing and concluded that she tests because it makes her happy. Her talk was a description of the kinds of testing she'd done, of herself, to arrive at this understanding and then to try to see whether her own experience can be generalised, and to who. She suggested that we, as testers, sell our craft short and called on us to tell others what a great job it is!

One particularly motivating slide gave a selection of responses from her work colleagues to the question "why do you test?", which included: it's like a puzzle, variety of challenges, a proper outlet for my inner pedant, it's fun. Lee's talk also included a set of people who found testing fun and he characterised them as people like us, people who love the craft of software testing.

Later, in his blog post about this event, Lee described the CEWT participants as "a passionate group of testers". There was an interesting conversation thread in which we asked why we were there, doing what we were doing, and what we'd do with whatever we got from it.

Why? is a powerful question. On a personal level, I enjoy talking about testing, about its practical aspects and in the abstract. I like being exposed to ideas from other practitioners (which is not to say that I get ideas only from other practitioners) and I like to get other perspectives on my own ideas.

And, of course, understanding our own motivations is interesting, but I think the conversations in the event rarely got very deeply into the motivations of other stakeholders who ask for, or just expect, testing. We did discuss questions such as "if the testing is being done by others, why do we need testers?" and again wondered whether what others were doing was testing, or contributing to testing, or both. Harry Collins' The Shape of Actions has things to say about the way in which activities can be broken down and viewed differently at different resolutions.

But to return to Karo's challenge to us: does an event like CEWT essentially have the choir singing to itself? We know why we like testing and we know that we implicitly value testing, because we do it and because we gave up a Sunday to talk about it. But we're self-selecting. The event can help us to support each other and improve ourselves and the work we do, but can it change others' views of testing? Should it? Why?

What

Michael Ambrose described a project concerned with increasing the software development skills of his test team. The aim was to write code in order to reduce the manual effort required to support testing of a wide range of platforms without reducing the coverage (in terms of the platform set).

Naturally, this raises questions such as: is the new software doing testing? is it replacing testing (and if so to what extent?) or augmenting testing? is it extending testing scope (e.g. by freeing testers to take on work that currently isn't being done at all)? what dimensions of coverage might be increased, or reduced? how can its success be evaluated?

We talked a little during the day about the tacit testing that goes on outside of that which was planned or expected or intended by a tester: those things that a human will spot "in passing" that an algorithm would not even consider.

Does that tacit investigation fit into the testing/checking distinction? If so, it's surely in testing. But, again, how important is intent in testing activity? Is it sufficient to set out with the intent to test, and then anything done during that activity is testing?  What kind of stuff that happens in a tester's day might not be regarded as testing? One participant gave an example of projects on which 85% of effort was spent covering their arse.

In his talk, Aleksandar Simic presented a diary of two days in his role as a tester, and categorised what he did in various ways. These categorisations intriguingly included "obsession" which described how he didn't want to let an issue go, and how he spent his own time building background knowledge to help him at work. He talked about how he is keeping a daily diary of work done and his feelings about that work, and looking for patterns across them to help him to improve himself.

This is challenging. It's easy to mislead and be misled by ourselves. Seeing through our own rose-tinted spectacles involves being prepared to accept that they exist and need to be at least cleaned if not removed.

But is that kind of sense-making, data gathering and analysis a testing activity? I would like to regard it as such. In my own talk I explained how I had explored a definition of testing from Explore It! and also explored my reaction to it, and how these processes - and others - ran in parallel, and overlapped with, and impacted on each other. I rejected testing as simply linear and tried to find my own definition that encompasses the gloriously messy activity that it can be (for me).

One comment on my definition - which inverted a concern that I have about it - was that it is sufficiently open that people could buy into it, and read into it, particularly non-testers. This touches again on the topic of taking testing out of the testing bubble.

There was some thought that distinctions like testing vs checking - which have not been universally welcomed in the wider testing community, with some regarding them as mere navel-gazing and semantics - are useful as a tool for conversations with non-testers. An example might be explaining why a unit test suite, valuable as it might be, need not be all that testing is.

Perhaps that's a useful finding for us: that we can get value from events like these by going away and being open to talking about testing, explaining it, justifying it, in ways that other parties can engage with. By doing that we might (a) spread the word about testing,  (b) understand what others want and need from it, and (c) have fun.

I am intellectually buoyed by events like this, and also not a little proud to see something I created providing a forum for, and returning pleasure and value to, others. CEWT 4 anyone?
Categories: Blogs

Testing All the Way Down, and Other Directions

Mon, 11/07/2016 - 07:03

This is a prettied-up version of the notes I based my CEWT #3 talk on.

Explore It! by Elisabeth Hendrickson is a classic book on exploratory testing that we read - and enjoyed - in the Test Team book club at Linguamatics a few months ago. Intriguingly, to me, although the core focus of the book is exploration, I found myself over and again drawn back to a definition given early on (p.6):
Tested = Checked + Explored
where, to elaborate (p.5):
Checking [is testing] that you design in advance to check that the implementation behaves as intended under supported configurations and conditions.
Exploratory Testing [is] simultaneously designing and executing tests to learn about the system, using your insights from the last experiment to inform the next.
And both of these aspects are necessary for testing to have been performed (p.4-5):
 ... you need a test strategy that answers two core questions:
 1. Does the software behave as intended under the conditions it’s supposed to be able to handle?
 2. Are there any other risks?
Which doesn't sound particularly controversial, does it, at first flush, that testing should involve checking against what we know and exploring for other risks? So why did I find myself worrying away at it? Here are a few thoughts:

[1] The definition of testing is cast in terms of a mathematical formula. So I could wonder what ranges of values those variables could take, and their types, and what units (if any) they are measured in, and what kind of algebra is being used.

Perhaps it's something like numerical addition, and so Checked and Explored represent some value associated with their corresponding activity - bug counts or coverage or something; or maybe Checked and Explored are more akin to sets or multisets and I should interpret "+" as a kind of union operator; or, alternatively, perhaps Checked and Explored are simply Boolean values and "+" is something like an AND operation, which makes Tested true only if both Checked and Explored are true.

[2] I wonder whether Checked and Explored can overlap. Can one kind of action qualify as both checking and exploration? Can the same instance of an action qualify as both?

[3] Choosing the past tense to express a definition of Tested appears to wrap up two things: testing and the completion of testing.

[4] My intuition about what I'm prepared to regard as testing is itself tested by a statement like (p.5):
Neither checking nor exploring is sufficient on its own.
For instance, I can imagine circumstances in which I'd accept that the testing for some project would consist only of "checking" via unit tests. And I might do that on the basis of a high-level risk analysis of this project vs other projects in some product, timescale, the kind of application, the expected usage, resource availability and so on.

[5] Further, there's also the suggestion that exploratory testing alone is sufficient for testing. In this exchange, Elisabeth is helping a tester to see what their exploration of requirements is (p.121):
"Huh. Sounds like testing," I said. I waited to hear his response ... His face brightened. "Oh!" he exclaimed. "I get it. I hadn’t thought of it that way before. I am testing the requirements ..."
No checking is mentioned although, of course, it's possible to imagine pro-forma constraints that could be checked, for example that any document is in a language understandable by all parties that need to understand it, or that it has a particular nominated format, or that it's not written in invisible ink.

[6] There's another (implicit, non-formal) definition of testing on page 3:
interact with the software or system, observe its actual behavior, and compare that to your expectations
which appears to tie testing to software and software systems (such as firmware, infrastructure, collections of software components, say) unless, perhaps, the "system" here can be interpreted as something much more general, such as processes. And if exploring requirements can be testing then "system" probably is more general. But, if so, what might be meant by observing the "actual behaviour" of a functional specification or a bunch of Given, When, Thens?

[7] There are other well-known binary divisions of testing, which overlap in terminology with Explore It! and with each other: Michael Bolton and James Bach have testing and checking, while Paul Gerrard prefers exploring and testing. There's some discussion of the commonality between Bolton/Bach and Gerrard in this Twitter thread and on Paul's blog, and I wondered how Elisabeth's variant fitted into that kind of space.

"Picky, picky, picky," you may be thinking, "picky, pick, pick, pick ..."

Perhaps. But please don't get the idea that this essay is about pulling apart, or kicking, or rejecting Explore It! on this basis, because it's not. I've got a lot of time for the book and the definitional stuff I've referred to all occurs at the beginning, in the set up, with a light touch, to motivate a description of what the book is and is not about. The depth in the book is - intentionally - in the description and discussion and examples of exploration and how it can be utilised in software development generally and testing specifically.

Which is perhaps a little ironic, because thinking about the definition turned into this essay, which is itself an exploration of what testing means for me. The starting point was the observation that I kept on returning to my model of the Explore It! view of testing even though the core focus of the book is exploration.

Instinctively, I am happy to regard the thoughts that lead to the kinds of questions I've listed above as testing: I am testing the definition of testing in the book using my testing skills to find and interpret evidence, skills such as: reading, re-reading, skim reading, searching, cataloguing, note-taking, cross-referencing, grouping related terms, looking for consistency and inconsistency within and outside the book, comparing to my own intuition, reflecting on the reaction of my colleagues in the book club when I said that I'd been distracted by the definition, filtering, factoring, staying alert, experimenting, evaluating, scepticism, critical thinking, lateral thinking, ...

I feel like I was performing actions which are at least consistent with testing activities:
  • I criticised the definition,
  • I challenged my model of the definition,
  • I analysed Elisabeth’s answers,
  • I reflected on the way I asked questions,
  • I wondered at why I cared about this,
  • I sought justification for all the above.

But I felt that the definition I was thinking about wouldn't classify what I was doing as testing.

Amongst the perennial testing problems (and Explore It!, as you would expect, talks about many of them) are these: which direction to follow and when to stop. In my case I decided, after my initial analysis, that I wanted to continue and that I'd do so by contacting Elisabeth herself and asking her questions about the definitions, her use of them in the book and her views on them today. And she graciously agreed to that.

With additional evidence from our conversation I was able to filter out some of the uncertainties I had and to refine my model of the system under test, if you will. I could then ask further questions and refine my model still further. I could then switch and ask questions of my model: test the model. Perhaps I think that some section of the model is underdeveloped because it doesn't stand up to questioning, then I could, and did, go back to the book and re-read sections of it, again adding data to my model. And I could correlate data from any of them, I could choose what approach to take next based on what I'd just discovered, and so on.

These things are all completely analogous (for me) to the exploration of some application when software testing. And I can take exactly this kind of approach when I'm reading requirements, when talking to the product owner, when reviewing proposals for new internal systems, when looking at my own ideas about how we might organise our team, when I'm thinking about talks or blog posts that I'm writing, ... More or less anything can have this kind of exploratory analysis applied to it, I think.

It can even, to take it to another - meta - level, be applied to the analysis itself: testing the testing. For example, when I was talking to Elisabeth I wanted to review what I said, how Elisabeth responded, how I interpreted her responses and so on, to understand whether:
  • I felt like I was getting my point across clearly; and, if not, then whether I could find another way such as giving examples or reframing or using different words.
  • I could see that the answers helped to resolve some point of uncertainty for me; and, if not, wondering why not: perhaps it was my setup (the context in which I placed the question) or some misapprehension in my model.
  • I was refining my model with any given experiment; and, if not, then ask whether the line of investigation is valid or worthwhile.
  • ...

Further, when I was testing the definition of testing in Explore It! I was curious about why I cared, and so tried to understand that. How? By testing it!  A couple of questions recurred and required strong answers from me:
  • What value do I see in this exercise, and who for?
  • Am I reading too much into the definitions, building ideas on something that isn't intended to bear deep analysis?

And the techniques that I choose to use for these kinds of analyses are themselves testing techniques such as questioning, review, idea generation, comparison, critical thinking, and so on. Not everything that I am applying my testing to necessarily exhibits "behaviour" (as in Elisabeth's second definition) but it can yield information in response to experiments against it.

At some point, I cast around for other views of testing. It's not uncommon to view testing as a recursive activity. In his keynote at EuroSTAR 2015 Rikard Edgren said this beautiful thing:
Testing is simple: you understand what is important and then you test it.
Adam Knight has a couple of really nice blog posts on fractal exploratory testing, and presented a talk on it at Linguamatics recently too. He argues that, in exploratory testing, each exploration uncovers points which can themselves be explored, and so on, down and down and down, with the same techniques applicable in the same kinds of ways at each step:
as each flaw ... is discovered ... [a] mini exploration will result in a more targeted testing exploration around this feature area
I enjoy this insight. And I have a lot of sympathy for that view of a possible traversal of a testing space. I feel like I follow that pattern while I'm testing. But I also feel that that kind of self-similar structure applies in other ways than simply increasing resolution at each step. For me, testing can be done across, and around, and inside and outside, and above and below, and at meta levels of a system.
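The fractal shape of that idea can be caricatured as a recursion. This toy sketch is entirely my own (the function and the map of leads are hypothetical, not Adam's); it only shows the structure, each lead found becoming the seat of its own, more targeted, exploration:

```python
# A toy caricature of fractal exploration: every lead uncovered becomes
# its own exploration, recursively. All names here are hypothetical.
def explore(area, find_leads, depth=0, max_depth=5):
    """Return (depth, lead) pairs found by recursively exploring each lead."""
    if depth > max_depth:  # a stopping heuristic stands in for tester judgement
        return []
    findings = []
    for lead in find_leads(area):
        findings.append((depth, lead))
        # Each lead is explored with the same technique, one level deeper.
        findings.extend(explore(lead, find_leads, depth + 1, max_depth))
    return findings

# A made-up map of which leads each area suggests.
leads = {"login": ["timeout", "unicode"], "timeout": ["retry"]}
result = explore("login", lambda a: leads.get(a, []))
# result: [(0, 'timeout'), (1, 'retry'), (0, 'unicode')]
```

The recursion captures the "down and down" direction; the point of the essay is that real testing also moves sideways, upwards and to meta levels, which a single recursive descent doesn't model.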

Which you might be happy enough to accept. But I also think that these different dimensions of testing can be taking place in different planes, looking in different directions, seeking different goals, and often many of these at the same time. Imagine this scenario, where I've been asked to test some product feature or other, and I have access to the product owner (PO):
  • While I am putting my questions to the PO, and getting her answers, I am interpreting her response and feeding that data to my model of the system we're testing, and so asking questions of the model.
  • But I'm also wondering whether I could have got more helpful answers to my questions if I'd phrased them a different way and evaluating whether or not I should risk upsetting her now by re-asking, or wait for another opportunity to practise a different question format. 
  • I take a high-level view to find out what the stakeholders want from the testing I'm doing, which makes me question whether what I've done returns that value, which in turn makes me question what I'm doing.
  • I want to find out which stakeholders have useful information about which aspects of the feature. By talking to them I begin to understand where they are reliable, their degrees of uncertainty, the clarity of their vision. While I'm doing this I'm testing their ability to express their opinion, and I'm feeding that into my model by adding uncertainties.
  • At the same time, I'm running an ad hoc experiment against the system based on the PO's data and noticing out of the corner of my eye that some of the text on the dialog we need to use is misaligned, and I recall that there have been similar examples in the past on other dialogs and so I shift my model of that problem into focus.
  • As I start thinking about it, I check myself and realise that I've missed what the PO is saying. 
  • I review that decision and curse myself.
  • And then I observe something that seems at odds with what the PO is saying. It could be that the software is wrong, or it could be that my model of the PO's view of the world is wrong, or the PO's view of the world or the PO's expression of their view of the world, or something else. I frame another experiment - perhaps more questions - to try to isolate whether and where there's an issue. 

And so on and so on and so on. Sometimes multiple activities feed into another. Sometimes one activity feeds into multiple others. Activities can run in parallel, overlap, be serial. A single activity can have multiple intended or accidental outcomes, ... By the definition in Explore It! as I interpret it, only some parts of this are testing. But, for me, it's just testing: all the way down, and the other directions.

I started off by being curious about a definition and then about my own curiosity and then about the value of either of those things. That led to some interesting thoughts and a very enjoyable exchange and some introspection, and indeed to this essay. But, when analysing something, a natural question to ask can be: well, what are the alternatives?

It might not always be regarded as within a tester's remit to come up with alternatives but, as a testing tool, finding or generating alternatives is very useful. Perhaps unsurprisingly there are numerous alternatives available and Arborosa's blog post What is Testing? lists many. Taking inspiration from Michael Bolton's training session at Linguamatics I tried to create one that reflected testing for me, and this is what I came up with:
Testing is the pursuit of actual or potential incongruity.
There is no specific technique; it is not limited to the software; it doesn't have to be linear; there don't need to be requirements or expectations; the same actions can contribute to multiple paths of investigation at the same time; it can apply at many levels and those levels can be distinct or overlapping in space and time.

That's my idea so far. Feel free to test it.

Particular thanks to Elisabeth Hendrickson for being open to my questions.

Edit: I later wrote some more about how I arrived at the specific wording in The Anatomy of a Definition of Testing.

Image: You Are Not So Smart

Categories: Blogs

Testical Debt

Fri, 11/04/2016 - 09:39
So the other day, while listening to Testing in the Pub with Keith Klain, as it happens, a thought that made me chuckle popped into my head. And when, an hour or two later, I was still chuckling, I tweeted it.
Testical Debt: the #testing that is prioritised out of a cycle and then later kicks you in the ... well, later really hurts you.— James Thomas (@qahiccupps) October 25, 2016
The tweet format can be a sweet format because the detail is left to the reader's imagination. But I wanted to add a couple of notes.

The term is a pun on technical debt:
In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice.
And, while the tweet is funny because we've all felt that kind of pain due to that kind of debt in our testing effort at some point (and because testicles are just inherently funny), it fails to acknowledge an extremely important point.

Sure, we have to prioritise testing work, like pretty much all other work. Sometimes the work we don't do turns out to have been important. Sometimes we suffer because of that. Often this is because the assumptions we held when prioritising were incorrect, or circumstances have changed and they are no longer valid.

But - and this is the important point - sometimes that is also true of the work we did do. Every action we take is a gamble. Sometimes it will pay off, and sometimes it won't, as I somehow found myself telling *cough* Kent Beck recently.
@KentBeck Yes, and everybody takes risks repeatedly. They just mostly aren't aware of that. (Or of what all the outcomes are.)— James Thomas (@qahiccupps) October 19, 2016
Categories: Blogs

Cambridge Lean Coffee

Wed, 10/26/2016 - 22:30
This month's Lean Coffee was hosted by us at Linguamatics. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

How important is exploratory testing?
  • When interviewing tester candidates, many have never heard of it.
  • Is exploratory testing a discrete thing? Is it something that you are always doing?
  • For one participant, exploratory testing is done in-house; test cases/regression testing are outsourced to China.
  • Some people are prohibited from doing it by the company they work for.
  • Surely everybody goes outside the test scripts?
  • Is what goes on in an all-hands "bug bash" exploratory testing? 
  • Exploratory testing is testing that only humans can do.

How do you deal with a flaky legacy automation suite?
  • The suite described was complex in terms of coverage and environment and failures in a given run are hard to diagnose as product or infrastructure or test suite issues
  • "Kill it with fire!"
  • Do you know whether it covers important cases? (It does.)
  • Are you getting value for the effort expended? (Yes, so far, in terms of personal understanding of the product and infrastructure.)
  • Flaky suites are not just bad because they fail, and we naturally want the suites to "be green"
  • ... flaky suites are bad because they destroy confidence in the test infrastructure. They have negative value.

What starting strategies do you have for testing?
  • Isn't "now" always the best time to start?
  • But can you think of any scenarios in which "now" is not the best time to start? (We could.)
  • You have to think of the opportunity cost.
  • How well you know the thing under test already can be a factor.
  • You can start researching before there is a product to test.
  • Do you look back over previous test efforts to review whether testing started at an appropriate time or in an appropriate way? (Occasionally. Usually we just move on to the next business priority.)
  • Shift testing as far left as you can, as a general rule
  • ... but in practice most people haven't got very far left of some software being already made.
  • Getting into design meetings can be highly valuable
  • ... because questions about ideas can be more efficient when they provoke change. (Compared to having to change software.)
  • When you question ideas you may need to provide stronger arguments because you have less (or no) tangible evidence
  • ... because there's no product yet.
  • Challenging ideas can shut thinking down. (So use softer approaches: "what might happen if ..." rather than "That will never work if ...")
  • Start testing by looking for the value proposition.
  • Value to who?
  • Value to the customer, but also other stakeholders
  • ... then look to see what risks there might be to that value, and explore them.

Death to Bug Advocacy
  • Andrew wrote a blog, Death to Bug Advocacy, which generated a lot of heat on Twitter this week.
  • The thrust is that testers should not be in the business of aggressively persuading decision makers to take certain decisions and, for him, that is overstepping the mark.
  • Bug advocacy isn't universally considered to be that, however. (See e.g. the BBST Bug Advocacy course.) 
  • Sometimes people in other roles are passionate too
  • ... and two passionate debaters can help to provide perspectives for decision makers.
  • Product owners (and others on the business side) have a different perspective.
  • We've all seen the reverse of Andrew's criticism: a product owner or other key stakeholder prioritising the issue they've just found. ("I found it, so it must be important.")

Categories: Blogs

Making Fünf Myself

Sat, 10/22/2016 - 07:11

The first post on Hiccupps was published five years ago this week. It's called Sign Language and, reading it back now, although I might not write it the same way today, I'm not especially unhappy with it. The closing sentence still feels like a useful heuristic, even if I didn't present it that way at the time:
Your audience is not just the target audience, it's anyone who sees what it is you've done and forms an opinion of you because of it.
I've looked back over the blog on most of its anniversaries, and each time found different value:
  • I Done the Ton: After two years I compared my progress to my initial goals and reflected on how I'd become a tester and test manager 
  • It's the Thought That Counts: After three years I began to realise that the act of blogging was an end in itself, not just a means to an end 
  • My Two Cents: After four years, the value of time series data about myself and the evolution (or lack of evolution) of my thoughts and positions became clearer 

And so what have I observed after five years? Well, by taking the time series data to Excel (see the image at the top), I find that this has been a bumper year in terms of the number of posts I've produced.

I think it's significant that a year ago I attended and spoke at EuroSTAR in Maastricht and came back bursting with ideas. In November 2015 I wrote eight posts, the largest number in any month since November 2011. This year I've achieved that number three times and reached seven posts in a further three months.

But I don't confuse quantity with quality ... very often.

In fact, if I look back over this year's posts I see material that I am ridiculously proud of:
  • Joking With Jerry: Jerry Weinberg - yes, that Jerry Weinberg - asked me to organise a discussion on something that I'd written that he enjoyed. I think Jerry is the person I have been most influenced by as a tester and a manager and it's no exaggeration to say that, while nerve-wracking, it was a labour of love from start to end. 
  • Bug-Free Software? Go For It!: An essay I wrote in preparation for CEWT #2 which, I think, shows a biggering in my capacity to think bigger, and which I like because it reminds me that the Cambridge Exploratory Workshop on Testing is a thing. I set it up. It works. Other people are getting value from it. And we're doing another one in a couple of weeks. 
  • Toujours Testing: This one simply because it is a kind of personal manifesto. 
  • What is What is Professional Testing?: An essay I wrote in preparation for MEWT #5 which, I think, reflects the move I've been making over the years to perform what I might call exploratory retrospection. By this I mean that I will try to test my testing while it is ongoing rather than waiting until afterwards - although, of course, I reserve the right to do that too. What I like about this is that I can and do use the same kinds of tools in both cases. 
  • Tools: Take Your Pick: It's got ideas and tools up the wazoo. From the seed of a thought I had while cleaning the bathroom through the thicket of ideas that came pouring out once I started to scratch away at it. From the practical to the theoretical and back. I found it challenging to arrange the ideas in my head but immensely satisfying to write. 

I'll stop at five, for no other reason than this post is for the fifth birthday. I wouldn't be so crass as to say they're presents for you. But when they pop out, completed, they do sometimes feel like presents for me.
Categories: Blogs

He Said Captain

Fri, 10/21/2016 - 10:41
A few months ago, as I was walking my two daughters to school, one of their classmates gave me the thumbs up and shouted "heeeyyy, Captain!"

Young as the lad was, I congratulated myself that someone had clearly recognised my innate leadership capabilities and felt compelled to verbalise his respect for them, and me. Chest puffed out I strutted across the playground, until one of my daughters pointed out that the t-shirt I was wearing had a Captain America star on the front of it. Doh!

Today, as I was getting dressed, my eldest daughter asked to choose a t-shirt for me to wear, and picked the Captain America one. "Do you remember the time ..." she said, and burst out laughing at my recalled vain stupidity.

Young as my daughter is, her laughter is well-founded and a useful lesson for me. I wear a virtual t-shirt at work, one with Manager written on it. People no doubt afford me respect, or at least deference, because of it. I hope they also afford me respect because of my actions. But from my side it can be hard to tell the difference. So I'll do well to keep any strutting in check.
Categories: Blogs

And Now Repeat

Sat, 10/15/2016 - 06:57

As we were triaging that day's bug reports, the Dev Manager and I, we reached one that I'd filed. After skimming it to remind himself of the contents, the Dev Manager commented "ah yes, here's one of your favourite M.O.s ..."

In this case I'd created a particular flavour of an object by a specific action and then found that I could reapply the action to cause the object to become corrupted. Fortunately for our product, this kind of object is created only rarely and there's little occasion - although there are valid reasons - to do what I did with one.

The Dev Manager carried on "... if you can find a way to connect something that links out back to itself, or to make something that takes input read its own output, or to make something and then try to remake it, or stuff it back into itself ... you will."

Fascinating. It should come as no surprise to find that those with a different perspective to us see different things in us. And, in fact, I was not surprised to find that I use this kind of approach. But once I was aware that others see it as a thing and observe value in it, I could feed that back into our testing consciously.

Connecting my output to my input to my output ...
Categories: Blogs