
Hiccupps - James Thomas

Oh, Kay!

Sat, 12/20/2014 - 08:28

Phil Kay is a stand-up comedian known for his love of live work and improvisation. In a recent interview for The Comedian's Comedian he said some things that resonated with me.

When he's talking about the impression others may have that there are rules of improvisation, I'm thinking about testing:

    There's not a principle that I must avoid things I've done before ... There's plenty of room in the form for doing brand new things [but that's] not the aim, that I must do it brand new.

When he's talking about how he constantly watches for and collects data that he hopes will come in useful later, that will help him to make connections and that will keep his mojo working when he's not on stage, I'm thinking about testing:

    I write notes all the time ... anything interesting that comes to me ... but [the notes] are not the thing. The thing is the fact that I'm watching out for stuff ... like a boxer keeping loose ... on stage I hope they'll all come together.

When he's talking about how not being tied to a prescribed structure opens up possibilities, I'm thinking about testing:

    Allow the best to be a thing that could happen. If you're trying to enforce something, no best can ever happen.

And when he's talking about how it doesn't work sometimes, I'm still thinking about testing:

    The list of traumatic failure gigs is so long ... I accept the risk it'll go wrong.

Looking around for related material I found that James Lyndsay has a workshop on Improvising for Testers, and Damian Synadinos has one specifically on the links between improv comedy and testing, Improv(e) Your Testing! Tips and Tricks from Jester to Tester.
Categories: Blogs

The So in Absolute

Fri, 12/19/2014 - 08:51
In a job interview once, the candidate said to me:

    All software requires regression testing

and I said:

    *All* software requires regression testing?

(I didn't think I could put stress on regression testing as well. It might have sounded like I was shouting.)

The candidate said - after a reasonably lengthy pause - simply:

    Yes.

When reporting something as apparently absolute, I want my testers to caveat, to contextualise, to define the scope of the statement:

    I'm saying X, so long as ...

When presented with an unequivocal, absolute, universal statement, I want my testers to be thinking about the ramifications, to be testing it:

    You're saying X, so what about ...

Well, if I want to stay on the right side of Batman, I want them to do those things so far as it makes sense in their context.

So here's a bit of seasonal fun: what scenarios can you think of where software doesn't require regression testing? Be as creative as you like and stuff them into the comments.


Thu, 12/04/2014 - 11:28

Bob Marshall, in a bunch of his recent tweets on #NoTesting, quoted Philip Crosby, the author of Quality is Free. Here are a couple:

    Why spend all this time finding and fixing and fighting when you could prevent the incident in the first place?

    If managers think testing is the answer to quality, then people will test.

One of Crosby's arguments, as I understand it, runs like this: the quality of a thing is the extent to which it conforms to its requirements. The cost of making something, finding it doesn't conform and then fixing or remaking it is higher than the cost of making it to conform in the first place. So you can have a quality thing for (at worst) no additional cost.

Which can work, as long as you're prepared to consider anything outside conformance to requirements to also be outside of any quality considerations.

In the same #NoTesting Twitter stream Marshall said:

    The problem of testing being seen as the only path to software quality is a very real, longstanding problem.

    People demand testing when they have little or no faith/trust in the dev team. So, test evermore? Or work on the trust issues?

One of Marshall's arguments, as I understand it, runs like this (see e.g. the comments in More NoTesting): the traditional "develop" and "test" roles by their nature cause a separation in the creation and inspection of an implementation. This lengthens any feedback loop. Removing the test role and pushing the inspection back into development tightens it, and this is considered desirable. No test role means no testing, or #NoTesting. (He declines to define testing in No Testing.)

Which can work, as long as you're prepared to consider anything outside verification of implementation to be outside the remit of testing.

Crosby's work was aimed at manufacturing rather than software development while Bob Marshall's work is steeped in software development. Regardless, in both cases, the generalisations stemming from restricted contexts can be provocative to us in our software development world.

And to my mind there's no problem with that, because being prompted to reconsider a position can be useful - see e.g. "Test automation is any use of tools to support testing". We often do things the way we do them simply because that's the way that we've been doing them.

But that doesn't mean that all the things we might (with a broader view) be prepared to think of as quality or testing considerations can be had for nothing or are not useful. In order to meet the needs of those concerned in any software development process we do best to understand them, the motivations for them and so on - see e.g. Weinberg.

To get to the desired (bigger picture) quality involves asking the (bigger picture) questions; that is, testing the customer's assumptions, testing the scope of the intended solution - you can think of many others - and indeed testing the need for any (small picture) testing, on this project, at this time.

Whether this is done by someone designated as a tester or not, it is done by a human and, as Rands said this week, I believe these are humans you want in the building. #GoTesting

Whys After the Event

Tue, 12/02/2014 - 16:56
In the shadow of a failure - and after cleaning the fan - some kind of post-mortem is often requested. The Five Whys is a well-known approach for this kind of analysis and I found myself reading a handful of articles about it recently. I particularly enjoyed the sceptical takes on it.

Ask Me Another

Sat, 11/22/2014 - 07:59
I just wrote a LinkedIn recommendation for one of my team who's leaving Cambridge in the new year. It included this phrase:

    unafraid of the difficult (to ask and often answer!) questions

And he's not the only one. Questions are a tester's stock-in-trade, but what kinds of factors can make them difficult to ask? Here are some starters:
  • the questions are hard to frame because the subject matter is hard to understand
  • the questions have known answers, but none are attractive 
  • the questions don't have any known answers
  • the questions are unlikely to have any answers
  • the questions put the credibility of the questionee at risk
  • the questions put the credibility of the questioner at risk
  • the questions put the credibility of shared beliefs, plans or assumptions at risk
  • the questions challenge someone further up the company hierarchy
  • the questions are in a sensitive area - socially, personally, morally or otherwise
  • the questions are outside the questioner's perceived area of concern or responsibility
  • the questioner fears the answer
  • the questioner fears that the question would reveal some information they would prefer hidden
  • the questioner isn't sure who to ask the question of
  • the questioner can see that others who could ask the question are not asking it
  • the questioner has found that questions of this type are not answered
  • the questioner lacks credibility in the area of the question
  • the questioner lacks confidence in their ability to question this area
  • the questionee is expected not to want to answer the question
  • the questionee is expected not to know the answer
  • the questionee never answers questions
  • the questionee responds negatively to questions (and the questioner)
  • the questionee is likely to interpret the question as implied criticism or a lack of knowledge
Some of these - or their analogues - are also reasons for a question being difficult to answer, but here are a few more in that direction*:
  • the answer will not satisfy the questioner, or someone they care about
  • the answer is known but cannot be given
  • the answer is known to be incorrect or deliberately misleading
  • the answer is unknown
  • the answer is unknown but some answer is required
  • the answer is clearly insufficient
  • the answer would expose something that the questionee would prefer hidden
  • the answer to a related question could expose something the questionee would prefer hidden
  • the questioner is difficult to satisfy
  • the questionee doesn't understand the question
  • the questionee doesn't understand the relevance of the question
  • the questionee doesn't recognise that there is a question to answer
Much as I could often do without them - they're hard! - I welcome and credit difficult questions. 

Because they'll make me think, suggest that I might reconsider, force me to understand what my point of view on something actually is. Because they expose contradictions and vagueness, throw light onto dark corners, open up new possibilities by suggesting that there may be answers other than those already thought of, or those that have been arrived at by not thinking.

Because they can start a dialog in an important place, one which is the crux of a problem or a symptom or a ramification of it.

Because the difficult questions are often the improving questions: maybe the thing being asked about is changed for the better as a result of the question, or our view of the thing becomes more nuanced or increased in resolution, or broader, or our knowledge about our knowledge of the thing becomes clearer.

And even though the answers are often difficult, I do my best to give them in as full, honest and timely a fashion as I can because I think that an environment where those questions can be asked safely and will be answered respectfully is one that is conducive to good work.

* And we haven't taken into account the questions that aren't asked because they are hard to know, the answers that are hard purely because of the effort required to discover them, how differences in context can change how questions are asked or answered, how the same question can be asked in different ways, willful blindness, plausible deniability, behavioural models such as the Satir Interaction Model, and so on.

Thanks to Josh Raine for his comments on an earlier draft of this post.

Testing Testing

Tue, 11/11/2014 - 08:36
Metascience, according to this article in Nature, is "the science of science ... It has its roots in the philosophy of science and the study of scientific methods" with a primary focus being the study of the reproducibility of experimental findings.

The article points out "the decline effect, an idea ... that the size of an effect decreases over repeated replications," acknowledges experimenter expectancy effects and the power of double-blinding and expects that "self-examination can only strengthen the scientific process for all."

And when we try something new in our testing, we subject that thing to testing, don't we? Don't we?