
Hiccupps - James Thomas
James Thomas

Getting Your Back Up

Sun, 01/31/2016 - 12:29
One of the effects of being asked to explain yourself can be that you get to find out what you really think.

This has potentially many outcomes: sometimes your view might turn out to be a surprise even to you; sometimes you realise that you don't have a justification for what you said at all; sometimes you are reminded that you established your perspective in another context or another time and it would be sensible to revisit it (as I did the other day in Me and My Bestimates).

Another possibility is that the challenge surfaces an opinion that you realise you are comfortable with but have previously held only implicitly. That happened to me most recently a few weeks ago. I was talking to one of my team about some actions that had been taken and also about those that had not but, I suggested, perhaps could or should have been without needing to wait to consult me.

Understandably, given the particular situation, I was asked about the potential consequences of taking actions without approval if they turn out to be the wrong actions.

And that was when my internal position was revealed: I said as long as they had taken a reasonable decision in good faith based on available evidence and appropriate effort, I would back them up, even if it proved to be the wrong decision. And so we both learned something.

There are all sorts of reasons that we might not enjoy being questioned, but none of them outweigh the benefits. Which is why I continue to encourage challenges in and from my teams.
Categories: Blogs

Just the Fracts, Ma'am

Fri, 01/22/2016 - 08:39

Adam Knight spoke at the Cambridge Tester Meetup last night on Fractal Exploratory Testing, a topic he's blogged about a couple of times:

Fractals can be roughly defined as having similar properties whatever level of magnification you apply to them. The Mandelbrot Set is a famously fractal shape and zooming into it exposes characteristics that make each image recognisably from the same family. Going 10x or 100x into some other image, say a photograph of my head, would not have the same effect.

There's an analogy to be made with Exploratory Testing - in fact, with exploration of any kind - and this is reinforced by Adam's choosing to cast exploration in terms of charters written in a concise but formal way inspired by Elizabeth Hendrickson along the lines of "Explore <area> with <resources> to achieve <aim>".

Each exploration uses appropriate testing approaches to attempt to achieve its aim, and sometimes succeeds. Along the way, though, it might expose another area of interest; or fail because it instead finds something else; or be blocked for some reason; or an assumption about the mission might prove false, invalidating the charter; or ...

Each of these outcomes can themselves pose new questions, which can in turn inspire new charters, new explorations which will look just the same as the mission which spawned them in all relevant details: they will have a charter in the same format and the same kinds of testing techniques can be used to execute them.
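This recursive structure can be sketched in code. The sketch below is purely illustrative (it is not Adam's tooling); the `Charter` class, `explore` function and the example missions are all hypothetical names invented here, modelled on Hendrickson's charter format:

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """A charter in the 'Explore <area> with <resources> to achieve <aim>' form."""
    area: str
    resources: str
    aim: str
    # Follow-up charters spawned by findings during this exploration.
    # Each has exactly the same shape as its parent: the fractal point.
    follow_ups: list = field(default_factory=list)

    def __str__(self):
        return f"Explore {self.area} with {self.resources} to achieve {self.aim}"

def explore(charter, findings):
    """Turn each (area, aim) finding from an exploration into a new charter
    of the same form, reusing the parent's resources."""
    for area, aim in findings:
        charter.follow_ups.append(Charter(area, charter.resources, aim))
    return charter.follow_ups

root = Charter("the import dialog", "malformed CSV files", "check error reporting")
children = explore(root, [("the error log", "check truncated lines are reported")])
print(children[0])  # a child mission reads just like its parent
```

Zooming into any follow-up charter gives an object of the same kind, executable with the same techniques, just as magnifying a fractal yields the same family of shapes.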

In a fractal, you can magnify any part to any degree. It's a mathematical paradox that coastlines tend to infinite length: the greater the level of magnification, the greater the possible resolution of the ruler, the more small deviations that can be observed and measured. And which of us hasn't from time to time got so engrossed in a testing task that we've burned through hours of investigation focusing on increasingly detailed analysis of some aspect of a product and still thought that there was more we could do?
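The coastline effect is easy to demonstrate with the Koch snowflake, where each refinement replaces every edge with four edges a third as long, so the measured perimeter grows without bound. A minimal sketch (the function name is my own):

```python
def koch_perimeter(side, iterations):
    """Perimeter of a Koch snowflake built on an equilateral triangle.

    Each iteration swaps every edge for 4 edges, each 1/3 the length,
    multiplying the perimeter by 4/3: a finer 'ruler' always finds
    more length to measure, with no limit.
    """
    return 3 * side * (4 / 3) ** iterations

for i in (0, 1, 5, 20):
    print(i, koch_perimeter(1, i))
```

The enclosed area converges, but the perimeter does not: more magnification, more length, just as more focus on one aspect of a product keeps revealing more that could be tested.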

Adam's insight in this talk wasn't to do with exploratory testing, nor even how thinking about fractals can help a tester in their testing missions, particularly, but much more about how describing testing in this recursive way can help to explain why, for example:
  • on a project with 10 requirements, there aren't merely 10 test cases to be executed before the product is shipped
  • estimation of testing time "required" is not necessarily a simple calculation
  • focus on different areas of the system under test might differ radically, depending on what exploration in those areas found

He talked about how he has used fractals to explain testing to non-testers and particularly to non-testers on the business side of the company. They might not "get" testing but they can understand a picture of it which shows that successive rounds of investigation are defining the differences between the specification and the product that was delivered. The level of investigation in a particular area can increase the resolution with which the size and shape of that area is understood.

Decisions from the business, based on what's known at any point, can then be seen to be guiding further testing into a new area or choosing the magnification of some existing area that is most important to get the information that will motivate the next round of decisions. The alert members of the business side might then themselves see that they have become engaged in a fractal process too.

Me and My Bestimates

Thu, 01/14/2016 - 23:09

As Test Manager I fit my team's work into multiple overlapping project schedules which are not under my control. The schedules have, as you might expect, multiple constraints that operate simultaneously, such as:
  • end dates or other milestones
  • dependencies on other parts of the schedule or other schedules
  • level of effort we are able or prepared to commit to different tasks in absolute terms (e.g. contractually) or in relative terms (e.g. based on perceived risk)
  • methodology; different teams in Linguamatics operate different development methodologies, and we work with non-development teams too
  • desired quality level (whatever that means in any given case)
Scheduling in this environment is a challenge even without the wildcard that is the unknowns: the stuff you find out as you go and the contexts that change under your feet.

To do my best to provide the service my (most often internal) clients want - which generally includes some kind of estimate, even if there is little at times to base it on or anchor it to - I think I need a few things, including some model of:
  • the resources that I have or will have available
  • the status of the projects that are ongoing
  • the relative importance of projects to one another
To best provide a framework for my team I think I need a few more things, including:
  • transparency of decision-making
  • clarity of current priorities
  • visibility of actions aimed at improvement
In an attempt to achieve these ends (amongst others) I maintain a spreadsheet that records pieces of work over a certain granularity. The definition of granularity isn't very scientific: if a project feels "worth tracking" then it gets in; if a tester feels they'd like it tracked then it gets in; if it has the feeling of something that might blow up then it gets in. I update and publish this to an internally public location every week, with a short summary of significant changes and various charts showing how we're spending time in various dimensions which I use to analyse how we're working and what changes we might experiment with. (I am a practitioner of and believer in Open Notebook Testing and openness and responsiveness generally.)

For most projects that we decide to track, the tester or testers on a project, and/or I, will pick some initial budget. (There are some exceptions which I'll touch on later.) The budget is based on all sorts of factors, such as those mentioned above and those summarised by George Dinwiddie in Estimating in Comparison to Past Experience, and attempts to take uncertainties into account.

I view the budget as a kind of "top-down" number. It (a) fits into and influences the global scheduling and (b) provides implicit guidance to the testers about the time they have available to test given what we know now. It is part of the project context for them, and should constrain testing in the way that other aspects of that context do.

As work proceeds, testers report the time spent on each project along with their current estimate for the remaining work. I view this as a "bottom-up" number. It (a) invites the tester to think at a level higher than immediate action, to step back, to defocus, to prioritise and (b) provides feedback to me and the bigger scheduling headache.

Some projects are more top-down: for example a piece of work which is set up in a time-box. Some projects are open-ended and bottom-up: for example an investigation into a live support issue. Most projects sit somewhere in between. Projects which are entirely bottom-up will often start with a phase where no explicit budget is set and no estimate is made; instead we'll just record time spent. At some point we'll either reach the end of the investigation, the end of the availability of the tester (in which cases work stops) or some point where we feel we understand enough to set a budget.

Because many of our projects tend to be more schedule-bound, our default position tends to be to assume that project work consumes the budget and then stops and that the tester will reprioritise based on this as necessary. However, frequently something happens on a project that affects the tester's estimate - the feature turns out to be more solid or less complex than we expected; we were able to do more work in unit testing than we anticipated; there's some horrible interaction with another piece of our application that no-one expected. In this case, we will talk about the current budget and the differences between it and the new estimate and what impacts there would be of changing the budget, or not. Some for-instances:
  • the tester discovers that there's some area we didn't think needed testing but actually does and increases their estimate to accommodate it. I might decide, on a schedule basis, that more time is not available, so the budget does not change, and the question becomes one of balancing risk inside the existing budget. Future estimates for this project will need to take this decision into account. 
  • the tester discovers that the implementation is missing some behaviour that the company deems is critical. It will require investigation and retest so they increase their estimate to accommodate that work. I agree that this is important and we increase the budget to provide more time. (And I deal with the global scheduling impact.)
I find that there is more time in the schedule, perhaps because something else got cancelled. I tell a tester who previously wanted time for some piece of work that it is now available and increase the remaining budget for that work.
  • I am told that we need to complete work on some project earlier than previously understood and I reduce the budget on it, prompting a discussion about how best to proceed (with the usual iron triangle to constrain us).
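The interplay of top-down budget and bottom-up reporting described above could be modelled very simply. This is a hypothetical sketch, not the spreadsheet the post describes; the class and method names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class TrackedProject:
    """Top-down budget vs bottom-up reporting for one piece of test work."""
    name: str
    budget: float           # top-down: days allocated, given the schedule
    spent: float = 0.0      # bottom-up: days the tester has reported so far
    remaining: float = 0.0  # bottom-up: the tester's current estimate to finish

    def report(self, spent, remaining):
        """Weekly report: add time spent, replace the estimate to finish."""
        self.spent += spent
        self.remaining = remaining

    def needs_conversation(self, tolerance=0.0):
        """Flag a gap between budget and projection. The flag should open a
        discussion, not trigger an automatic change to either number."""
        projected = self.spent + self.remaining
        return abs(projected - self.budget) > tolerance

p = TrackedProject("import dialog testing", budget=5)
p.report(spent=2, remaining=1.5)
print(p.needs_conversation())  # True: projection (3.5) has drifted from budget (5)
```

The key design choice mirrors the post: the numbers constrain each other but neither automatically overwrites the other; divergence is a prompt to talk.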
What's crucial here is that the discussion can be opened at any point, by either side, should circumstances make that necessary or desirable or even just a sensible precaution. And any negotiation can and does include other stakeholders.

The budget and estimates mutually constrain one another and contribute uncertainties at their different granularities. They mix science and instinct, planning and gut; known and unknown; they're guesstimates. They go well together, they bounce off each other, they influence one another and reflect one another, they're like best mates; they're bestimates you might say, although I never do.

In some sense I think of them as a simple interface between the complexities at the test level and complexities at the management level. Compressed down into a single number they naturally lose information but, when more information doesn't need to be shared (essentially we are on-track relative to the most recent conversation), that is sufficient and indeed efficient. Having a communication channel open for when more needs to be shared is, as I've just said, crucial. And I'll say it again: available channels of communication are absolutely crucial.

Notes

The approach described is a high-level sketch of the process that has evolved over time to try to manage competing constraints and requirements of the kinds that I mentioned at the top. And it continues to evolve to meet new constraints or ideas. If it stops meeting my needs, I'll stop using it.

We use the same spreadsheet and reporting mechanism to record time spent on ongoing team-level tasks, which we then use to project future levels of effort, to identify trends, or to spot that there may be a problem we need to look into.

What I've written here leaves out a lot of detail in and around what we do, including how we decide what work to do in any given case, the existence of peer review (so there are other eyes on the work), the help I get from my team in spotting bugs in what I've done, questioning why I do it - which is how this particular post came about - or suggesting improvements, the importance of trust on both sides, and the importance of testers being empowered and supported in their work.

A Good Knight

Tue, 01/12/2016 - 15:02

I'm delighted to say that Adam Knight will be talking about Fractal Exploratory Testing at the Cambridge Tester Meetup on 21st January at 7pm. Karo is hosting it here at Linguamatics with pizza and drinks.

Whet your appetite with these posts:
And then sign up here.

Edit: My notes from Adam's talk.

A Broken Record

Sat, 01/09/2016 - 11:45

Years ago I chucked a faulty video recorder and bought a cheap and compact PC to use as a PVR. (I run MythTV on Ubuntu, for those interested in such things.) Because me and Mrs Thomas don't watch telly that much, and record less, and because we're interested in not wasting electricity, we only have the box on when we're watching something on it or when we've scheduled something to record.

Of course, sometimes that means that we have to remember to leave it on. And we kept forgetting. But being a problem-solver, and interested in proportionate solutions, I implemented a quick fix. In fact it was more an initial trial, just a simple little sign that we stick next to the telly. It says VIDEO and has served us so well that we found no need for anything more sophisticated.
Until now. Our kids have come along and control the telly, operate the computer and so on. We're helping them to become interested in not wasting electricity too, and so their habit is to turn appliances off when they're done with them.

Do you see where this is going?

The word video means little to them. If it's anything at all it's something they watch on YouTube and nothing to do with recording, although it's not as alien as when I talk about taping something... And so our sign doesn't work any more; the girls just keep turning everything off as we have asked them to. Explaining carefully to them what the sign means, many times, hasn't helped.

Being a problem-solver, and aware that solutions can date and the problems they address can shift, and interested in meta-aspects of problem solving, I took a step back. Was I looking at this in the right way? What was really the problem here today? And whose problem is it?

The answers? Simply: No. The sign. Mine.

And so I've changed the sign. It now says Please don't turn the computer off.

Testing Utility

Sat, 01/02/2016 - 22:43
Testing can take a lot of inspiration from the sciences and the scientific method and I've blogged about some concepts that I think cross over in the past. Here are a few examples:
The science around policy - and the policy around science - is particularly interesting because it mirrors in useful respects the relationship between a tester and a stakeholder. In What makes an academic paper useful for health policy? Christopher Whitty looks at ways that scientists can better serve policy makers, and much of what he's saying is also relevant to testers who want to do their best to:
  • put the most valuable information they can 
  • into the hands of the stakeholders who are asking for it
  • at a time when it's useful
  • at a cost which is acceptable
  • in a manner which is easily consumable
  • and at the right level
  • with caveats and methodology clear
  • and biases minimised.
Which is all testers, I hope.

State Your Business

Thu, 12/31/2015 - 08:58
In their write-up of the State of Testing 2015 survey, the organisers say:
we can say with confidence that demand for the "Thinking Tester" is on the rise, as it appears that today's Industry needs people who are more than just "a tester".

I don't know whether two data points (2013 and 2015) are really enough to give (statistical) confidence in such a rise, but it certainly reflects my own intentions and desires for the team I run.

With that in mind, although the annual snapshots can be interesting, the value of this kind of enterprise is often in the visibility of changes over time and I hope that with another year of data we'll begin to find evidence for some interesting trends.

In general, analyses of this sort become more reliable with larger numbers of participants, so why not help us all to help ourselves and get over to the State of Testing survey for 2016, which launches at the beginning of January.

My Two Cents

Wed, 12/23/2015 - 06:58
This is the 200th post on Hiccupps. At a milestone like this it's common to pause and reflect, and I've done so a couple of times to date. If you are of an historical bent you might try the lengthy hundredth, or, if introspective is your thing then number 150 is perhaps more up your street.

But this one, this double centenary, this one is short and sweet and about ideas.

My blog, I see more and more, is a repository of ideas I've had, and sometimes about aspects of the ideas, meta ideas such as the paths to those ideas, connections between ideas and the way that ideas breed ideas.

I try hard not to knowingly regurgitate other people's ideas unless I am commenting on them, or questioning them, or testing my own against them.

Which doesn't mean that I don't value them. Quite the opposite in fact: I am a fan of ideas, for and by us all; not least because with no ideas there are no good ideas.
Image: Old Book Illustrations

You Meta Watch Out

Tue, 12/22/2015 - 08:00
You are presented with a problem. You are a problem-solver so you suggest solutions and eventually find one that satisfies the problem-poser. Along the way you find out a lot of implicit things that might have been useful to know earlier. But well done anyway. Another satisfied customer!

You are presented with a problem. You are a problem-solver but you know that diving into the detail of potential solutions is only one way to skin a cat (although the problem is rarely about feline furectomies in my experience). So you think about asking questions that help you to understand the problem. You might ask questions that help to constrain your search for solutions to the problem. You might ask questions that help to understand the history of the problem, the needs and intent of the problem-poser, the permitted ways in which a solution can be found, the scope of the solution, the time-frame for the solution, the priority of the solution, the necessity of the solution. You learn about the problem. And then you solve it, or not, as required. Well done you, too!

The kinds of questions just listed are meta questions - questions that assist with an underlying question. I look out for meta questions because they show that the person asking them is, amongst other things, capable of maintaining a view of the problem itself and the way in which the problem is being or might be approached. This kind of person is giving themselves a chance of generalising across problems, of  reducing the solution space, of understanding that the problem need not be addressed or perhaps would be better framed in some other way.

Of course, having the capacity to think of meta questions doesn't tell you how and when and where to ask them to avoid aggravating the solution-seeking poser. That's a different problem.