
Hiccupps - James Thomas

Fail Over

Thu, 08/18/2016 - 22:50
In another happy accident, I ended up with a bunch of podcasts on failure to listen to in the same week. (Success!) Here's a few quotes I particularly enjoyed.

In Failing Gracefully on the BBC World Service, David Mindell from MIT recalls the early days of NASA's Project Apollo:
The engineers said "oh it's going to have two buttons. That's the whole interface. Take Me To The Moon, that's one button, and Take Me Home is the other button" [but] by the time they landed on the moon it was a very rich interactive system ... The ultimate goal of new technology should not be full automation. Rather, the ultimate goal should be complete cooperation with the human: trusted, transparent, collaboration ... we've learned that [full autonomy] is dangerous, it's failure-prone, it's brittle, it's not going to get us to where we need to go.

And NASA has had some high-profile failures. In another episode in the same series of programmes, Faster, Better, Cheaper, presenter Kevin Fong concludes:
In complex systems, failure is inevitable. It needs to be learned from but more importantly it needs to become a conscious part of everything that you do.

Which fits nicely with Richard Cook's paper, How Complex Systems Fail, from which I'll extract this gem:
... all practitioner actions are actually gambles, that is, acts that take place in the face of uncertain outcomes. The degree of uncertainty may change from moment to moment. That practitioner actions are gambles appears clear after accidents; in general, post hoc analysis regards these gambles as poor ones. But the converse: that successful outcomes are also the result of gambles; is not widely appreciated.

In the Ted Radio Hour podcast, Failure is an Option, Astro Teller of X, Google's "moonshot factory", takes Fong's suggestion to heart. His approach is to encourage failure, to deliberately seek out the weak points in any idea and abort when they're discovered:
... I've reframed what I think of as real failure. I think of real failure as the point at which you know what you're working on is the wrong thing to be working on or that you're working on it in the wrong way. You can't call the work up to the moment where you figure it out that you're doing the wrong thing failing. That's called learning.

He elaborates in his full TED talk, When A Project Fails, Should The Workers Get A Bonus?:
If there's an Achilles heel in one of our projects we want to know it right now not way down the road ... Enthusiastic skepticism is not the enemy of boundless optimism. It's optimism's perfect partner.

And that's music to this tester's ears.
Image: Old Book Illustrations
Categories: Blogs

Understanding Testing Understanding

Fri, 08/12/2016 - 07:40
Andrew Morton tweeted at me the other day:
Does being able to make a joke about something show that you understand it? Maybe a question for @qahiccupps
— Andrew Morton (@TestingChef) August 9, 2016

I ran an on-the-spot thought experiment, trying to find a counterexample to the assertion "In order to make a joke about something you have to understand it."

I thought of a few things that I don't pretend to understand, such as special relativity, and tried to make a joke out of one of them. Which I did, and so I think I can safely say this:
@TestingChef Wouldn't have thought so. For example ...

Einstein's law of special relativity says you /can/ have a favourite child.
— James Thomas (@qahiccupps) August 9, 2016

Now this isn't a side-splitting, snot shower-inducing, self-suffocating-with-laughter kind of a joke. But it is a joke and the humour comes from the resolution of the cognitive dissonance that it sets up: the idea that special relativity could have anything to do with special relatives. (As such, for anyone who doesn't know that the two things are unrelated, this joke doesn't work.)

And I think that set up is a key point with respect to Andrew's question. If I want to deliberately set up a joke then I need to be aware of the potential for that dissonance:
@TestingChef To intentionally make a joke, you need to know about some aspect of the thing. (e.g. Special Relativity is not about family)
— James Thomas (@qahiccupps) August 9, 2016

@TestingChef If you're prepared to accept that intention is not required then all bets are off.
— James Thomas (@qahiccupps) August 9, 2016

Reading it back now I'm still comfortable with that initial analysis although I have more thoughts that I intentionally left alone on the Twitter thread. Thoughts like:
  • What do we mean by understand in this context?
  • I don't understand special relativity in depth, but I have an idea about roughly what it is. Does that invalidate my thought experiment?
  • What about the other direction: does understanding something enable you to make a joke about it?
  • What constitutes a joke?
  • Do we mean a joke that makes someone laugh?
  • If so, who?
  • Or is it enough for the author to assert that it's a joke?
  • ...
All things it might be illuminating to pursue at some point. But the thought that I've been coming back to since tweeting that quick reply is this: in my EuroSTAR 2015 talk, Your Testing is a Joke, I made an analogy between joking and testing. So what happens if we recast Andrew's original in terms of testing?

Does being able to test something show that you understand it?

And now the questions start again...
Image: https://flic.kr/p/i6Zqba

Know What?

Tue, 08/02/2016 - 23:07

I regularly listen to the Rationally Speaking podcast hosted by Julia Galef. Last week she talked to James Evans about Meta Knowledge and here's a couple of quotes I particularly enjoyed.
When discussing machine learning approaches to discovering structure in data and how that can change what we learn and how we learn it:
James: In some sense, these automated approaches to analysis also allow us to reveal our biases to ourselves and to some degree, overcome them.

Julia: Interesting. Wouldn't there still be biases built into the way that we set up the algorithms that are mining data?

James: When you have more data, you can have weaker models.

When discussing ambiguity and how it impacts collaboration:
James: I have a recent paper where we explore how ambiguity works across fields ... the more ambiguous the claims ... the more likely it is for people who build on your work to build and engage with others who are also building on your work ... Really important work often ends up being important because it has many interpretations and fuels debates for generations to come ... It certainly appears that there is an integrating benefit of some level of ambiguity.

Image: https://flic.kr/p/cXJ31N

Seven Sees

Sat, 07/30/2016 - 06:23
Here's the column I contributed this month to my company's internal newsletter, Indefinite Articles. (Yeah, you're right, we're a bit geeky and into linguistics. As it happens I wanted to call the thing My Ding-A-Ling but nobody else was having it.) 
When I was asked to write a Seven Things You Didn't Know About ...  article ("any subject would be fine" they said) I didn't know what to write about. As a tester, being in a position of not knowing something is an occupational hazard. In fact, it's pretty much a perpetual state since our work is predominantly about asking questions. And why would we ask questions if we already knew? (Please don't send me answers to this.)
Often, the teams in Linguamatics are asking questions because there's some data we need to obtain. Other times we're asking more open-ended, discovery-generating questions because, say, we're interested in understanding more about why we're doing something, exploring the ramifications of doing something, wondering what might make sense to do next, and you can think of many others I'm sure.
We ask these kinds of questions of others and of ourselves. And plenty of times we will get answers. But I've found that it helps me to remember that the answers - even when delivered in good faith - can be partial, be biased, be flawed, and even be wrong. And, however little I might think it or like it, the same applies to my questions.
We are all subject to any number of biases, susceptible to any number of logical fallacies, influenced by any number of subtle social factors, and are better or worse at expressing the concepts in our heads in ways that the people we're talking to can understand. And so even when you think you know something about something, there's likely to be something you don't know about the something you think you know about that something.
To help with that, here's a list of seven common biases, oversights, logical fallacies and reasoning errors that I've seen and see in action, and have perpetrated myself:
Further reading:
  • Thou Shalt Not Commit Logical Fallacies
  • Mental Models I Find Repeatedly Useful
  • Satir Interaction Model
Image: https://flic.kr/p/9uHWvp

It's Great When You're Negate... Yeah

Thu, 07/28/2016 - 22:54
I'm testing. I can see a potential problem and I have an investigative approach in mind. (Actually, I generally challenge myself to have more than one.) Before I proceed, I'd like to get some confidence that the direction I'm about to take is plausible. Like this:

I have seen the system under test fail. I look in the logs at about the time of the failure. I see an error message that looks interesting.  I could - I could - regard that error message as significant and pursue a line of investigation that assumes it is implicated in the failure I observed.

Or - or -  I could take a second to grep the logs to see whether the error message is, say, occurring frequently and just happens to have occurred coincident with the problem I'm chasing on this occasion.
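That second's worth of grepping can be sketched in a few lines. To be clear, this is only an illustration: the log format, the error text, and the idea of bucketing by hour are all invented here, not taken from any real system under test.

```python
from collections import Counter

# Hypothetical log lines, invented for illustration; in practice these would
# be read from the real log, e.g. with open("system.log").
log_lines = [
    "2016-07-28 20:01:03 ERROR connection reset",
    "2016-07-28 21:09:12 ERROR connection reset",
    "2016-07-28 22:05:47 INFO heartbeat ok",
    "2016-07-28 22:51:30 ERROR connection reset",  # near the observed failure
]

suspect = "ERROR connection reset"

# Bucket occurrences of the suspect message by hour: a message that fires
# every hour is probably background noise; one clustered around the failure
# time is more interesting.
per_hour = Counter(line[:13] for line in log_lines if suspect in line)

print(sum(per_hour.values()))  # total occurrences
print(dict(per_hour))          # distribution over time
```

If the message turns out to occur all through the log, its appearance at the failure time is weak evidence; if it only shows up near the failure, the line of investigation looks more promising.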

And that's what I'll do, I think.

James Lyndsay's excellent paper, A Positive View of Negative Testing, describes one of the aims of negative testing as the "prompt exposure of significant faults". That's what I'm after here. If my assumption is clearly wrong, I want to find out quickly and cheaply.

Checking myself and checking my ideas has saved me much time and grief over the years. Which is not to say I always remember to do it. But I feel great when I do, yeah.
Image: Black Grape (Wikipedia)

A Glass Half Fool

Wed, 07/27/2016 - 07:48
While there's much to dislike about Twitter, one of the things I do enjoy is the cheap and frequent opportunities it provides for happy happenstance.
@noahsussman Only computers?

It's easy to put people in incongruous situations. The art is in not doing it accidentally.
— James Thomas (@qahiccupps) July 27, 2016

Without seeing Noah Sussman's tweet, I wouldn't have had my own thought, a useful thought for me, a handy reminder to myself of what I'm trying to do in my interactions with others, captured in a way I had never considered it before.
Image: https://flic.kr/p/a341bn 

Go, Ape

Thu, 07/21/2016 - 22:44


A couple of years ago I read The One Minute Manager by Ken Blanchard on the recommendation of a tester on my team. As The One Line Reviewer I might write that it's an encouragement to do some generally reasonable things (set clear goals, monitor progress towards them, and provide precise and timely feedback) wrapped up in a parable full of clumsy prose and sprinkled liberally with business aphorisms.

Last week I was lent a copy of The One Minute Manager Meets the Monkey, one of what is clearly a not insubstantial franchise that's grown out of the original book. Unsurprisingly perhaps, given that it is part of a successful series, this book is similar to the first: another shop floor fable, more maxims, some sensible suggestions.

On this occasion, the advice is to do with delegation and, specifically, about managers who pull work to themselves rather than sharing it out. I might summarise the premise as:
  • Managers, while thinking they are servicing their team, may be blocking them.
  • The managerial role is to maximise the ratio of team output to managerial effort.
  • Which means leveraging the team as fully as possible.
  • Which in turn means giving people responsibility for pieces of work.

And I might summarise the advice as:
  • Specify the work to be done as far as is sensible.
  • Make it clear who is doing what, and give work to the team as far as is sensible.
  • Assess risks and find strategies to mitigate them.
  • Review on a schedule commensurate with the risks identified.

And I might describe the underlying conceit as: tasks and problems are monkeys to be passed from one person's back to another. (See Management Time: Who’s Got the Monkey?)  And also as: unnecessary.

So, as before, I liked the book's core message - the advice, to me, is a decent default - but not so much the way it is delivered. And, yes, of course, I should really have had someone read it for me.
Image: Amazon

Iterate to Accumulate

Tue, 07/19/2016 - 06:08
I'm very interested in continual improvement and I experiment to achieve it. This applies to most aspects of my work and life and to the Cambridge Exploratory Workshop on Testing (CEWT) that I founded and now run with Chris George.

After CEWT #1 I solicited opinions, comments and suggestions from the participants and acted on many of them for CEWT #2.

In CEWT #2, in order to provide more opportunity for feedback, we deliberately scheduled some time for reflection on the content, the format and any other aspect of the workshop in the workshop itself. We used a rough-and-ready Stop, Start, Continue format and here's the results, aggregated and slightly edited for consistency:
Start
  • Speaker to present "seed" questions
  • Closing session (Identify common threads, topics; Share our findings more widely)
  • More opposing views (Perhaps set up opposition by inviting talks? Use thinking hats?)
  • Focused practical workshop (small huddles)
Stop
  • 10 talks too many?
  • Whole day event (About an hour or two too long; Make it half a day)
  • Running CEWT on a Sunday
  • Earlyish start
  • Voting for talks (perhaps group familiar ones?)
  • Don’t make [everyone] present
  • Prep for different length talks
Continue
  • CEWT :)
  • Loved it!
  • Great Venue
  • Good location
  • Lunch, logistics
  • One whole day worked well
  • Varied talks
  • Keep to 10 min talks
  • Short talks & long discussions are good
  • This amount of people
  • Informal, local, open
  • Topic discussions
  • Everyone got a chance to speak
  • Cards for facilitation
  • Flexible agenda
  • Ideas being the priority
Other
  • Energy seemed to drop during the day
Chris and I have now started planning CEWT #3 and so we reviewed the retrospective comments and discussed changes we might make, balanced against our own desires (which, we find, differ in places) and the remit for CEWT itself, which is:
  • Cambridge: the local tester community; participants have been to recent meetups.
  • Exploratory: beyond the topic there's no agenda; bring on the ideas.
  • Workshop: not lectures but discussion; not leaders but peers; not handouts but arms open.
  • Testing: and anything relevant to it.
We first discussed the reported decrease in energy levels towards the end of the day during CEWT #2. We'd felt it too. We considered several options, including reducing the length. But we decided for now to keep to a whole day.

We like that length for several reasons, including: it allows conversation to go deep and broad; it allows time for reflection; it allows time for all to participate; it contributes to the distinction between CEWT and local meetups.

So if we're keeping the same length, what else could we try changing to keep energy levels up? The CEWT #2 feedback suggested a couple of directions:
  • stop: 10 talks too many; Don’t make [everyone] present
  • start: More opposing views; Focused practical workshops
We are personally interested in switching to some kind of group activity inside the workshop, maybe even ad hoc lightning talks, so we're going to do something in that direction. We also - after much deliberation - decided to reduce the number of talks and to inject more potential views by increasing the number of participants to 12.

CEWT #1 had eight participants, CEWT #2 had ten. We felt that the social dynamic at those events was good. We are wary of growing to a point where someone doesn't get a chance to speak on a topic they feel strongly about, or to which they have something interesting to contribute. We will retain cards to facilitate discussion but we know from our own experience, and research amongst other peer workshop groups, that we need to be careful here.

At the two CEWTs to date all participants have presented. Personally I like that aspect of it; it feels inclusive, participatory, about peers sharing. But we are aware that asking people to stand up and talk is a deterrent to some and part of what we're about is encouraging local testers. Participation in a discussion might be an easier next step from meetups than speaking, even in a safe environment like CEWT. So we're going to try having only some participants present talks.

But we also don't want to stop providing an opportunity for people who have something to say and would like to practice presenting in front of an interested, motivated, friendly audience. One of the CEWT #2 participants, Claire, blogged on just that point:
I was asked if I wanted to attend CEWT2. I knew this would involve doing a presentation which I wasn't particularly thrilled about, but the topic really had me chomping at the bit to participate. It was an opportunity for me to finally lay some ghosts to rest about a particularly challenging situation I foolishly allowed to affect me to the extent I thought I was a rubbish tester. I deleted my previous blog and twitter as I no longer had any enthusiasm about testing and wasn't even sure it was a path I wanted to continue down. So, despite being nervous at the thought of presenting I was excited to be in a position to elicit the thoughts from other testers about my experience.
...
The reactions, questions and suggestions have healed that last little hole in my testing soul. It was a great experience to be in a positive environment amongst other testers, all with different skills and experiences, who I don't really know, all coming together to talk about testing.

Chris and I talked a lot about how to implement the desire to have fewer talks. Some possibilities we covered:
  • invite only a select set of participants to speak
  • ask for pre-submission of papers and choose a set of them 
  • ask everyone to prepare and use voting on the day to decide who speaks
  • ask people when they sign up whether they want to speak
I have some strong opinions here, opinions that I will take a good deal of persuading to change:
  • I don't want CEWT to turn into a conference.
  • I don't want CEWT to turn into a bureaucracy.
  • I don't want anyone to prepare and not get the opportunity to present.
In CEWT #2 we used dot voting to order the talks and suggested that people be prepared to talk for longer (if their talk was voted early) or shorter (if late). As it happened, we decided on the day to change the schedule to let everyone speak for the same length of time but the two-length talk idea wasn't popular, as the stop "Prep for different length talks" feedback notes.

So this time we're going to try asking people whether they want to present or not, expecting that some will not and we'll have a transparent strategy for limiting the number in any case. (Perhaps simply an ordered list of presenters and reserve presenters, as we do for participants.) We'll have a quota of presenters in mind but we haven't finalised that quite yet; not until we've thought some more about the format of the day.

With some presenters and some non-presenters, we're concerned that we don't encourage or create a kind of two-level event with some people (perceived as) active and some passive. You'll notice I haven't referred to attendees in this post; we are about peers, about participation, and we want participants. Part of the CEWT #3 experiment will be to see how that goes on the day.

Clearly the changes we've chosen to make are not the only possible way to accommodate the feedback we received. But we have, consciously, chosen to make some changes. Our commitment here is to continually look to improve the experience and outcomes from the CEWTs (for the participants, the wider community and ourselves) and we believe that openness, experimentation, feedback and evaluation is a healthy way to do that.

Let's see what happens!
Image: https://flic.kr/p/qvpt1p

Getting the Worm

Thu, 07/14/2016 - 06:57
Will Self wrote about his writing in The Guardian recently:
When I’m working on a novel I type the initial draft first thing in the morning. Really: first thing ... I believe the dreaming and imagining faculties are closely related, such that wreathed in night-time visions I find it possible to suspend disbelief in the very act of making stuff up, which, in the cold light of day would seem utterly preposterous. I’ve always been a morning writer, and frankly I believe 99% of the difficulties novices experience are as a result of their unwillingness to do the same.

I am known (and teased) at work for being up and doing stuff at the crack of dawn and, although I don't aim to wake up early, when it happens I do aim to take advantage. I really do like working (or blogging, or reading) at this time. I feel fresher, more creative, less distracted.

I wouldn't be as aggressive as Self is about others who don't graft along with the sunrise (but he's not alone; even at bedtime I don't have to look hard to find articles like Why Productive People Get Up Insanely Early) because, for me, there are any number of reasons why novice writers, or testers or managers, or others experience difficulties. And I doubt more conscientious attention to an alarm clock would help in most of those cases.

Also, it's known that people differ in chronotype. I came to terms with my larkness a long time ago and now rarely try to go against it by, say, working in the evenings.

How about you?
Image: https://flic.kr/p/4a3yKL

Put a Ring on It

Sat, 07/09/2016 - 11:28

Back in May I responded to the question "Which advice would you give your younger (#Tester) self?" like this:
Learn to deal with, rather than shy away from, uncertainty. #testing https://t.co/Db8Uj1HGyU
— James Thomas (@qahiccupps) May 25, 2016

Last week I was reminded of the question as I found myself sketching the same diagram three times for three different people on three different whiteboards.

The diagram represents my mind's-eye view of a problem space, a zone of uncertainty, a set of unresolved questions, a big cloud of don't know with a rather fuzzy border:


What I'll often want to do with this kind of thing is find some way to remove enough uncertainty that I can make my next move. 
For example, perhaps I am being pressed to make a decision about a project where there are many unknowns. I might try to find some aspect of the project to which I can anchor the rest and then give an answer relative to that. Something like this: "Yes, I agreed an approach in principle with Team X and until their prototype confirms the approach our detailed planning can't start."
I've still got a lot of uncertainty about exactly what I will do. But I found enough firm ground - in this case a statement in principle - that I can move the project forward.
In my head, I think of this as putting a band around the cloud and squeezing it:

And I'm left with a cleaner picture, the band effectively containing the uncertainty. Until the conditions that the band represents are confirmed or rejected I don't have to consider the untidy insides. (Which doesn't mean that I can't if I want to, of course.)

A useful heuristic for me is that if I find myself thinking about the insides too much - if something I expect to be in is leaking out - then probably I didn't tighten the band enough and I need to revisit.

When I'm exploring, the band can represent an assumption that I'm testing rather than some action that I've taken: "If this were true, then the remaining uncertainty would look that way and so I should be able to ..."

I like this way of picturing things even though the model itself doesn't help me with specific next moves. What it does do, which I find valuable, is remind me that when I have uncertainty I don't have to work it out in one shot.
Image: https://flic.kr/p/5EdSAW

P.S. While writing this, I realised that I've effectively written it before, although in a much more technical way: On Being a Test Charter.

Good Conduct

Sat, 07/02/2016 - 07:23

I've been reading Here Comes Everybody by Clay Shirky. It's about how, in around 2007, technology and social media were beginning to change the ways in which people were able to organise themselves. Interesting to me on sociological, leadership and managerial grounds, here's a handful of quotes that I particularly enjoyed:
If you have ever wondered why so much of what workers in large organizations know is shielded from the CEO and vice versa, wonder no longer: the idea of limiting communications, so that they flow only from one layer of the hierarchy to the next, was part of the very design of the system at the dawn of managerial culture. (p. 42, on Daniel McCallum's revolutionary ideas for hierarchical management)

In business, the investment cost of producing anything risks creating a systematic bias in the direction of acceptance of the substandard. You have experienced this effect if you have ever sat through a movie you didn't particularly like in order to "get your money's worth." (p. 249)

If transaction costs are a barrier to taking advantage of the individual with one good idea (and in a commercial context they are), then one possible response is to lower the transaction costs by radically rearranging the relations between the contributors. (p. 252, where transaction costs are the inherent costs of participation)

[Successful peer collaboration needs] a plausible promise, an effective tool, and an acceptable bargain with the users. The promise is the basic "why" ... The tool helps with the "how" ... the bargain sets the rules of the road: if you are interested in the promise and adopt the tools, what can you expect, and what will be expected of you? (p. 260)

I've been listening to a Ted Radio Hour podcast called Trust and Consequences. It's about how, in different contexts, one person's trust for another can facilitate different kinds of outcomes. Interesting to me on sociological, leadership and managerial grounds, here's a handful of quotes that I particularly enjoyed:
[it] is like holding a small bird in your hand. If you hold it too tightly, you crush it. If you hold it too loosely, it flies away. (Charles Hazlewood, on being an orchestra conductor)

I have to say, in those days, I couldn't really even find the bird. (Charles Hazlewood, on his early career as a conductor)

When you're in a position of not trusting, what do you do? You overcompensate. And in my game, that means you over-gesticulate. You end up like some kind of rabid windmill. And the bigger your gesture gets, the more ill-defined, blurry and, frankly, useless it is to the orchestra. (Charles Hazlewood, on the importance of clarity when directing)

We call them leaders because they take the risk before anybody else does. And when we ask them, "why would you do that? Why would you give your blood and sweat and tears for that person?" They all say the same thing - because they would've done it for me. And isn't that the organization we would all like to work in? (Simon Sinek)

Image: https://flic.kr/p/bBnTPF