Hiccupps - James Thomas

Toujours Testing

Mon, 05/02/2016 - 06:46

Some time ago, maybe even a year ago now, one of my team said that she had been watching me. I acted cool - although that may have been the onset of a cold sweat - but fortunately my dark secrets remain mine and her observation was simply that, to her mind, I am always testing. She gave a couple of examples:
  • When the test team were being given an expert-level product demo, I took notes not only on the functionality but also on the way that that information was being communicated to us, verbally and in slides, and I fed that back to the business because what we were watching was similar to the content of our customer demos.
  • When I set a piece of work - for myself or others - I will frequently have a secondary aim other than simply getting the piece of work done, and that aim often has an evaluation or assessment element to it.

And on reflection I think she's right. This is something that I have done, and do do. These days, I believe, I do it more explicitly than I used to because, over a long period of time and particularly by observing myself, I realised that this was my intuition and instinct, and I have found it personally valuable.

Right now at work we're considering some potentially significant changes to the way we organise ourselves and I'm spending time thinking about possibilities and exploring their ramifications, mostly as a thought experiment and by talking to members of the team. When I had the opportunity to try, in a small way, one of the possibilities, I took it. It's a possibility I instinctively shy away from and so I was very interested in my reaction to it. And later I followed up with a recipient of my action too, to understand their feelings and explain why I did what I did.

I like to make myself and my work open for others to test and, when I do, the questions that follow mean that I learn things and, as a team, we do a better job overall. I am happy when someone on my team finds one of my mistakes and we correct it, although it took me some time to see and understand my feelings of defensiveness in those kinds of situations. (And they never go away.) Through this kind of self-testing I have arrived at knowledge about myself that I can codify and use to guide how I want to behave in future.

I also test myself by issuing challenges. Talking at EuroSTAR 2015 was the culmination of a 12-month challenge to try to get over increasing nervousness at public speaking. (I'll have more to say about that another time.)

I try hard to write reports in terms of testable assertions. And then, before delivery, I test them. I'll frequently find places where I've made too general or specific a claim, or I might feel that I need to go and look again at some data to check that I can back up what I'm saying. For instance, in an earlier draft of this post, the next paragraph started "So, yes, I am always testing ..." But it doesn't start that way any more because in proofing I asked myself "really, always testing? Always?"

So perhaps I can agree that I am always checking, challenging, exploring, investigating, ... Maybe that's why testing felt like a good fit when I stumbled into it. I like the spirit of the suggestion and - to the extent I'm prepared to commit to a literal "always" - I am always testing. Or, to tune it still further, I am always doing things that I think are consistent with and conducive to being a good and improving tester.

So I was particularly intrigued to read Harnessed Tester writing about wanting to switch testing off:
... how do you switch off the tester in you (if you do even manage it at all)? Are you able to function in the “Real World” without slipping into your profession or do you find yourself testing things you shouldn’t or don’t need to test? Do you even see it as a bad thing?

I don't ever want this to be switched off. I want to work it, to exercise it, to train it. I seek out opportunities to put it to use. I love this quote from George Carlin:
The brain is a goal-seeking and problem-solving machine, and if you put into it the parameters of what it is you need or want or expect, and you feed it, it will do a lot of work without you even noticing.

I quoted it in Your Testing is a Joke where I described how I use humour as just such a training device, as a tool for feeding my brain.

I don't ever want this to be switched off. Since having children, I have become interested in how they see and interact with the world. I encourage my two daughters to have their enquiring mind turned on at all times and I praise them when they find a new perspective or ask a question or seek information. I am prepared (most times) to keep answering those long chains of why questions until they get bored, and I try to find opportunities to provoke thoughts that will start their thinking process off.

I don't ever want this to be switched off. Last Christmas I bought my family a shared present of How to be an Explorer of the World and we've done several of the experiments together. Last month we went on an adventure walk in Ely.  My youngest daughter was particularly inspired by one task that I set: find something hidden. She's since begun to read Pippi Longstocking and styles herself a "Thing Searcher" and, while I was weeding the drive (and on the side listening to an interview with James Bach) last weekend, she interrupted me:
"Dad, how can I find things that no-one else finds?" she asked.

"You could look where no-one else looks," I replied.

And so that's exactly what she did, initially by standing on a wheelie bin to inspect the top of the hedge.

I don't ever want this to be switched off. What I want is for asking, probing, looking, questioning, reviewing, being creative, and exploring to be second nature. I think that these are valuable skills in life, but also that if they become what you just do then, as a tester, you can more often get on with a task and not spend explicit time on the techniques.

I don't ever want this to be switched off. However, where I think that Harnessed Tester has got a point, and it's a strong one, is that it's important to be able to deploy these skills appropriately, to make sensible use of them, to report your findings in a way that's acceptable and beneficial to you, to whoever you are dealing with, and to the context you find yourself in.

And that, by Weinberg's definitions, is about acting congruently (see e.g. Managing Teams Congruently), which is one of the challenges I have set for myself and have been working on for the last couple of years.

And I've told my team that I'm doing it.

And I know that they are watching...
Categories: Blogs

Your Testing is a Joke

Thu, 04/28/2016 - 22:00

My second eBook has just been released! It's called Your Testing is a Joke and it's a slight edit of the piece that won the Best Paper prize at EuroSTAR 2015. Here's the blurb:
Edward de Bono, in his Lateral Thinking books, makes a strong connection between humour and creativity. Creativity is key to testing, but jokes? Well, the punchline for a joke could be a violation of some expectation, the exposure of some ambiguity, an observation that no one else has made, or just making a surprising connection. Jokes can make you think and then laugh. But they don't always work. Does that sound familiar? This eBook takes a genuine joke-making process and deconstructs it to make comparisons between aspects of joking and concepts from testing such as the difference between a fault and a failure, oracles, heuristics, factoring, modelling testing as the exploration of a space of possibilities, stopping strategies, bug advocacy and the possibility that a bug, today, in this context might not be one tomorrow or in another. It goes on to wonder about the generality of the observations and what the value of them might be before suggesting ways in which joking can provide useful practice for testing skills. There are some jokes in the eBook, of course. And also an explanation of why any groaning they provoke is a good sign…

And in case you're wondering, my first eBook was called My Software Under Test and Other Animals. It's got some groanworthy moments too.

Cambridge Lean Coffee

Thu, 04/21/2016 - 08:24

We hosted this month's Lean Coffee at Linguamatics. Here are some brief, aggregated comments on topics covered by the group I was in.

Is it fair to refer to testers as "QA"?
  • One test manager talked about how he has renamed his test team as the QA team.
  • He has found that it has changed how his team are regarded by their peers (in a positive way).
  • Interestingly, he doesn't call it "Quality Assurance", just "QA".
  • His team have bought into it.
  • Role titles are a lot about perception: one colleague told him that "QA" feels more like "BA".
  • Another suggestion was that "QA" could be "Quality Assisting".
  • We covered the angle that (traditional) QA is more about process and compliance than what most of us generally call testing.
  • We didn't discuss the fairness aspect of the original question.

What books have you read recently that contributed something to your testing?
  • The Linguamatics test team has a reading group for Perfect Software going on at the moment.
  • Although I've read the book several times, I always find a new perspective on some aspect of something when I dip into it. This time around it's been meta testing.
  • The book reinforces the message that a lot of testing (and work around the actual testing) is psychology.
  • But also that there is no simple recipe to apply in any situation.
  • We discussed police procedural novels and how the investigation, hypotheses and data gathering in them might be related to our day job.

When should we not look at customer bugs?
  • When your product is a platform for your customers to run on, you may find bugs in customer products when testing yours.
  • How far should you go when you find a bug in customer code? 
  • Should you carry on investigating even after you've reported it to them?
  • In the end we boiled this question down to: as a problem-solver, how do you leave an unresolved issue alone?
  • Suggestions: time-box, remember that your interests are not necessarily the company priorities, automate (when you think you need lots of attempts to find a rare case), take the stakeholder's guidance, brainstorm with others, ... 
  • If the customer is still screaming, you should still be working. (An interesting metric.)


It is Possible to be Professional Without Being in a Profession

Mon, 04/18/2016 - 21:32

This guest post is by Abby Bangser, writing on the recent MEWT 5. I enjoyed Abby's talk on the day, I enjoyed the way she spoke about testing both in debate and in conversation and I am very much enjoying her reflections now.

I'm also enjoying, and admiring, the open attitude of the MEWT organisers to Abby's comments on gender diversity at the workshop, later on in email and in this piece. In particular, I like their eagerness to share their intentions, process and feelings about it at the event and then engage in the wider discussion in the testing community (e.g. 1, 2, 3).

MEWT 5 was my first experience in a small peer conference, and the format provided a very interesting style of sharing and learning. A big thanks to Bill and Vernon for organizing, the Association for Software Testing for helping fund the event, and particularly Simon for identifying a common theme. The conference theme of What is a Professional Tester? was a tough one to prepare for, and it became apparent that other attendees had diverse ways of approaching it as well. Maybe that was what made it an interesting topic!

I want to briefly touch on the fact that we identified and discussed the difference between a capital P professional and a person working in a professional manner. Working in a profession does not, by itself, indicate a person's level of professionalism, and I enjoyed the conversation around what defines professionalism. Based on the discussion at the conference, my own definition of professionalism is achieving high standards of pride and integrity.

Just as with other heated topics related to testing, the term "professional" has a lot of baggage. This can include but is not limited to the idea that as a profession, there may need to be a board that regulates who gains access to, and who can be denied/revoked access to, the profession.

This point seemed to be the biggest reason why testing as a profession had a negative ring to many in the room. This, of course, runs too close to the debate around testing as an activity or a role for this to be omitted from our discussions. I want to use this blog post to dig a little more deeply into this.

In my opinion, the room had a bit of support on both sides of this debate but I think we made progress on why it has become such a heated topic. We seemed to identify why having a defined role of "tester" or "QA" is necessary in some contexts. By clearly articulating these needs as being focused around ease of recruiting and some industry regulations, it became clear to me that role does not need to equate to job title or job specification even though it often does these days.

There was a proposal that in lieu of roles we could look at areas of accountability, but this didn't quite sit well with me as it still has an air of assessing blame. I suggested (and prefer) thinking of roles as hats. Each person has a certain number of hats that they are skilled enough to wear, but are not required to wear them all at all times.

I can’t remember where I first heard this, but I like it for a number of reasons that I want to explore further:

  • Hats are not permanent; they are easy to take on and off: Each person should be able to find the ones that fit them, and be looking for ways to work towards new ones. While my day-to-day hat may be the testing one, I also enjoy putting on the infrastructure hat, the project management hat and the business analysis hat as the need arises.
  • You look a bit silly if you are wearing more than one: As stated above, changing hats is not only OK, it's pretty much required to be a successful team. But I still put a lot of value in focusing on a single hat at a time. This was referred to as time slicing instead of multi-tasking, and I really liked that distinction.
  • Hats are not unique to a single person: Just because you are wearing a certain hat does not mean someone else can’t put on the same type of hat. Some challenges may take a number of testing-focused people to solve, and others may take a variety of roles. In either case, the team should be able to self-organize.

I want to take the idea of changing hats just one step further. Throughout the day, there was a definite majority of the room who felt that successful team mates (not just testers) are those who step in and get the job done. They do not let job titles/specifications limit what they learn or where they provide support. This was a big reason I felt my topic of the “full stack” tester was well received.

I think that this topic has a lot of really interesting avenues left to personally explore, and I look forward to doing that both off- and on-line. If I had to sum up my current hope for a takeaway, it is that every team member has a responsibility to make their expertise accessible to others AND find ways to access others’ expertise. It is no longer acceptable to silo our team mates based on arbitrary terms like “technical”.

A final and important word on my experience

While I was very glad to be able to attend MEWT 5 and participate in the discussions, I would be remiss to not raise the lack of diversity in the room. While there are many axes that we could discuss diversity on, I am going to speak only of gender diversity here. The story told in the room of MEWT attendees has been told in countless other industries, organizations and events. A notable example is the article by the Guardian which noted that there are more FTSE 100 leaders named John than all the female chief executives and chairs combined. This definitely hit home, since the participants in our room named Dan outnumbered all the women combined by a ratio of 3:1.

There is no single answer on how to support diversity in these circumstances, but we have countless people paving the way who are showing that it is not only possible to succeed in doing so, but to actually thrive. I hope to attend another -EWT event in the future that can promote the kind of diversity shown by many including Rosie Sherry and her work at TestBash, Adi Bolboacă and Maaret Pyhäjärvi with European Testing Conference, and supporting organizations like Speak Easy started by Anne-Marie Charrett and Fiona Charles.


Thu, 04/14/2016 - 05:52

I've said before on here that I enjoy ideas just for their own sake. The CEWT meetings that I run are all about that: exposure to new ideas, novel angles on existing ideas, a view of the evolution, merging, death and resurrection of ideas, and challenges to, or support of, my own ideas and perhaps even my ways of generating or thinking about ideas.

So I was delighted to be invited to attend someone else's workshop, MEWT 5, on the topic What is Professional Testing? Apart from anything else, I had a blast simply coming up with what I thought was an interesting angle. The talks and discussion were wide-ranging but I'll just pick out bits from my notes on three of the threads that ran across the day that I found particularly interesting.

Relativism

Whether someone is acting professionally is entirely subjective. Variables in play include the person doing (the tester in this MEWT's context), the person that's having something done for them (a client; potentially the same person as the tester), the project, the project context. Change any of these and there's a chance that what is considered professional might change. Two different observers might see the same situation differently with respect to the professionalism of the participants. Dan Caseley saw something of this in the responses he got from colleagues when he asked them to characterise testers they'd worked with.

We talked about Jeremy Clarkson, the ex-presenter of Top Gear. He is well-known for controversial comments and actions which, for a long time, the BBC were prepared to put up with, and pay up for. Possibly - presumably - this was because he continued to return enough value to them to justify the risk and expense.

Eventually he did something sufficiently serious that his contract was not renewed. Question: was he acting professionally when he was deliberately controversial? Or was he just unprofessional?

Was it part of his remit, part of his employer's desires, that he was edgy (or plain old-fashioned offensive) in order, for example, to drive ratings and hence revenue? Did the BBC think he was professional? A professional? Acting professionally? Did the audience? Which parts of the audience? Was any given audience member consistent in their view of his professionalism over time?

Adam Knight brought another perspective: are we in the testing community, particularly the vocal and active context-driven community, creating and reinforcing a notion of what it means to be a professional tester? If so, can it be viewed as a reaction against perceived unprofessional "Factory Schoolers"? But wouldn't they in turn, and legitimately from their perspective, consider themselves professional testers and others less so? The debate that followed touched on subjective positions that motivate principles that in turn risk sliding into dogma.

A shared perspective isn't necessarily an invalid perspective, nor harmful, nor one without utility. But a uniform perspective can lead to uniform actions and, as Iain McCowatt put it, often the fringes are where the excellence is.

Commitment

Mohinder Khosla quoted Steve Pressfield on professionalism:
Amateurs let adversity defeat them. The pro thinks differently. He shows up, he does his work, he keeps on truckin’, no matter what.

Elsewhere the conversation included whether it was characteristic of a professional that they will knuckle down and get on with it, see a project to its conclusion, find ways to thrive in adversity, do whatever it takes to get it over the line.

But only up to a point? It was generally agreed that the professional will also recognise when a situation has become untenable, when they are unable to operate with integrity, when the only course of action that retains integrity is to bow out.

Related conversation spinning out of Abby Bangser's talk covered the idea that a professional would be able to ramp up on any topic and not be boxed in by some idea of what they should be. This in turn slid into contrasting testers who define themselves by some area - technical, performance, automation - with developers who will often define themselves by the language they favour - Java, Ruby, C++.

Ethics

For me, this was the most interesting theme of the day. Mohinder referenced a blog post by Uncle Bob Martin which plays out two variants of a doctor-patient scenario. In the first the doctor appears to be offhand and uncaring when the patient describes pain in their arm and in the second the doctor refuses to amputate a limb at the patient's request on the basis that it would violate a medical oath.

We revisited this and other medical scenarios, trying to make parallels between professional testing and other professions. It was suggested that the professional position was clear-cut in the arm example: the second doctor is the professional.

I'm less certain about this. I do think that one way we might define a professional (if we wanted to) is in terms of the ethical standards they hold themselves to. But I don't think that having standards or principles unequivocally forces the choice of any particular course of action.

The second doctor might be doing quite the wrong thing here if, for example, their patient has some condition - body dysmorphia, say - that might mean they would harm themselves or even commit suicide were the doctor not to remove the limb. A single abstract ethical principle can govern both courses of action - to operate or not - under appropriate circumstances.

Likewise, two different doctors, operating under the same code of ethics, could legitimately make different decisions in the same circumstances.

Some speakers - Doug Buck and Danny Dainton particularly - described personal ethics that govern their view of what it means to be a professional. These included
  • striving to do good work
  • taking responsibility
  • doing what you say you will
  • sharing knowledge
It was fascinating to see these enumerated. I am a practitioner and proponent of introspection and it takes no little strength of mind to decant motivations from the actions they provoked and the stories you tell yourself about them. The effort required to do that is seldom wasted, in my experience.

Much like the effort to organise and run a workshop like this. And I'd like to thank Bill, Simon and Vernon for doing it, and the Association for Software Testing for helping to fund it.

What is What is Professional Testing?

Sun, 04/10/2016 - 10:32

I was invited to the fifth Midlands Exploratory Workshop on Testing (MEWT) yesterday. In it, the MEWTs were asking: 
What is Professional Testing?

Wikipedia has this to say about what a professional is:

A professional is a member of a profession or any person who earns their living from a specified activity.

Hmm. So what’s a profession then?

A profession is a vocation founded upon specialized educational training, the purpose of which is to supply disinterested objective counsel and service to others, for a direct and definite compensation, wholly apart from expectation of other business gain. (also from Wikipedia.)

In MEWT 5 we’ll be asking – what exactly does it mean to be a Professional Tester? And, since I already cited the quotes above, you won’t be allowed to. In fact, we’ll expect you to delve deep and reach far and wide into the many and varied facets of what it means to demonstrate professionalism in the face of a rapidly changing technological and sociological landscape.

This post is an edit of the notes I made while I was preparing my talk.

I'm going to use the question What is Professional Testing? as a lens through which to view, inspect and explore itself. I'll imagine a project where I'll take on the role of tester; MEWT will be the client and the call for the workshop will be the brief.

In order to tease out intuitions about professional testing I'll work though the project by making assertions about what the average person might think it would be reasonable to expect from a professional tester in particular situations. Once I have those, I'll wonder whether we can generalise from them into some kind of definition.

It's probably not too controversial to suggest that a professional in most domains should think carefully about knee-jerk responses (except for professional cage fighters, perhaps) so my first action here will be to exploit the information in front of me and read the full brief.

The brief's headline asks What is Professional Testing? but the body of the text includes another question:
... in MEWT 5 we’ll be asking - what exactly does it mean to be a Professional Tester?

Is the workshop asking about professional testers or professional testing? That's two different questions. Or, at least, it might be two different questions. But there's more:
.... what it means to demonstrate professionalism

So perhaps there are even three concepts here?

A professional tester should be, reasonable people might think, alert to nuances of meaning. They will know that sometimes in those small gaps, those seemingly innocuous chinks between terms, the fuzzy areola of greyness around the sharp black centre of a label, there are vast conceptual differences lurking. Even Humpty Dumpty knew that semantics matter and was prone to useful aphorisms like this:
When I say "Professional Testing" it means just what I choose it to mean — neither more nor less.

But by the same token, wouldn't reasonable people expect a professional tester to also know that sometimes there are not great differences in people's use of terminology and it's quite common for them to use varied vocabulary to refer to the same thing?

And the professional tester, I guess most of us would agree, should be able to navigate the world keeping ambiguities and multiple possibilities in mind until it's a sensible time to ask those involved whether it matters and whether the range of possibles needs to be collapsed.

The brief also includes a definition of profession, with explicit instructions that it should not be quoted ... So here it is:
A profession is a vocation founded upon specialized educational training, the purpose of which is to supply disinterested objective counsel and service to others, for a direct and definite compensation, wholly apart from expectation of other business gain.

The instruction given alongside the definition is a blocking manoeuvre and noting such things is a characteristic that the man on the Clapham Omnibus is likely to want in his professional tester.

Perhaps in this case it means nothing; perhaps it's simply a pointer that the client wants us to do our own research. But then again perhaps it's a coded message that they want us to start from their point of view, that they expect we will encounter other people with other perspectives on this project and they are trying to constrain our acceptance of them. Or perhaps they don't really care what we do with respect to definitions, their intended point being that we shouldn't waste time on things that they would consider unnecessary. Or none of those things. Or all of them.

As an aside, would right-thinking people be upset if, having hired a professional tester and told them not to do something, they still did it? It probably depends on the something, right? What about if the tester did it because they had an instinct that it was the right thing to do for the project? Like I just did ...

To try to understand the motivations better, a professional tester might explore alternative definitions. Definitions are often a good place to start in any case; they can provide anchors or reference points for other information discovered along the way and supply related terms to be investigated in their own turn. On consulting Oxford Dictionaries, I find that there are multiple definitions of professional, and alternative definitions of profession:
Professional: (a) Relating to or belonging to a profession; (b) Engaged in a specified activity as one’s main paid occupation rather than as an amateur; (c) A person competent or skilled in a particular activity

Profession: A paid occupation, especially one that involves prolonged training and a formal qualification

Someone with a little experience of testing might expect that the professional tester would look a little further than the obvious. In this case, when I do that, I find that there are also other senses of profession that we might come across that we can disregard for the purposes of the current mission:

Profession: An open but often false claim: "his profession of delight rang hollow"

Ah yes, the mission. Surely a professional tester would want to understand what their mission was, or at least some things that it wasn't, before going too far?

A professional tester attuned to language and blockers and ready to ask questions at the point when they are likely to add value, might choose now to talk to the client and highlight the potential for the brief to not be expressing the client's intent unambiguously enough to move forward.

And I couldn't think of a strong enough reason not to, so I asked Simon Knight, the content owner for MEWT 5, if he would take some questions. He said he would and so I emailed him a bunch:
Who is asking this question? (or on whose behalf are MEWT asking the question?)

The organisers, on behalf of the testing community.

And why?

Because we needed a topic to discuss at MEWT.

Do they have a problem to resolve and has this question come up in their attempts to resolve it? If so, what is the problem?

See question below.

What do they hope to achieve by asking the question?

Through further discussion we hope to find some heuristics and principles that can be utilised within the testing community to increase the profile of testing by improving behaviours, work-products etc.

What do they expect to do (or be able to do) once they have the answer?

We expect to be able to communicate the results of our discussions by sharing materials, slides, thoughts and experiences.

One might assume that a professional tester could glean a lot from this, from both the implicit information that surrounds the surface information and that surface information itself.

Look at how terse some of those answers are, look at how efficiently one answer is said to be covered by another, look at how the longer answers to the more open questions are shorter on specifics and look at how the questions that could easily be answered directly have been answered directly and apparently without guile.

A professional tester might now attempt to test the relationship he or she is building with the client. Perhaps a drop of humour, edged with respect, illustrating that this is not the first rodeo for either of them? In this case:

Well played. Are you in senior management? ;-)
Cost/benefit analysis suggested spending less than 5 minutes of my time and giving you my most honest response would probably suffice. If you actually do need more info, let me know.

A professional tester might make a working hypothesis that Simon is a pragmatist: thinking about his answers and why he gave them in the way he did, it sounds like he is prepared to talk straight, took the questions seriously but cares to guard his time. And, importantly for an information-seeker, is offering further information should it be desired.

Further, a professional tester - who we can undoubtedly assume is attuned to language, and communication, and the sociology of interpersonal relationships - might note that Simon did not use the word profession, nor any of its derivatives, in his response. And also that he didn't simply copy-paste bits of information from the brief or even refer back to the brief.

Armed with this new information, any professional tester worth their salt would surely reevaluate what they already knew, and try to state the mission (for their own benefit at least):
On the basis that there is no specific problem to solve here, and indeed no specific client to solve it for, questions of definition could justifiably become less part of the mission's own definition and more part of the deliverable, the report, the testing story. And in order to deliver the story the professional tester (and indeed any other kind of tester) needs ways to get to the content of the story. Because you can't have a story with no content, can you?

Unless you're a consultant.

Experience is a great guide to instinct here, but experience comes from practice. We might hope that a professional tester has practised and continues to practise. A professional tester, a practising professional tester, would certainly have battle-hardened tricks, an A-Z, for getting an investigation going, wouldn't they? They'd reach for tools such as analogy, anecdote, and antithesis, to list just a handful of the A's.

Let's practise then; I'll pick a technique that I have found productive in the past: the use of antithesis to shape understanding. What might we contrast professional with? Perhaps amateur.

Remember that Galton talked about the wisdom of the crowd from a statistical perspective over 100 years ago and then think of uTest or 99tests. uTest touts itself as "The Professional Network for Testers". A professional tester might again notice the nuance here. The network is professional but are the testers considered professional? By who? In what way? And what does it mean for a network to be professional?

A professional tester might dig for a little more evidence. The uTest Facebook site says "uTest is the world's largest open community that exists to advance and promote the software testing profession." Another variant.

These kinds of companies sell testing services. They would almost certainly assert that they provide a professional service, if asked, but their projects are staffed by people who are not necessarily professional testers. A professional tester might be expected to make this kind of observation: does professional testing have to be carried out by professional testers?

Is there another kind of opposite of professional? Yes: Unprofessional.

A professional tester might be thought unprofessional because of some aspect of the way the work was carried out - making racist remarks, say, or punching a member of his team. Is Jeremy Clarkson a professional TV presenter? Might we say he always behaves professionally?

This exposes the difference between being professional and acting professionally. Which does our client care about? And it's not a binary decision, necessarily. Also, testing is an activity. A tester can be doing more than one activity at a time and so acting both professionally and unprofessionally at once. For different observers, the same single action could be perceived as professional or unprofessional.

Another tool I sometimes use, and might hope that our imagined professional tester would also be familiar with, is looking for exemplars. In fact, this exercise exploring What is Professional Testing? itself could be seen that way. So what exemplars of professional testing are out there?

Well, there are bodies such as the Association for Software Testing. Its website says:
The Association for Software Testing (AST) is an international non-profit professional association ... focused on supporting the development of professionalism in software testing, among practitioners and academics, at all levels of experience and education.

There we go again, more variants: "professional association" and "professionalism in software testing".

I am a member of the AST (and good on them for providing grant funding for this MEWT) and one of the things this means is that I agree to abide by their Code of Ethics and Professional Conduct, itself adopted from another body, the Association for Computing Machinery. The code mentions various flavours of professional over 50 times. Some examples:
  •  professional conduct
  •  professional work
  •  professional ethical standards
  •  computing professional
It might seem, to the professional tester, that the policy could be trying to bridge the gap between being and acting by describing claims or considerations that define the being and govern the decisions that lead to actions.

Some of these claims are very strict:

I will ... be fair and take action not to discriminate.

but others are more aspirational:

I will ... strive to achieve the highest quality, effectiveness and dignity in both the process and products of professional work.

You'd think this latter one would be interesting to an alert professional tester, as it means that there is room for a professional to do low quality work and remain within the ethical code - so long as the intent and efforts made were in line with the code.

The professional tester might note that the code only references testing once - perhaps understandable given that it has come from the ACM:

To minimize the possibility of indirectly harming others, computing professionals must minimize malfunctions by following generally accepted standards for system design and testing.

But notice how it refers to "standards for ... testing." The professional tester might file that away for another time ...

... because the professional tester might be expected to recognise that this is too much detail for the average client in the average situation - think of the man on the Clapham Omnibus - and also to notice when the client is not average, or the situation is not average.

This is not an average situation. The client is a testing workshop and the mission is both broad and deep and concerned largely with the generation of ideas. A professional tester should be able to deal with that, we'd hope, wouldn't we, and deliver something that met the client's brief to the extent that they'd understood it and confirmed it.

Which brings us to delivery. The question was What is Professional Testing?

I think what I've just done here enumerates a set of things that it could plausibly include. But it's not a complete enumeration and a small change in any number of details could mean that anything done here would be considered unprofessional.

Take the question of definitions. Here, my proxy for a professional tester decided not to pursue them. On some other project, where lives were at stake, say, it might be critical to pursue definitions at an early stage. As Potter Stewart famously didn't quite say:

Professional Testing is hard to define but I know it when I see it.

But most clients would be justified in wanting something more than that, wouldn't they? So here's my stab at it, based on what I've found out in this exploration of the question:

Professional testing is testing
  • … by someone in a testing role, for a client
  • … at some time, for some project
which might 
  • … include specified practices
  • … involve people nominated as professional testers
  • … involve acting professionally
and which is
  • … unlikely to be amenable to tight definition
  • … but might accept an envelope like the AST’s
  • … and so have the client’s interests foremost
  • … but offers no guarantees about outcome

And to finish, here's a question that I hope that our theoretical professional tester would ask at the end of a piece of work like this: was What is What is Professional Testing? professional testing?
Image: playbuzz

Here are my slides:

Categories: Blogs

Luncheon Meet

Fri, 04/08/2016 - 07:25

As manager of a test team I try to get everyone on the team presenting regularly, to the team itself at least. A significant part of the tester role is communication and practising this in a safe environment is useful. There's double benefit, I like to think, because the information presented is generally relevant to the rest of the team. We most often do this by taking turns to present on product features we're working on.

I encourage new members of the team to find something to talk about reasonably early in their time with us too. This has some additional motivations, for example to help them to begin to feel at ease in front of the rest of us, and to give the rest of us some knowledge of, familiarity with and empathy for them.

I invite (and encourage the team to invite) members of staff from other teams to come to talk to us in our weekly team meeting as well. Again, there are different motivations at play here but most often it is simple data transfer. I used to do this more, and with a side motivation of building links across teams and exposing us to new and perhaps unexpected ideas, or generating background knowledge. But at one of our retrospectives it became clear that some of the testers felt that some of the presentations were not relevant enough to them and they'd rather get on with work.

It pays to listen to your team.

So, along with Harnessed Tester, I set up Team Eating which is a cross-company brown bag lunch. And it's just reached its first anniversary! And, yes, I know its name is a terrible pun. (But I love terrible puns.)

Here's a list of the topics we've had in the first year:

We've had three guest speakers (Chris George, Neil Younger and Gita Malinovska) and, as you can see, there's been a bit of a bias towards testing topics, although that reflects the interests of the speakers more than anything else. There are no constraints on the format (beyond practical ones) so we've had live demos, more traditional talks and, this week, an interactive storytelling workshop.

The response from the company has been good and we've had attendees from all teams and presenters from most. The more popular talks were probably those by Roger, our new (and first) UX specialist. He's done two: first to introduce the company to some ideas about what UX is and then later on how he was beginning to apply his expertise to a flagship project.

I've been really pleased with the atmosphere. There's a positive vibe from people who want to be there listening to their colleagues who have something that they want to share.

One surprise to me has been a reluctance from some of the audience to have their lunch in the meetings. Some people, I now find, consider it impolite to be eating while the presenter is talking. Given the feedback from my team which prompted us to start Team Eating, I was keen that it shouldn't take time away from participants' work and so fitting it into a lunch break seemed ideal.

But although eating is in the name, the team part is much the more important to me and I feel like it's serving the kind of purpose that I wanted in that respect. Quite apart from anything else, I'm personally really enjoying them and so here's to the next 12 months of rapport-building, information-sharing, fun-having Team Eating.

Categories: Blogs

Quality is Value-Value to Somebody

Tue, 04/05/2016 - 22:19
A couple of years ago, in It's a Mandate, I described mandated science: science carried out with the express purpose of informing public policy. There can be tension in such science because it is being directed to find results which can speak to specific policy questions while still being expected to conform to the norms of scientific investigation. Further, such work is often misinterpreted, or its results reframed, to serve the needs of the policy maker.

Last night I was watching a lecture by Harry Collins in which he talks about the relationship between science and democracy and policy. The slide below shows how the values of science and democracy overlap (evidence-based, preference for reproducible results, clarity and others) but how science's results are wrapped by democracy's interests and perspectives and politics to create policies.

I spent some time thinking about how these views can serve as analogies for testing as a service to stakeholders.

But Collins says more that's relevant to that relationship in the lecture - much of it from his book Are We All Scientific Experts Now? In particular he argues that non-scientists' opinions on scientific matters should not generally be taken as seriously as those of scientists, those people who have dedicated their lives to the quest for deep understanding in an area.

He stratifies the non-scientific populus, though, making room for classes of expertise that are hard-earned - he terms them interactional expertise and experience-based expertise - that non-scientists can achieve and which make conversation with scientists, and even meaningful contribution to scientific debate, possible.

I find this a useful layer on the tester-stakeholder picture. Sure, most of our stakeholders might not know as much as we (think we) do about testing, about the craft and techniques of testing. But that doesn't mean that there aren't those who can talk to us knowledgeably about it. This kind of expertise might be from, say, reading (interactional) or perhaps from past testing practice or knowledge of the domain in which testing is taking place (experience-based) and I like to think that I am, and that we should be, open to it being valuable to us and whatever testing effort we're engaged in.
Categories: Blogs

Getting Out of Here

Thu, 03/24/2016 - 07:30
Some of the testers from Linguamatics went to Cambridge Escape Rooms yesterday. Why? This is from their FAQ:
You are locked in a room ... and have to solve a series of puzzles and mysteries to escape. You will have to search high and low for clues and work together ... You have 60 minutes to escape using only the power of your brain.

Some things that I liked: we talked to each other a lot, we explored, we naturally split up when there was work to do in parallel, we came together when we were stuck, we switched roles and tasks when we were getting nowhere, we paired and changed pairs, we checked each others' conclusions, we were open to being checked (and sometimes we found new insight, made progress, in the checking), we generated and tested hypotheses, we explored solutions, we challenged each other and ourselves, we were open to being challenged, we repeatedly found another approach when the one we were using wasn't working, we made connections, we used tools that we found and carried them with us in case they were useful again (and some were), we were distracted by red herrings, we were exhilarated by the intellectual side for its own sake.

So, yes, I think it overlaps in interesting ways with things that we do in our day jobs. That's what I hoped for when I booked it up.

But there are things that don't align, and they are interesting to me too. Here's one: there's direct physical contact with the objects in the rooms that you don't get when you're testing software. At work, all interaction with the thing we're testing is mediated by some other tool, or more usually a tall stack of tools such as the hardware, firmware, OS, libraries, a runtime, the software, the software's API, the UI of the application talking to the API ...

The physicality of the situation contributed to the enjoyment for me. I was able to literally stand back, I could literally explore the space, I could literally view a problem from another angle. In some cases I could change the context of the problem and in some cases I could change my context.

The directness with which I could do those things, the intuitive way that I felt like I was doing them, and observing that I was doing them was intriguing. In some way I was a physical metaphor for my mental processes. And vice versa. And they were feeding off each other. I wonder whether I can do more physical things at work to help with the mental side of my job?

Amateur philosophy aside, we had a blast and I'd recommend it as a team-building exercise, a teamwork exercise and an intellectual exercise.
Categories: Blogs

Coverage, Ins and Outs

Tue, 03/22/2016 - 08:55
Colouring in affords great opportunity for mind-wandering, I find. And it was while I was assisting my eldest daughter in a felt pen marathon the other evening that I happened upon the thought that, to some approximation, colouring in is a coverage task:
  • you start with a piece of paper empty save for lines 
  • and set yourself a mission to cover paper with ink 
  • until you have finished (by some criteria that matters to someone who matters).  
Note that this does not mean that every square inch of paper has to be inked (although to my kids that often seems to be the goal).

We'll typically achieve our desired level of coverage by slavishly observing the areas demarcated by lines. Or, rather, by doing our best to do that. Striving for this accuracy has a cost. For many pictures, particularly the very detailed ones my two daughters seem to be doing more and more these days, a significant cost is in the time taken to complete the task and another is the glaring obviousness of small mistakes in the midst of well-regimented blocks of colour.

But, frequently, in our house at least, the time it would take to complete the task given the adherence to the lines is too long, and so many pictures remain unfinished; the mission incomplete; the goal never reached; lower coverage than was desired.

It seems to me that when colouring we use the behaviour pattern we've used before, and that we've seen others use before, and which seems to be prescribed to us. But this is only one type of coverage, this colouring in, I realised. So I asked my daughter whether she'd ever thought about something different, about, say, colouring out.

And she hadn't. For her, convention constrained how she'd consider colouring.

But would she be prepared to try it now? She would. And so we coloured out!

And she liked it because she saw value in it - it was faster, novel, opened up new possibilities, was more forgiving of mistakes, allowed a new range of compromises, permitted the superimposition of multiple goals.

And in colouring out, she found that she had also freed herself to do something else: sub-dividing the areas that were given to her initially, making new areas to colour in different ways and then, when we ran out of pictures, drawing on the cardboard cover of the pad the pictures came from and colouring that.

But we noted that it also failed spectacularly sometimes (for us):

Afterwards she went to someone who matters, told her about colouring out and asked her to judge the results of our mission. My wife didn't get where she is today without recognising analogies between colouring and test coverage when she sees them. So she assumed the role of experienced product owner and told us both well done, before introducing Hazel to one of her own analogies for stopping heuristics: upcoming release date and proximity to bedtime.

Thank you to Hazel for letting me use her pictures.
Categories: Blogs

Glove It!

Sun, 03/13/2016 - 12:41
Comedians and testers both know that the world can be different and strive to show how, in ways that mean something to someone else. I'm very interested in the connection between the creative processes in, and outcomes of, joking and testing. 
The Comedian's Comedian podcast often tries to get into the methods that comedians use to come up with their material. Many times the comedians themselves don't know. There are some, like Phil Kay and Gary Delaney, who, while probably not philosophers of comedy - I don't recall ever hearing any conversation about theories of humour on the podcast, for example - are at least philosophers of their comedy.

I got the impression that Spencer Jones is another who thinks about what he's thinking about, and how he thinks about it. Here's a couple of quotes from his recent interview that chimed with me as a parent, tester and manager.
[Kids] are like comedians without any rules ... I've got this yellow box, it's called a Kaossilator 2 or something. I did three or four hours on this thing trying to get something funny out of it and all I got was this dirge, house music ... And then in walks [my young son], massive head, little arms ... and he walks in and just picks up, puts it up to his head and goes "heeellllloh!" And his cheek touched the pad and the pad went "Woooooooooommmmmmmm" and I went "of course, of course ... it can do something else than make music."

And on finding ways to work, to force creativity when you have limited time and significant other commitments:

Before kids I'd wander around and mess around with stuff and be unfocused. And then I found with kids you just don't have the time ... So what I do now is I dance around like a dick for 20 minutes to whatever music, get the camera set up, get yourself really stupid, pick up a prop and do 20 minutes with it, in character, without breaking. For example, rubber gloves. There's only so many things you can do with a rubber glove ... and so you mess around, stay in character, stay in character, and after like literally 14 minutes something weird happens and you start doing stuff that you'd never do before. After 20 minutes you stop and then you move on to the next prop. And this goes on for an hour and then you watch it back. And it's the worst thing to watch. But at about 14 minutes in you go "Ooh, that's something!" When you've given up, when you're just a stupid broken twat, something happens.

Now, I'm not suggesting that we should all make twats of ourselves (although I know that I've had my moments in that regard) but I am suggesting that there's something we can take from this: be open to suggestions; be open to finding inspiration; be open to finding a way of working that meets your needs (and change when it doesn't).

As I write that last paragraph, I find myself reflecting on a discussion with Michael Bolton during the recent Rapid Software Testing workshop at Linguamatics when he said every tester needs to invent testing for themselves. 

I challenged him on that: (a) we are all different so necessarily do different testing anyway, and (b) surely he's not saying everyone should start from scratch. And he agreed; it isn't an exhortation to create from whole cloth but  to actively seek out something that suits us and our contexts.

I can sign up to that; and add that we should be prepared for it to take those 14 minutes, but it'll likely be worth it.
Image: Korg
Categories: Blogs


Wed, 03/09/2016 - 23:48
Ideas. Ideas. Ideas. Ahhh, ideas.

Ideas are the fabric from which other ideas are made, the surface on which other ideas are sketched and the giants on whose shoulders other ideas stand. Ideas spawn ideas but can also weaken, damage and even destroy them. Ideas are plentiful and powerful and sometimes pathological. But we can't advance without them.

The Cambridge Exploratory Workshop on Testing is a place for ideas. CEWT:
  • Cambridge: the local tester community; participants have been to recent meetups.
  • Exploratory: beyond the topic there's no agenda; bring on the ideas.
  • Workshop: not lectures but discussion; not leaders but peers; not handouts but arms open.
  • Testing: I reckon you'll know about testing...
The second CEWT ran on 28th February 2016 with the theme When Testing Went Wrong. There were ten presentations of ten minutes, each followed by 20 minutes of discussion. A short blog post can't do justice to the range of material we covered, so here's a handful of the threads that appealed to me with lines from my notes aggregated across talks:

Reactions

When things have gone wrong (such as - yikes! - your application deleting the C drive of your customers' customers!) there are many ways in which you can react. The way you choose says a lot about you. The way your company reacts says an awful lot about them. Will there be a witch hunt? Will there be knee-jerk policy change? Will there be blinkered focus on the risks that were, not the risks that are now? Why do we tend to overcompensate? Why is so much of our risk management focussed on the outcome rather than the potential for outcomes? Two bugs are known in production, each with equal likelihood of causing some problem of equal magnitude. The one which is seen by a customer will almost always get disproportionate attention.

Knowledge

Labels are powerful; peer opinion is powerful; preconceptions are powerful. These things are fences to keep ideas isolated and can be hard to break down once established. Things can go wrong when we don't look past the barriers. We can fail to see useful connections. We can fail to find solutions that would be useful in our contexts. Trying to broker agreement between two sides divided by a conceptual barrier is challenging. Standing on the "wrong" side of a conceptual barrier can be challenging.

Systems

Things that go wrong are rarely wrong in isolation. (And by the same token, although not the topic of this workshop, the things that go right are rarely right for one reason alone.) Predicting progress is hard: in order to have control we need feedback; feedback is dependent on learning; learning is non-linear. Testers are part of a larger system of software development and - as part of a quest for value to users - may do work not obviously designated as "testing". This might make sense in the larger system but it's important to be aware of the potential impact elsewhere in that system.

This kind of event energises me, which is why I'll make sure we do it again. CEWT 3 here we come.

Categories: Blogs