Hiccupps - James Thomas

Cambridge Lean Coffee

Thu, 05/26/2016 - 06:40

This month's Lean Coffee was hosted by Cambridge Consultants. Here are some brief, aggregated comments on topics covered by the group I was in.

What is your biggest problem right now? How are you addressing it?
  • A common answer was managing multi-site test teams (in-house and/or off-shore)
  • Issues: sharing information, context, emergent specialisations in the teams, communication
  • Weinberg says all problems are people problems
  • ... but the core people problem is communication
  • Examples: Chinese whispers, lack of information flow, expertise silos, lack of visual cues (e.g. in IM or email)
  • Exacerbated by time zone and cultural differences, and the lack or difficulty of being able to sit down together, ...
  • Trying to set up communities of practice (e.g. Spotify Guilds) to help communication, iron out issues
  • Team splits tend to be imposed by management
  • But note that most of the problems can exist in a colocated team too

  • Another issue was adoption of Agile
  • Issues: lack of desire to undo silos, too many parallel projects, too little breaking down of tasks, insufficient catering for uncertainty, resources maxed out
  • People often expect Agile approaches to "speed things up" immediately
  • On the way to this Lean Coffee I was listening to Lisa Crispin on Test Talks: "you’re going to slow down for quite a long time, but you’re going to build a platform ... that, in the future, will enable you to go faster"

How do you get developers to be open about bugs?
  • Some developers know about bugs in the codebase but aren't sharing that information. 
  • Example: code reviewer doesn't flag up side-effects of a change in another developer's code
  • Example: developers get bored of working in an area so move on to something else, leaving unfinished functionality
  • Example: requirements are poorly defined and there's no appetite to clarify them so code has ambiguous aims
  • Example: code is built incrementally over time with no common design motivation and becomes shaky
  • Is there a checklist for code review that both sides can see?
  • Does bug triage include a risk assessment?
  • Do we know why the developers aren't motivated to share the information?
  • Talking to developers, asking to be shown code and talked through algorithms can help
  • Watching commits go through; looking at the speed of peer review can suggest places where effort was low

Testers should code; coders should test
  • Discussion was largely about testers in production code
  • Writing production code (even under guidance in non-critical areas) gives insight into the product
  • ... but perhaps it takes testers away from core skills; those where they add value to the team?
  • ... but perhaps testers need to be wary of not simply reinforcing skills/biases we already have?
  • Coders do test! Even static code review is testing
  • Why is coding special? Why shouldn't testers do UX, BA, Marketing, architecting, documentation, ...
  • Testing is doing other people's jobs
  • ... or is it?
  • These kinds of discussion seem to be predicated on the idea that manual testing is devalued
  • Some discussion about whether test code can get worse when developers work on it
  • ... some say that they have never seen that happen
  • ... some say that developers have been seen to deliberately over-complicate such code in order to make it an interesting coding task
  • ... some have seen developers add very poor test data to frameworks 
  • ... but surely the same is true of some testers?
  • We should consider automation as a tool, rather than an all (writing product code) or nothing (manual tester). Use it when it makes sense to, e.g. to generate test data
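The "automation as a tool" point above, using it to generate test data for example, can be illustrated with a minimal sketch. The field names, value ranges and "awkward" character set here are all hypothetical, chosen only to show the idea of generating reproducible, boundary-heavy test data rather than crafting it by hand.

```python
import random
import string

def random_username(rng, max_len=12):
    """Build a username from characters that often expose handling
    bugs: spaces, punctuation, and (deliberately) the empty string."""
    alphabet = string.ascii_letters + string.digits + " _-."
    length = rng.randint(0, max_len)  # zero-length included on purpose
    return "".join(rng.choice(alphabet) for _ in range(length))

def generate_test_users(count, seed=0):
    """Produce a reproducible batch of user records for a test fixture.
    Seeding makes any failure replayable, which matters more than variety."""
    rng = random.Random(seed)
    return [
        {"name": random_username(rng), "age": rng.choice([-1, 0, 17, 18, 120])}
        for _ in range(count)
    ]

if __name__ == "__main__":
    for user in generate_test_users(5):
        print(user)
```

The design choice worth noting is the explicit seed: the point of generated test data is not randomness for its own sake, but cheap coverage of cases you would be unlikely to type out yourself, in a form you can regenerate exactly when a test fails.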

Ways to convince others that testing is adding value
  • Difference between being seen as personally valuable and the test team as a whole being seen as adding value
  • Overheard: "Testing is necessary waste"
  • Find issues that your stakeholders care about
  • ... these needn't be in the product, they can be e.g. holes in requirements
  • ... but the stakeholders need to see what the impact of proceeding without addressing the issues could be
  • Be humble and efficient and professional and consistent and show respect to your colleagues and the project
  • Make your reporting really solid - what we did (and didn't); what we found; what the value of that work was (and why)
  • ... even when you find no issues

Categories: Blogs

Joe Blogs: A Meeting

Tue, 05/10/2016 - 06:50
After Neil Younger's talk on Lean Coffee for team meetings at the Cambridge Tester Meetup last night, we ran a Lean Coffee session. These are my notes on the topics we covered:

How do you teach testing?
  • The question was set up on the premise that "you can teach/there is a lot of available material for software development, but not so much for testing". 
  • This was disputed: was the assertion confusing (availability of material for) learning programming languages with being able to program or being a good developer?
  • When teaching or coaching testing, particularly to non-testers, a detective metaphor is useful.
  • Testing is about a problem-solving mindset ... but so is programming, right?

How do you keep up with technology?
  • When your product uses or interacts with some new technology, how do you get up to speed with it?
  • How do you get sufficient depth to be able to talk to experts on your team?
  • Testing is about learning, whether it's your product or some new technology. Use your testing skills.
  • Learn in small increments, by using the thing you are learning about. 
  • Talk to the experts, probe their knowledge and learn from them. Use your context-free testing skills to try to find cracks in their knowledge, your implementation of the technology etc.
  • Be aware that you'll never know everything about all technologies.
  • Look for meta knowledge: over time you'll see similarities across technologies.
  • Do a Google search for e.g. failures in others' use of the technology and look for analogous cases in your context.

How do you unblock yourself?
  • When you're out of ideas, how do you get a fresh perspective?
  • Pair with a colleague, or even swap roles with a colleague.
  • "Unthink" by going for a walk, removing all your distractions, doing a different piece of work.
  • Look for patterns in your blockages and try to break them. Perhaps you're always blocked at just before lunch?
  • Use a different way of getting ideas out, e.g. a mindmap if you're usually a list person.
  • Wear a different hat; use personas to spur ideas.
  • Stop overthinking! Perhaps you are just finished.
  • Use mnemonics, checklists.
  • Look at historical data for the thing you are testing (bug reports, charters, meeting notes, user stories etc).
  • Find a different way in to the problem, e.g. start the application a different way, with a different browser, on a different OS, with the mouse set up so the buttons are backwards etc.

Why did you come tonight?
  • Because it was Neil talking.
  • To learn more about Lean Coffee.
  • Interested in facilitation techniques.
  • To try something different.
  • To make connections to other local testers; I work on my own.
  • Needed something to do in the evenings.
  • To find some different ways of doing things.
  • To have a forum to ask questions.
  • For those Eureka! moments - there's always one.
  • To get inspiration.
  • To speak to other testers about testing.
  • Research, to speak to testers about the tools they use.
  • To see whether the hype about tester meetups is justified.

... and did you get what you wanted out of it?
  • Yes!

Joe Blogs: Meetings

Tue, 05/10/2016 - 06:40

Neil Younger spoke about using Lean Coffee for his test team meetings at last night's Cambridge Tester Meetup. Inspired by the Cambridge Lean Coffee meetings - which are also part of this meetup and which he has hosted at DisplayLink since the earliest days - he replaced a failing monthly team meeting with Lean Coffee. And he hasn't looked back. Here are a few bullet points pulled out of the talk and subsequent discussion.

The monthly test team meeting was failing for various reasons, including:

  • As the company transitioned from waterfall to agile there were other forums for people to report status
  • ... and these were generally more timely
  • ... and the monthly meeting became mostly repetition.
  • With a cross-site team the physical constraints of the meeting rooms - round a table at each end, with a monitor onto the other team - seemed like a barrier to interaction.

They changed the Lean Coffee format in several ways, including:

  • Cross-site means that post-its are impractical so they use Trello for their proposals, voting and to record the Kanban (To Do, Doing, Done).
  • Often discussions will need to produce actions and these need to be recorded. They have an additional column on their Kanban for this.
  • They felt the need for a way to inject topics that would be discussed without voting - announcements from management, for example. Over time, they've found other ways to disseminate this information instead.
  • A facilitator is nominated to keep track of time, but also sometimes keep discussion on track.

There are some other changes from the earlier meetings too, including:

  • They use bigger screens to view the other site.
  • They sit in a semi-circle facing the screen so that everyone on both sides can see everyone else's face.
  • The Lean Coffee is optional.
  • No topic is off-limits (and topics have included salaries and concerns about the direction the business is taking).
  • The focus is no longer status but rather: are we doing the right thing? Can we get better? What do others think of this?

Neil's been very happy with how it's working and shared a few observations, including:

  • They experimented with different time limits and found that between 7 and 10 minutes work well.
  • They tend to have few project-related discussions because there are other forums for that, including another meeting where testers share information about feature work.
  • Dot voting is a kind of self-policing mechanism, preventing people from riding their hobby horse every week or descending into office politics
  • ... and the facilitation means that anything going too far off-topic can be brought back.
  • The monthly cycle gives a chance for issues to be resolved before the meeting takes place.
  • The location of the facilitator changes the focus of the meeting - whichever site facilitates tends to lead more of the discussion.
  • Almost certainly some other format would work just as well
  • ... but the physical changes, optionality and voting are probably key to it working for Neil.
  • Other teams in DisplayLink are taking the format and tweaking it to work for them.
After the talk and questions we ran a Lean Coffee session. Details of that are in Joe Blogs: A Meeting.

Toujours Testing

Mon, 05/02/2016 - 06:46

Some time ago, maybe even a year ago now, one of my team said that she had been watching me. I acted cool - although that may have been the onset of a cold sweat - but fortunately my dark secrets remain mine and her observation was simply that, to her mind, I am always testing. She gave a couple of examples:
  • When the test team were being given an expert-level product demo, I took notes not only on the functionality but also on the way that that information was being communicated to us, verbally and in slides, and I fed that back to the business because what we were watching was similar to the content of our customer demos.
  • When I set a piece of work - for myself or others - I will frequently have a secondary aim other than simply getting the piece of work done, and that aim often has an evaluation or assessment element to it.

And on reflection I think she's right. This is something that I have done, and do do. These days, I believe, I do it more explicitly than I used to because, over a long period of time and particularly by observing myself, I realised that this was my intuition and instinct, and I have found it personally valuable.

Right now at work we're considering some potentially significant changes to the way we organise ourselves and I'm spending time thinking about possibilities and exploring ramifications of them, mostly as thought experiment and by talking to members of the team. When I had the opportunity to try, in a small way, one of the possibilities, I took it. It's a possibility I instinctively shy away from and so I was very interested in my reaction to it. And later I followed up with a recipient of my action too, to understand their feelings and explain why I did what I did.

I like to make myself and my work open for others to test and, when I do, the questions that follow mean that I learn things and, as a team, we do a better job overall. I am happy when someone on my team finds one of my mistakes and we correct it, although it took me some time to see and understand my feelings of defensiveness in those kinds of situations. (And they never go away.) Through this kind of self-testing I have arrived at knowledge about myself that I can codify and use to guide how I want to behave in future.

I also test myself by issuing challenges. Talking at EuroSTAR 2015 was the culmination of a 12-month challenge to try to get over increasing nervousness at public speaking. (I'll have more to say about that another time.)

I try hard to write reports in terms of testable assertions. And then, before delivery, I test them. I'll frequently find places where I've made too general or specific a claim, or I might feel that I need to go and look again at some data to check that I can back up what I'm saying. For instance, in an earlier draft of this post, the next paragraph started "So, yes, I am always testing ..." But it doesn't start that way any more because in proofing I asked myself "really, always testing? Always?"

So perhaps I can agree that I am always checking, challenging, exploring, investigating, ... Maybe that's why testing felt like a good fit when I stumbled into it. I like the spirit of the suggestion and - to the extent I'm prepared to commit to a literal "always" - I am always testing. Or, to tune it still further, I am always doing things that I think are consistent with and conducive to being a good and improving tester.

So I was particularly intrigued to read Harnessed Tester writing about wanting to switch testing off:
... how do you switch off the tester in you (if you do even manage it at all)? Are you able to function in the “Real World” without slipping into your profession or do you find yourself testing things you shouldn’t or don’t need to test? Do you even see it as a bad thing?

I don't ever want this to be switched off. I want to work it, to exercise it, to train it. I seek out opportunities to put it to use. I love this quote from George Carlin:
The brain is a goal-seeking and problem-solving machine, and if you put into it the parameters of what it is you need or want or expect, and you feed it, it will do a lot of work without you even noticing.

I quoted it in Your Testing is a Joke where I described how I use humour as just such a training device, as a tool for feeding my brain.

I don't ever want this to be switched off. Since having children, I have become interested in how they see and interact with the world. I encourage my two daughters to have their enquiring mind turned on at all times and I praise them when they find a new perspective or ask a question or seek information. I am prepared (most times) to keep answering those long chains of why questions until they get bored, and I try to find opportunities to provoke thoughts that will start their thinking process off.

I don't ever want this to be switched off. Last Christmas I bought my family a shared present of How to be an Explorer of the World and we've done several of the experiments together. Last month we went on an adventure walk in Ely. My youngest daughter was particularly inspired by one task that I set: find something hidden. She's since begun to read Pippi Longstocking and styles herself a "Thing Searcher" and, while I was weeding the drive (and on the side listening to an interview with James Bach) last weekend, she interrupted me:
"Dad, how can I find things that no-one else finds?" she asked."You could look where no-one else looks" I replied.And so that's exactly what she did, initially by standing on a wheelie bin to inspect the top of the hedge.

I don't ever want this to be switched off. What I want is for asking, probing, looking, questioning, reviewing, being creative, and exploring to be second nature. I think that these are valuable skills in life, but also that if they become what you just do then, as a tester, you can more often get on with a task and not spend explicit time on the techniques.

I don't ever want this to be switched off. However, where I think that Harnessed Tester has got a point, and it's a strong one, is that it's important to be able to deploy these skills appropriately, to make sensible use of them, to report your findings from it in a way that's acceptable and beneficial to you, to whoever you are dealing with and to the context you find yourself in.

And that, by Weinberg's definitions, is about acting congruently, (see e.g. Managing Teams Congruently) which is one of the challenges I have set for myself and have been working on for the last couple of years.

And I've told my team that I'm doing it.

And I know that they are watching...

Your Testing is a Joke

Thu, 04/28/2016 - 22:00

My second eBook has just been released! It's called Your Testing is a Joke and it's a slight edit of the piece that won the Best Paper prize at EuroSTAR 2015. Here's the blurb:
Edward de Bono, in his Lateral Thinking books, makes a strong connection between humour and creativity. Creativity is key to testing, but jokes? Well, the punchline for a joke could be a violation of some expectation, the exposure of some ambiguity, an observation that no one else has made, or just making a surprising connection. Jokes can make you think and then laugh. But they don't always work. Does that sound familiar? This eBook takes a genuine joke-making process and deconstructs it to make comparisons between aspects of joking and concepts from testing such as the difference between a fault and a failure, oracles, heuristics, factoring, modelling testing as the exploration of a space of possibilities, stopping strategies, bug advocacy and the possibility that a bug, today, in this context might not be one tomorrow or in another. It goes on to wonder about the generality of the observations and what the value of them might be before suggesting ways in which joking can provide useful practice for testing skills. There are some jokes in the eBook, of course. And also an explanation of why any groaning they provoke is a good sign…

And in case you're wondering, my first eBook was called My Software Under Test and Other Animals. It's got some groanworthy moments too.

Cambridge Lean Coffee

Thu, 04/21/2016 - 08:24

We hosted this month's Lean Coffee at Linguamatics. Here are some brief, aggregated comments on topics covered by the group I was in.

Is it fair to refer to testers as "QA"?
  • One test manager talked about how he has renamed his test team as the QA team
  • He has found that it has changed how his team are regarded by their peers (in a positive way).
  • Interestingly, he doesn't call it "Quality Assurance", just "QA"
  • His team have bought into it.
  • Role titles are a lot about perception: one colleague told him that "QA" feels more like "BA".
  • Another suggestion that "QA" could be "Quality Assisting"
  • We covered the angle that (traditional) QA is more about process and compliance than what most of us generally call testing.
  • We didn't discuss the fairness aspect of the original question.

What books have you read recently that contributed something to your testing?
  • The Linguamatics test team has a reading group for Perfect Software going on at the moment.
  • Although I've read the book several times, I always find a new perspective on some aspect of something when I dip into it. This time around it's been meta testing.
  • The book reinforces the message that a lot of testing (and work around the actual testing) is psychology.
  • But also that there is no simple recipe to apply in any situation.
  • We discussed police procedural novels and how the investigation, hypotheses, data gathering in them might be related to our day job

When should we not look at customer bugs?
  • When your product is a platform for your customers to run on, you may find bugs in customer products when testing yours.
  • How far should you go when you find a bug in customer code? 
  • Should you carry on investigating even after you've reported it to them?
  • In the end we boiled this question down to: as a problem-solver, how do you leave an unresolved issue alone?
  • Suggestions: time-box, remember that your interests are not necessarily the company priorities, automate (when you think you need lots of attempts to find a rare case), take the stakeholder's guidance, brainstorm with others, ... 
  • If the customer is still screaming, you should still be working. (An interesting metric.)
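Two of the suggestions above, automating when you need lots of attempts to find a rare case, and time-boxing the investigation, combine naturally in a small sketch. The flaky_operation below is a hypothetical stand-in for whatever intermittent behaviour is being chased; the 1% failure rate and the time budget are illustrative assumptions, not anything from the discussion.

```python
import random
import time

def flaky_operation(rng):
    """Stand-in for the behaviour under investigation: reproduces rarely."""
    return rng.random() < 0.01  # True means the rare case showed up

def hunt_for_rare_case(time_budget_seconds=2.0, seed=42):
    """Retry an operation inside a time-box, recording how many attempts
    it took to reproduce the rare case, or that the budget ran out."""
    rng = random.Random(seed)
    deadline = time.monotonic() + time_budget_seconds
    attempts = 0
    while time.monotonic() < deadline:
        attempts += 1
        if flaky_operation(rng):
            return {"reproduced": True, "attempts": attempts}
    return {"reproduced": False, "attempts": attempts}

if __name__ == "__main__":
    print(hunt_for_rare_case())
```

The time-box does the unblocking: when the budget expires you stop with data (how many clean attempts you made) rather than an open-ended itch, which is exactly the "leave an unresolved issue alone" problem the group boiled the question down to.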


It is Possible to be Professional Without Being in a Profession

Mon, 04/18/2016 - 21:32

This guest post is by Abby Bangser, writing on the recent MEWT 5. I enjoyed Abby's talk on the day, I enjoyed the way she spoke about testing both in debate and in conversation and I am very much enjoying her reflections now.

I'm also enjoying, and admiring, the open attitude of the MEWT organisers to Abby's comments on gender diversity at the workshop, later on in email and in this piece. In particular, I like their eagerness to share their intentions, process and feelings about it at the event and then engage in the wider discussion in the testing community (e.g. 1, 2, 3).

MEWT 5 was my first experience of a small peer conference, and the format provided a very interesting style of sharing and learning. A big thanks to Bill and Vernon for organizing, the Association for Software Testing for helping fund the event, and particularly Simon for identifying a common theme. The conference theme of What is a Professional Tester? was a tough one to prepare for, and it became apparent that other attendees had diverse ways of approaching it as well. Maybe that was what made it an interesting topic!

I want to briefly touch on the fact that we identified and discussed the difference between a capital P professional and a person working in a professional manner. Working in a profession does not, by itself, indicate a person's level of professionalism, and I enjoyed the conversation around what defines professionalism. Based on the discussion at the conference, my own definition of professionalism is achieving high standards of pride and integrity.

Just as with other heated topics related to testing, the term "professional" has a lot of baggage. This can include but is not limited to the idea that as a profession, there may need to be a board that regulates who gains access to, and who can be denied/revoked access to, the profession.

This point seemed to be the biggest reason why testing as a profession had a negative ring to many in the room. This, of course, runs too close to the debate around testing as an activity or a role for this to be omitted from our discussions. I want to use this blog post to dig a little more deeply into this.

In my opinion, the room had a bit of support on both sides of this debate but I think we made progress on why it has become such a heated topic. We seemed to identify why having a defined role of "tester" or "QA" is necessary in some contexts. By clearly articulating these needs as being focused around ease of recruiting and some industry regulations, it became clear to me that role does not need to equate to job title or job specification even though it often does these days.

There was a proposal that in lieu of roles we could look at areas of accountability, but this didn't quite sit well with me as it still has an air of assessing blame. I suggested (and prefer) thinking of roles as hats. Each person has a certain number of hats that they are skilled enough to wear, but are not required to wear them all at all times.

I can’t remember where I first heard this, but I like it for a number of reasons that I want to explore further:

  • Hats are not permanent; they are easy to take on and off: Each person should be able to find the ones that fit them, and be looking for ways to add new ones. While my day-to-day hat may be the testing one, I also enjoy putting on the infrastructure hat, the project management hat and the business analysis hat as the need arises.
  • You look a bit silly if you are wearing more than one: As stated above, changing hats is not only OK, it's pretty much required to be a successful team. But I still put a lot of value in focusing on a single hat at a time. This was referred to as time slicing instead of multi-tasking, and I really liked that distinction.
  • Hats are not unique to a single person: Just because you are wearing a certain hat does not mean someone else can’t put on the same type of hat. Some challenges may take a number of testing-focused people to solve, and others may take a variety of roles. In either case, the team should be able to self-organize.

I want to take the idea of changing hats just one step further. Throughout the day, there was a definite majority of the room who felt that successful team mates (not just testers) are those who step in and get the job done. They do not let job titles/specifications limit what they learn or where they provide support. This was a big reason I felt my topic of the “full stack” tester was well received.

I think that this topic has a lot of really interesting avenues left to personally explore, and I look forward to doing that both off- and on-line. If I had to sum up my current hope for a takeaway, it is that every team member has a responsibility to make their expertise accessible to others AND find ways to access others’ expertise. It is no longer acceptable to silo our team mates based on arbitrary terms like “technical”.

A final and important word on my experience

While I was very glad to be able to attend MEWT 5 and participate in the discussions, I would be remiss not to raise the lack of diversity in the room. While there are many axes on which we could discuss diversity, I am going to speak only of gender diversity here. The story told in the room of MEWT attendees has been told in countless other industries, organizations and events. A notable example is an article in the Guardian which noted that there are more FTSE 100 leaders named John than all the female chief executives and chairs combined. This definitely hit home, since the participants in our room named Dan outnumbered all the women combined by a ratio of 3:1.

There is no single answer on how to support diversity in these circumstances, but we have countless people paving the way who are showing that it is not only possible to succeed in doing so, but to actually thrive. I hope to attend another -EWT event in the future that can promote the kind of diversity shown by many including Rosie Sherry and her work at TestBash, Adi Bolboacă and Maaret Pyhäjärvi with European Testing Conference, and supporting organizations like Speak Easy started by Anne-Marie Charrett and Fiona Charles.


Thu, 04/14/2016 - 05:52

I've said before on here that I enjoy ideas just for their own sake. The CEWT meetings that I run are all about that: exposure to new ideas, novel angles on existing ideas, a view of the evolution, merging, death and resurrection of ideas, and challenges to, or support of, my own ideas and perhaps even my ways of generating or thinking about ideas.

So I was delighted to be invited to attend someone else's workshop, MEWT 5, on the topic What is Professional Testing? Apart from anything else, I had a blast simply coming up with what I thought was an interesting angle. The talks and discussion were wide-ranging but I'll just pick out bits from my notes on three of the threads that ran across the day that I found particularly interesting.

Relativism

Whether someone is acting professionally is entirely subjective. Variables in play include the person doing (the tester in this MEWT's context), the person that's having something done for them (a client; potentially the same person as the tester), the project, the project context. Change any of these and there's a chance that what is considered professional might change. Two different observers might see the same situation differently with respect to the professionalism of the participants. Dan Caseley saw something of this in the responses he got from colleagues when he asked them to characterise testers they'd worked with.

We talked about Jeremy Clarkson, the ex-presenter of Top Gear. He is well-known for controversial comments and actions which, for a long time, the BBC were prepared to put up with, and pay up for. Possibly - presumably - this was because he continued to return enough value to them to justify the risk and expense.

Eventually he did something sufficiently serious that his contract was not renewed. Question: was he acting professionally when he was deliberately controversial? Or was he just unprofessional?

Was it part of his remit, part of his employer's desires, that he was edgy (or plain old-fashioned offensive) in order, for example, to drive ratings and hence revenue? Did the BBC think he was professional? A professional? Acting professionally? Did the audience? Which parts of the audience? Was any given audience member consistent in their view of his professionalism over time?

Adam Knight brought another perspective: are we in the testing community, particularly the vocal and active context-driven community, creating and reinforcing a notion of what it means to be a professional tester? If so, can it be viewed as a reaction against perceived unprofessional "Factory Schoolers"? But wouldn't they in turn, and legitimately from their perspective, consider themselves professional testers and others less so? The debate that followed touched on subjective positions that motivate principles that in turn risk sliding into dogma.

A shared perspective isn't necessarily an invalid perspective, nor harmful, nor one without utility. But a uniform perspective can lead to uniform actions and, as Iain McCowatt put it, often the fringes are where the excellence is.

Commitment

Mohinder Khosla quoted Steve Pressfield on professionalism:

Amateurs let adversity defeat them. The pro thinks differently. He shows up, he does his work, he keeps on truckin’, no matter what.

Elsewhere the conversation included whether it was characteristic of a professional that they will knuckle down and get on with it, see a project to its conclusion, find ways to thrive in adversity, do whatever it takes to get it over the line.

But only up to a point? It was generally agreed that the professional will also recognise when a situation has become untenable, when they are unable to operate with integrity, when the only course of action that retains integrity is to bow out.

Related conversation spinning out of Abby Bangser's talk covered the idea that a professional would be able to ramp up on any topic and not be boxed in by some idea of what they should be. This in turn slid into contrasting testers who define themselves by some area - technical, performance, automation - with developers who will often define themselves by the language they favour - Java, Ruby, C++.

Ethics

For me, this was the most interesting theme of the day. Mohinder referenced a blog by Uncle Bob Martin which plays out two variants of a doctor-patient scenario. In the first the doctor appears to be offhand and uncaring when the patient describes pain in their arm and in the second the doctor refuses to amputate a limb at the patient's request on the basis that it would violate a medical oath.

We revisited this and other medical scenarios, trying to make parallels between professional testing and other professions. It was suggested that the professional position was clear-cut in the arm example: the second doctor is the professional.

I'm less certain about this. I do think that one way we might define a professional (if we wanted to) is in terms of the ethical standards they hold themselves to. But I don't think that having standards or principles unequivocally forces the choice of any particular course of action.

The second doctor might be doing quite the wrong thing here if, for example, their patient has some condition - body dysmorphia, say - that might mean they would harm themselves or even commit suicide were the doctor not to remove the limb. A single abstract ethical principle can govern both courses of action - to operate or not - under appropriate circumstances.

Likewise, two different doctors, operating under the same code of ethics, could legitimately make different decisions in the same circumstances.

Some speakers - Doug Buck and Danny Dainton particularly - described personal ethics that govern their view of what it means to be a professional. These included:
  • striving to do good work
  • taking responsibility
  • doing what you say you will
  • sharing knowledge
It was fascinating to see these enumerated. I am a practitioner and proponent of introspection and it takes no little strength of mind to decant motivations from the actions they provoked and the stories you tell yourself about them. The effort required to do that is seldom wasted, in my experience.

Much like the effort to organise and run a workshop like this. And I'd like to thank Bill, Simon, and Vernon for doing it, and the Association for Software Testing for helping to fund it.
Categories: Blogs

What is What is Professional Testing?

Sun, 04/10/2016 - 10:32

I was invited to the fifth Midlands Exploratory Workshop on Testing (MEWT) yesterday. In it, the MEWTs were asking: 
What is Professional Testing?

Wikipedia has this to say about what a professional is: "A professional is a member of a profession or any person who earns their living from a specified activity." Hmm. So what’s a profession then? "A profession is a vocation founded upon specialized educational training, the purpose of which is to supply disinterested objective counsel and service to others, for a direct and definite compensation, wholly apart from expectation of other business gain." (Also from Wikipedia.) In MEWT 5 we’ll be asking – what exactly does it mean to be a Professional Tester? And, since I already cited the quotes above, you won’t be allowed to. In fact, we’ll expect you to delve deep and reach far and wide into the many and varied facets of what it means to demonstrate professionalism in the face of a rapidly changing technological and sociological landscape.

This post is an edit of the notes I made while I was preparing my talk.

I'm going to use the question What is Professional Testing? as a lens through which to view, inspect and explore itself. I'll imagine a project where I'll take on the role of tester; MEWT will be the client and the call for the workshop will be the brief.

In order to tease out intuitions about professional testing I'll work through the project by making assertions about what the average person might think it would be reasonable to expect from a professional tester in particular situations. Once I have those, I'll wonder whether we can generalise from them into some kind of definition.

It's probably not too controversial to suggest that a professional in most domains should think carefully about knee-jerk responses (except for professional cage fighters, perhaps) so my first action here will be to exploit the information in front of me and read the full brief.

The brief's headline asks What is Professional Testing? but the body of the text includes another question:
... in MEWT 5 we’ll be asking - what exactly does it mean to be a Professional Tester?

Is the workshop asking about professional testers or professional testing? That's two different questions. Or, at least, it might be two different questions. But there's more:

... what it means to demonstrate professionalism

So perhaps there are even three concepts here?

A professional tester should be, reasonable people might think, alert to nuances of meaning. They will know that sometimes in those small gaps, those seemingly innocuous chinks between terms, the fuzzy areola of greyness around the sharp black centre of a label, there are vast conceptual differences lurking. Even Humpty Dumpty knew that semantics matter and was prone to useful aphorisms like this:
When I say "Professional Testing" it means just what I choose it to mean — neither more nor less.

But by the same token, wouldn't reasonable people expect a professional tester to also know that sometimes there are not great differences in people's use of terminology and it's quite common for them to use varied vocabulary to refer to the same thing?

And the professional tester, I guess most of us would agree, should be able to navigate the world keeping ambiguities and multiple possibilities in mind until it's a sensible time to ask those involved whether it matters and whether the range of possibles need to be collapsed.

The brief also includes a definition of profession, with explicit instructions that it should not be quoted ... So here it is:
A profession is a vocation founded upon specialized educational training, the purpose of which is to supply disinterested objective counsel and service to others, for a direct and definite compensation, wholly apart from expectation of other business gain.

The instruction given alongside the definition is a blocking manoeuvre, and noting such things is a characteristic that the man on the Clapham Omnibus is likely to want in his professional tester.

Perhaps in this case it means nothing; perhaps it's simply a pointer that the client wants us to do our own research. But then again perhaps it's a coded message that they want us to start from their point of view, that they expect we will encounter other people with other perspectives on this project and they are trying to constrain our acceptance of them. Or perhaps they don't really care what we do with respect to definitions their intended point being that we shouldn't waste time on things that they would consider unnecessary. Or none of those things. Or all of them.

As an aside, would right-thinking people be upset if, having hired a professional tester and told them not to do something, they still did it? It probably depends on the something, right? What about if the tester did it because they had an instinct that it was the right thing to do for the project? Like I just did ...

To try to understand the motivations better, a professional tester might explore alternative definitions. Definitions are often a good place to start in any case; they can provide anchors or reference points for other information discovered along the way and supply related terms to be investigated in their own turn. On consulting Oxford Dictionaries, I find that there are multiple definitions of professional, and alternative definitions of profession:
Professional: (a) Relating to or belonging to a profession; (b) Engaged in a specified activity as one’s main paid occupation rather than as an amateur; (c) A person competent or skilled in a particular activity

Profession: A paid occupation, especially one that involves prolonged training and a formal qualification

Someone with a little experience of testing might expect that the professional tester would look a little further than the obvious. In this case, when I do that, I find that there are also other senses of profession that we might come across but can disregard for the purposes of the current mission:

Profession: An open but often false claim: "his profession of delight rang hollow"

Ah yes, the mission. Surely a professional tester would want to understand what their mission was, or at least some things that it wasn't, before going too far?

A professional tester attuned to language and blockers, and ready to ask questions at the point when they are likely to add value, might choose now to talk to the client and highlight the risk that the brief does not express the client's intent unambiguously enough to move forward.

And I couldn't think of a strong enough reason not to, so I asked Simon Knight, the content owner for MEWT 5, if he would take some questions. He said he would and so I emailed him a bunch:
Who is asking this question? (or on whose behalf are MEWT asking the question?)
The organisers, on behalf of the testing community

And why?
Because we needed a topic to discuss at MEWT

Do they have a problem to resolve and has this question come up in their attempts to resolve it? If so, what is the problem?
See question below

What do they hope to achieve by asking the question?
Through further discussion we hope to find some heuristics and principles that can be utilised within the testing community to increase the profile of testing by improving behaviours, work-products etc

What do they expect to do (or be able to do) once they have the answer?
We expect to be able to communicate the results of our discussions by sharing materials, slides, thoughts and experiences

One might assume that a professional tester could glean a lot from this, from both the implicit information that surrounds the surface information and that surface information itself.

Look at how terse some of those answers are, look at how efficiently one answer is said to be covered by another, look at how the longer answers to the more open questions are shorter on specifics and look how the questions that could easily be answered directly have been answered directly and apparently without guile.

A professional tester might now attempt to test the relationship he or she is building with the client. Perhaps a drop of humour, edged with respect, illustrating that this is not the first rodeo for either of them? In this case:

Well played. Are you in senior management? ;-)
Cost/benefit analysis suggested spending less than 5 minutes of my time and giving you my most honest response would probably suffice. If you actually do need more info, let me know.

A professional tester might make a working hypothesis that Simon is a pragmatist: he thinks about his answers and why he gives them the way he does, sounds like he is prepared to talk straight, took the questions seriously but cares to guard his time. And, importantly for an information-seeker, he is offering further information should it be desired.

Further, a professional tester - who we can undoubtedly assume is attuned to language, and communication, and the sociology of interpersonal relationships - might note that Simon did not use the word profession, nor any of its derivatives, in his response. And also that he didn't simply copy-paste bits of information from the brief or even refer back to the brief.

Armed with this new information, any professional tester worth his salt would surely reevaluate what they already knew, and try to state the mission (for their own benefit at least):
On the basis that there is no specific problem to solve here, and indeed no specific client to solve it for, questions of definition could justifiably become less part of the mission's own definition and more part of the deliverable, the report, the testing story. And in order to deliver the story the professional tester (and indeed any other kind of tester) needs ways to get to the content of the story. Because you can't have a story with no content, can you?

Unless you're a consultant.

Experience is a great guide to instinct here, but experience comes from practice. We might hope that a professional tester has practised and continues to practise. A professional tester, a practising professional tester, would certainly have battle-hardened tricks, an A-Z, for getting an investigation going, wouldn't they? They'd reach for tools such as analogy, anecdote, and antithesis, to list just a handful of the A's.

Let's practise then; I'll pick a technique that I have found productive in the past: the use of antithesis to shape understanding. What might we contrast professional with? Perhaps amateur.

Remember that Galton talked about the wisdom of the crowd from a statistical perspective over 100 years ago and then think of uTest or 99tests. uTest touts itself as "The Professional Network for Testers". A professional tester might again notice the nuance here. The network is professional but are the testers considered professional? By who? In what way? And what does it mean for a network to be professional?

A professional tester might dig for a little more evidence. The uTest Facebook site says "uTest is the world's largest open community that exists to advance and promote the software testing profession." Another variant.

These kinds of companies sell testing services. They would almost certainly assert that they provide a professional service, if asked, but their projects are staffed by people who are not necessarily professional testers. A professional tester might be expected to make this kind of observation: does professional testing have to be carried out by professional testers?

Is there another kind of opposite of professional? Yes: Unprofessional.

A professional tester might be thought unprofessional because of some aspect of the way the work was carried out: making racist remarks, for example, or punching a member of his team. Is Jeremy Clarkson a professional TV presenter? Might we say he always behaves professionally?

This exposes the difference between being professional and acting professionally. Which does our client care about? And it's not a binary decision, necessarily. Also, testing is an activity. A tester can be doing more than one activity at a time and so acting both professionally and unprofessionally at once. For different observers, the same single action could be perceived as professional or unprofessional.

Another tool I sometimes use, and might hope that our imagined professional tester would also be familiar with, is looking for exemplars. In fact, this exercise exploring What is Professional Testing? itself could be seen that way. So what exemplars of professional testing are out there?

Well, there are bodies such as the Association for Software Testing. Its website says:
The Association for Software Testing (AST) is an international non-profit professional association ... focused on supporting the development of professionalism in software testing, among practitioners and academics, at all levels of experience and education.

There we go again, more variants: "professional association" and "professionalism in software testing".

I am a member of the AST (and good on them for providing grant funding for this MEWT) and one of the things this means is that I agree to abide by their Code of Ethics and Professional Conduct, itself adopted from another body, the Association for Computing Machinery. The code mentions various flavours of professional over 50 times. Some examples:
  •  professional conduct
  •  professional work
  •  professional ethical standards
  •  computing professional
It might seem, to the professional tester, that the policy could be trying to bridge the gap between being and acting by describing claims or considerations that define the being and govern the decisions that lead to actions.

Some of these claims are very strict:

I will ... be fair and take action not to discriminate.

But others are more aspirational:

I will ... strive to achieve the highest quality, effectiveness and dignity in both the process and products of professional work.

You'd think this latter one would be interesting to an alert professional tester, as it means that there is room for a professional to do low-quality work and remain within the ethical code - so long as the intent and efforts made were in line with the code.

The professional tester might note that the code only references testing once - perhaps understandable given that it has come from the ACM:
To minimize the possibility of indirectly harming others, computing professionals must minimize malfunctions by following generally accepted standards for system design and testing.

But notice how it refers to "standards for ... testing." The professional tester might file that away for another time ...

... because the professional tester might be expected to recognise that this is too much detail for the average client in the average situation - think of the man on the Clapham Omnibus - and also to notice when the client is not average, or the situation is not average.

This is not an average situation. The client is a testing workshop and the mission is both broad and deep and concerned largely with the generation of ideas. A professional tester should be able to deal with that, we'd hope, wouldn't we, and deliver something that met the client's brief to the extent that they'd understood it and confirmed it.

Which brings us to delivery. The question was What is Professional Testing?

I think what I've just done here enumerates a set of things that it could plausibly include. But it's not a complete enumeration and a small change in any number of details could mean that anything done here would be considered unprofessional.

Take the question of definitions. Here, my proxy for a professional tester decided not to pursue them. On some other project, where lives were at stake, say, it might be critical to pursue definitions at an early stage. As Potter Stewart famously didn't quite say:
Professional Testing is hard to define but I know it when I see it

But most clients would be justified in wanting something more than that, wouldn't they? So here's my stab at it, based on what I've found out in this exploration of the question:

Professional testing is testing
  • … by someone in a testing role, for a client
  • … at some time, for some project
which might 
  • … include specified practices
  • … involve people nominated as professional testers
  • … involve acting professionally
and which is
  • … unlikely to be amenable to tight definition
  • … but might accept an envelope like the AST’s
  • … and so have the client’s interests foremost
  • … but offers no guarantees about outcome

And to finish, here's a question that I hope that our theoretical professional tester would ask at the end of a piece of work like this: was What is What is Professional Testing? professional testing?
Image: playbuzz

Here are my slides:


Luncheon Meet

Fri, 04/08/2016 - 07:25

As manager of a test team I try to get everyone on the team presenting regularly, to the team itself at least. A significant part of the tester role is communication and practising this in a safe environment is useful. There's double benefit, I like to think, because the information presented is generally relevant to the rest of the team. We most often do this by taking turns to present on product features we're working on.

I encourage new members of the team to find something to talk about reasonably early in their time with us too. This has some additional motivations, for example to help them to begin to feel at ease in front of the rest of us, and to give the rest of us some knowledge of, familiarity with and empathy for them.

I invite (and encourage the team to invite) members of staff from other teams to come to talk to us in our weekly team meeting as well. Again, there are different motivations at play here but most often it is simple data transfer. I used to do this more, and with a side motivation of building links across teams and exposing us to new and perhaps unexpected ideas, or generating background knowledge. But at one of our retrospectives it became clear that some of the testers felt that some of the presentations were not relevant enough to them and they'd rather get on with work.

It pays to listen to your team.

So, along with Harnessed Tester, I set up Team Eating which is a cross-company brown bag lunch. And it's just reached its first anniversary! And, yes, I know its name is a terrible pun. (But I love terrible puns.)

Here's a list of the topics we've had in the first year:

We've had three guest speakers (Chris George, Neil Younger and Gita Malinovska) and, as you can see, there's been a bit of a bias towards testing topics, although that reflects the interests of the speakers more than anything else. There are no constraints on the format (beyond practical ones) so we've had live demos, more traditional talks and, this week, an interactive storytelling workshop.

The response from the company has been good and we've had attendees from all teams and presenters from most. The more popular talks were probably those by Roger, our new (and first) UX specialist. He's done two: first to introduce the company to some ideas about what UX is and then later on how he was beginning to apply his expertise to a flagship project.

I've been really pleased with the atmosphere. There's a positive vibe from people who want to be there listening to their colleagues who have something that they want to share.

One surprise to me has been a reluctance from some of the audience to have their lunch in the meetings. Some people, I now find, consider it impolite to be eating while the presenter is talking. Given the feedback from my team which prompted us to start Team Eating, I was keen that it shouldn't take time away from participants' work and so fitting it into a lunch break seemed ideal.

But although eating is in the name, the team part is much the more important to me and I feel like it's serving the kind of purpose that I wanted in that respect. Quite apart from anything else, I'm personally really enjoying them and so here's to the next 12 months of rapport-building, information-sharing, fun-having Team Eating.


Quality is Value-Value to Somebody

Tue, 04/05/2016 - 22:19
A couple of years ago, in It's a Mandate, I described mandated science: science carried out with the express purpose of informing public policy. There can be tension in such science because it is being directed to find results which can speak to specific policy questions while still being expected to conform to the norms of scientific investigation. Further, such work is often misinterpreted, or its results reframed, to serve the needs of the policy maker.

Last night I was watching a lecture by Harry Collins in which he talks about the relationship between science and democracy and policy. The slide below shows how the values of science and democracy overlap (evidence-based, preference for reproducible results, clarity and others) but how science's results are wrapped by democracy's interests and perspectives and politics to create policies.

I spent some time thinking about how these views can serve as analogies for testing as a service to stakeholders.

But Collins says more that's relevant to that relationship in the lecture - much of it from his book Are We All Scientific Experts Now? In particular he argues that non-scientists' opinions on scientific matters should not generally be taken as seriously as those of scientists, those people who have dedicated their lives to the quest for deep understanding in an area.

He stratifies the non-scientific populace, though, making room for classes of expertise that are hard-earned - he terms them interactional expertise and experience-based expertise - that non-scientists can achieve and which make conversation with scientists, and even meaningful contribution to scientific debate, possible.

I find this a useful layer on the tester-stakeholder picture. Sure, most of our stakeholders might not know as much as we (think we) do about testing, about the craft and techniques of testing. But that doesn't mean that there aren't those who can talk to us knowledgeably about it. This kind of expertise might be from, say, reading (interactional) or perhaps from past testing practice or knowledge of the domain in which testing is taking place (experience-based) and I like to think that I am, and that we should be, open to it being valuable to us and whatever testing effort we're engaged in.