Hiccupps - James Thomas

The Rat Trap

Tue, 06/28/2016 - 13:55

Another of the capsule insights I took from The Shape of Actions by Harry Collins (see also Auto Did Act) is the idea that the value some technology gives us is partly a function of the extent to which we are prepared to accommodate its behaviour.

What does that mean? Imagine that you have a large set of data to process. You might pull it into Excel and start hacking away at its rows and columns, you might use a statistical package like R to program your analysis, you might use command line tools like grep, awk and sed to cut out slices of the data for narrower manual inspection. Each of these will have compromises, for instance:
  • some tools have possibilities for interaction that other tools do not have (Excel has a GUI which grep does not)
  • some tools are more specialised for particular applications (R has more depth in statistics than Excel)
  • some tools are easier to plug into pipelines than others (Linux utilities can be chained together in a way that is apparently trickier in R; a sketch follows this list)
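
To make the pipeline point concrete, here is a minimal Python sketch of the kind of slice-and-inspect step the command-line route offers. The file name and column positions are invented for illustration:

    import csv

    # Roughly equivalent to the shell pipeline:
    #   grep ERROR data.csv | awk -F, '{ print $1, $3 }'
    with open("data.csv", newline="") as f:
        for row in csv.reader(f):
            if any("ERROR" in cell for cell in row):  # the grep step
                print(row[0], row[2])                 # the awk step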

These are clear functional benefits and disbenefits, and surely many others could be enumerated, although they won't be universal but dependent on the user, the task in hand, the data, and so on.

In this book, Collins is talking about a different dimension altogether. He calls it RAT or Repair, Attribution and all That. As I read it, the essential aspect is that users tend to project unwarranted capabilities onto technology and ignore latent shortcomings.

For example, when a cheap calculator returns 6.9999996 for the calculation (7/11) x 11 we repair its result to 7. We conveniently forget this, or just naturally do not notice it, and attribute powers to the calculator which we are in fact providing, e.g. by translating data on the way in (to a form the technology can accept) and out (to correct the technology's flaws).
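
That behaviour is easy to reproduce. Here is a minimal Python sketch simulating a display that holds only seven decimal places (the precision is my assumption; real calculators differ in mechanism):

    # Truncating 7/11 to the display's precision before multiplying back
    # up means the round trip does not return exactly 7.
    seven_elevenths = round(7 / 11, 7)  # 0.6363636, as the display holds it
    result = seven_elevenths * 11
    print(f"{result:.7f}")              # 6.9999996
    print(round(result))                # 7 - the repair we do without noticing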

The "all that" is more amorphous but constitutes the kinds of things that need to be done to put the technology in a position to perform. For example, entering data using very fiddly rubber keys with multiple functions represented by indiscernible graphics, and reading results from a small display which can be hard to read under some lighting conditions.

Because these skills are ubiquitous in humans (for the most part), we think nothing of them. But imagine how useful a calculator would be if a human was not performing those actions.

I had some recent experience of this with a mapping app I bought to use as a Satnav when driving in the USA. I had some functional requirements, including:
  • offline maps (so that I wasn't dependent on a phone or network connection)
  • usable in the UK and the USA (so that I could practise with it at home)
  • usable on multiple devices (so that I can walk with it using my phone, or drive with it on a tablet)

I tried a few apps out and found one that suited my needs based on short experiments done on journeys around Cambridge. Despite accepting this app, I observed that it had some shortcomings, such as:
  • its built-in destination-finding capacity has holes
  • it is inconsistent in notifications about a road changing name or number while driving along it
  • it is idiosyncratic about whether a bend in the road is a turn or not
  • it is occasionally very late with verbal directions
  • its display can be unclear about which option to take at complex junctions

In these cases I am prepared to do the RAT by, for instance, looking up destinations on Google, reviewing a route myself in advance, or asking a passenger for assistance. Why? Because the functionality I want wasn't as well satisfied by the other apps I tried; because in general it is good enough; because overall it is a time-saver; because even flawed it provides some insurance against getting lost; because recovery after taking a wrong turning was generally very efficient; because human navigators are not perfect or bastions of clarity either; because my previous experience of a Satnav (a dedicated piece of hardware) was much, much worse; and because, as I interacted with the software more, I got used to the particular behaviours the app exhibits and was able to interpret its meaning more accurately.

Having just read The Shape of Actions, I found this an interesting experience, meta-experience and user experience. A takeaway for me is that software which can exploit the human tendency to repair and accommodate and all that - which aligns its behaviour with that of its users - gives itself a chance to feel more usable and more valuable more quickly.
Image: https://flic.kr/p/x2M76w

Making the Earth Move

Sat, 06/25/2016 - 09:59

In our reading group at work recently we looked at Are Your Lights On? by Weinberg and Gause. Opinions of it were mixed but I forgive any flaws it may have for this one definition:

  A problem is a difference between things as desired and things as perceived.

It's hard to beat for pithiness, but Michael Bolton's relative rule comes close. It runs:

  For any abstract X, X is X to some person, at some time.

And combining these gives us good starting points for attacking a problem of any magnitude (sketched as code after the list):
  • the things
  • the perception of those things
  • the desires for those things
  • the person(s) desiring or perceiving
  • the context(s) in which the desiring or perceiving is taking place
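
For fun, here is a minimal Python sketch of those levers as a data structure. It is only a toy rendering of the combined definitions, and every field name is my own invention:

    from dataclasses import dataclass

    @dataclass
    class Problem:
        thing: str      # the thing itself
        perceived: str  # things as perceived ...
        desired: str    # ... versus things as desired
        person: str     # X is X to some person ...
        when: str       # ... at some time
        context: str    # ... in some context

        def exists(self) -> bool:
            # No difference between desire and perception, no problem.
            return self.perceived != self.desired

Change any one field and you may shrink, grow or dissolve the problem: that is the lever.
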
Aspiring problem solvers: we have a lever. Let's go and make the earth move for someone!
Image: Wikimedia Commons

Auto Did Act

Fri, 06/17/2016 - 06:54

You are watching me and a machine interacting with the same system. Our actions are, to the extent that you can tell from your vantage point, identical and the system is in the same state at each point in the sequence of actions for both of us. You have been reassured that the systems are identical in all respects that are relevant to this exercise; you believe that all concerned in setting it up are acting honestly with no intention to mislead, deceive, distort or otherwise make a point. The machine and I performed the same actions on the same system with the same visible outcomes.

Are we doing the same task?

This is a testing blog. You are a tester. You have been around the block. More than once. You perhaps think that I haven't given you enough information to answer this question with any certainty. What task is being performed? Are the visible outcomes the only outcomes? To what extent do skill and adaptability form part of the task? To what extent does interpretation on the part of the actor need to happen for the task to be completed successfully? What does success mean in this task anyway? Was the task completed successfully in the examples you watched? What does it mean to be the "same task" here? And from whose perspective?

This is a testing blog. I am a tester. I've also been round the block. More than once. More than twice. I've recently finished reading Harry Collins' The Shape of Actions and, while I'll say up front that I found it reasonably hard-going, it was also highly thought-provoking. This post pulls out just one fragment of the argument made in that book, but one that I find particularly interesting:
  Automation of some task becomes tractable at the point where we become indifferent to the details of it.

There's probably some task that you perform regularly that was once tricky. Maybe it's one of those test setup tasks that involve getting the right components into the right configurations in relation to one another. One of the ones that means finding the right sequence of commands, in the right order, with the right timing, given the other things that are also in the environment.

As you re-ran this task, you began to learn what was significant to the task, which starting conditions influenced which steps, what could be done in parallel and what needed a particular sequence. You used to need to pay attention, exercise skill and judgment, take an active role. These days you just punch keys as efficiently as possible until it's done. You don't look at the options on dialog boxes, you don't inspect the warnings that flash up on the console, you don't even stop checking Twitter on your other monitor. Muscle memory drives the process. Any tacit knowledge you were employing to coax your setup into being has been codified into explicit knowledge. You just need it to be done, and as quickly as possible.

You have effectively automated your task.
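
As a minimal sketch, assuming a containerised setup, the once-tacit knowledge might end up codified as something like this. Every command, component name and delay here is hypothetical:

    import subprocess
    import time

    # The right commands, in the right order, with the right timing,
    # made explicit. All names are invented for illustration.
    STEPS = [
        ["docker", "start", "test-db"],      # must be up before the broker
        ["docker", "start", "test-broker"],  # depends on the database
        ["docker", "start", "test-app"],     # needs both of the above
    ]

    for step in STEPS:
        subprocess.run(step, check=True)  # fail fast if a step errs
        time.sleep(2)                     # the timing you once judged by eye

Once the task looks like this, no judgment remains in the loop - which is Collins' point about indifference to the details.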

As a manager, I recognise an additional layer to this. Sometimes managers don't care (or, perhaps, don't care to think about) how a task is implemented and may thus mistake it for a task which can be automated. But the management perspective can be deceptive. Just because one actor in some task doesn't have to exercise skill, it doesn't mean that no skill is required for any aspect of the task by any actor.

Which reminds me of another Collins book, and a quote that I love from it: distance lends enchantment.
Image: https://flic.kr/p/8tf8q9

Forward Looking

Wed, 06/15/2016 - 18:51

Aleksis Tulonen recently asked me for some thoughts on the future of testing to help in his preparation for a panel discussion. He sent these questions as a jumping-off point:
  • What will software development look like in 1, 3 or 5 years?
  • How will that impact testing approaches?
I was flattered to be asked and really enjoyed thinking about my answers. You can find them at The Future of Testing Part 3 along with those of James Bach, James Coplien, Lisa Crispin, Janet Gregory, Anders Dinsen, Karen Johnson, Alan Page, Amy Phillips, Maaret Pyhäjärvi, Huib Schoots, Sami Söderblom and Jerry Weinberg.
Image: https://flic.kr/p/pLsXJh

Going Postel

Sun, 06/12/2016 - 23:29

Postel's Law - also known as the Robustness Principle - says that, in order to facilitate robust interoperability, (computer) systems should be tolerant of ill-formed input but take care to produce well-formed output. For example, a web service operating under HTTP standards should accept malformed (but interpretable) requests from its clients but return only conformant responses.  The principle became popular in the early days of Internet standards development and, for the historically and linguistically-minded, there's some interesting background in this post by Nick Gall.
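
As a toy illustration in Python (a sketch of the principle, not any real HTTP library's API), Postel's Law amounts to a lenient parser paired with a strict emitter:

    # Be liberal in what you accept ...
    def parse_header(raw: str) -> tuple[str, str]:
        # Tolerate odd case and stray whitespace on the way in.
        name, _, value = raw.partition(":")
        return name.strip().lower(), value.strip()

    # ... and conservative in what you send.
    def emit_header(name: str, value: str) -> str:
        # Emit only the canonical form: Title-Case name, single space.
        canonical = "-".join(part.capitalize() for part in name.split("-"))
        return f"{canonical}: {value}"

    # Malformed-but-interpretable in; conformant out.
    print(emit_header(*parse_header("  content-TYPE :text/html ")))
    # -> Content-Type: text/html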

On the face of it, the idea seems sensible: two systems can still talk - still provide a service - even if one side doesn't get everything quite right. Or, more generally: systems can extract signal from a noisy communication channel, and limit the noise they contribute to it.

The obvious alternative to Postel's Law is strict interpretation of whatever protocol is in effect and no talking - or no service - when one side errs even slightly. Which seems undesirable, right? But, as Joel Spolsky illustrates beautifully, following Postel's Law can lead to unwanted side-effects, confusions and costs over time as successive implementations of a system are built and tested against other contemporary and earlier implementations and bugs are either obscured or backwards compatibility hacks are required.

Talking to one of my team recently, I speculated that - even accepting its shortcomings - Postel's Law can provide a useful heuristic, a useful model, for human communication. We kicked that idea around for a short while and I've since spent a little time mulling it over. Here are some still quite raw thoughts. I'd be interested in others'.

I like to use the Rule of Three when receiving input, when interpreting what I'm hearing or seeing. I find that it helps me get some perspective on whether or not I have reasonable confidence that I understand and gives me a chance to avoid locking into the first meaning I think of. (Although it's still a conscious effort on my part to remember to do it.) I feel like this also gives me the chance to be tolerant.

I will and do accept input that I don't believe is correct without clarification if I'm sufficiently confident that I understand the intended meaning. Perhaps I know the speaker well and know that they have a tendency to say "please QA it" over "please test it" even though the former violates my preferred "standards". Context, as usual, is important. In different contexts I might decide to question the terminology (perhaps we are engaged in a private, friendly conversation) or let it slide (for example, we are in a formal meeting where there are significantly bigger fish to fry).

Unlike most, if not all, computer systems, I am able to have off-channel communications with other parties. I can be tolerant of input that I would prefer not to be and initiate a discussion about it (now or later; based on one instance or only after some number of similar occurrences) somewhere else. My choices are not binary.

I have attempted to head off future potential interoperability problems by trying to agree shared terminology with parties that I need to communicate with. (This is particularly true when we need a language to discuss the problem we are trying to solve.) I have seen enough failures over time in this area that when I recognise the possibility of this being an issue I will consider investing time and effort and emotional capital in this meta-task.

Can we really say that there is a standard for human-human conversations? Simply, no. But there are conventions in different cultures, social situations, times and other contexts.

Despite this, when I'm producing output, I think that I want to conform to some basic standards of communication. (I've written about this kind of thing in e.g. 1 and 2.) There are differences when communicating 1:1 versus 1:n, though. While I can tailor my output specifically for one person that I'm talking to right now, I can't easily do that when, say, speaking in front of, or writing for, a crowd.

I observe that sometimes people wilfully misunderstand, or even ignore, the point made by conversational partners in order to force the dialogue to their agenda or as a device to provoke more information, or for some other reason outside the scope of the content of the conversation itself, such as to show who is the boss. When on the receiving end of this kind of behaviour, is tolerance still a useful approach?

Sometimes I can't be sure that I understand and I have to ask for clarification. (And frequently people ask me for the same.) Some regular causes of my misunderstanding include unexpected terminology (e.g. using non-standard words for things), ambiguity (e.g. not making it clear which thing is being referred to), and insufficient information (e.g. leaving out steps in reasoning chains).

Interestingly, all of these are likely to be relative issues. A speaker with some listener other than me might well have no problem, or different problems, or the same problems but with different responses. An analogy for this might be a web site serving the same page to multiple different browsers. The same input (the HTML) can result in multiple different interpretations (renderings in the browser). In some cases, nothing will be rendered; in other cases, the input might have been tailored for known differences (e.g. IE6 exceptions, but at cost to the writer of the web site); in still other cases something similar to the designer's idea will be provided; elsewhere a dependency (such as JavaScript) will be missing, leading to some significant content not being present.

Spolsky talks about problems due to sequences of implementations of a system. Are there different implementations of me? Or of the speakers I communicate with? Yes, I think there are. We constantly evolve our approaches, recognise and attempt to override our biases, grow our knowledge, forget things, act in accordance with our mood, act in response to others' moods - or our interpretation of them, at least. These changes are largely invisible to those we communicate with, except for the impacts they might have on our behaviour. And interpreting internal changes from external behaviours is not a trivial undertaking.
Image: https://flic.kr/p/BojaF


Cambridge Lean Coffee

Thu, 05/26/2016 - 06:40

This month's Lean Coffee was hosted by Cambridge Consultants. Here are some brief, aggregated comments on topics covered by the group I was in.

What is your biggest problem right now? How are you addressing it?
  • A common answer was managing multi-site test teams (in-house and/or off-shore)
  • Issues: sharing information, context, emergent specialisations in the teams, communication
  • Weinberg says all problems are people problems
  • ... but the core people problem is communication
  • Examples: Chinese whispers, lack of information flow, expertise silos, lack of visual cues (e.g. in IM or email)
  • Exacerbated by time zone and cultural differences; the difficulty of sitting down together, ...
  • Trying to set up communities of practice (e.g. Spotify Guilds) to help communication, iron out issues
  • Team splits tend to be imposed by management
  • But note that most of the problems can exist in a colocated team too

  • Another issue was adoption of Agile
  • Issues: lack of desire to undo silos, too many parallel projects, too little breaking down of tasks, insufficient catering for uncertainty, resources maxed out
  • People often expect Agile approaches to "speed things up" immediately
  • On the way to this Lean Coffee I was listening to Lisa Crispin on Test Talks: "you’re going to slow down for quite a long time, but you’re going to build a platform ... that, in the future, will enable you to go faster"

How do you get developers to be open about bugs?
  • Some developers know about bugs in the codebase but aren't sharing that information. 
  • Example: code reviewer doesn't flag up side-effects of a change in another developer's code
  • Example: developers get bored of working in an area so move on to something else, leaving unfinished functionality
  • Example: requirements are poorly defined and there's no appetite to clarify them so code has ambiguous aims
  • Example: code is built incrementally over time with no common design motivation and becomes shaky
  • Is there a checklist for code review that both sides can see?
  • Does bug triage include a risk assessment?
  • Do we know why the developers aren't motivated to share the information?
  • Talking to developers, asking to be shown code and talked through algorithms can help
  • Watching commits go through and looking at the speed of peer review can suggest places where effort was low

Testers should code; coders should test
  • Discussion was largely about testers in production code
  • Writing production code (even under guidance in non-critical areas) gives insight into the product
  • ... but perhaps it takes testers away from core skills; those where they add value to the team?
  • ... but perhaps testers need to be wary of not simply reinforcing skills/biases we already have?
  • Coders do test! Even static code review is testing
  • Why is coding special? Why shouldn't testers do UX, BA, Marketing, architecting, documentation, ...
  • Testing is doing other people's jobs
  • ... or is it?
  • These kinds of discussion seem to be predicated on the idea that manual testing is devalued
  • Some discussion about whether test code can get worse when developers work on it
  • ... some say that they have never seen that happen
  • ... some say that developers have been seen to deliberately over-complicate such code in order to make it an interesting coding task
  • ... some have seen developers add very poor test data to frameworks 
  • ... but surely the same is true of some testers?
  • We should consider automation as a tool, rather than an all (writing product code) or nothing (manual tester) choice. Use it when it makes sense to, e.g. to generate test data (a small sketch follows this list)
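
A minimal sketch of that kind of tool use in Python; the field names and boundary values are invented for illustration:

    import random
    import string

    # Generate varied, reproducible test data rather than hand-crafting it.
    def random_user(seed: int) -> dict:
        rng = random.Random(seed)      # seeded, so failures can be reproduced
        name_len = rng.randint(1, 40)  # include awkward lengths
        return {
            "name": "".join(rng.choice(string.ascii_letters) for _ in range(name_len)),
            "age": rng.choice([-1, 0, 17, 18, 65, 130]),  # boundary-ish values
        }

    users = [random_user(seed) for seed in range(100)]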

Ways to convince others that testing is adding value
  • Difference between being seen as personally valuable and the test team as a whole adding value
  • Overheard: "Testing is necessary waste"
  • Find issues that your stakeholders care about
  • ... these needn't be in the product, they can be e.g. holes in requirements
  • ... but the stakeholders need to see what the impact of proceeding without addressing the issues could be
  • Be humble and efficient and professional and consistent and show respect to your colleagues and the project
  • Make your reporting really solid - what we did (and didn't); what we found; what the value of that work was (and why)
  • ... even when you find no issues
Image: https://flic.kr/p/5LH9o


Joe Blogs: A Meeting

Tue, 05/10/2016 - 06:50
After Neil Younger's talk on Lean Coffee for team meetings at the Cambridge Tester Meetup last night, we ran a Lean Coffee session. These are my notes on the topics we covered:

How do you teach testing?
  • The question was set up on the premise that "you can teach/there is a lot of available material for software development, but not so much for testing". 
  • This was disputed: was the assertion confusing (availability of material for) learning programming languages with being able to program or being a good developer?
  • When teaching or coaching testing, particularly to non-testers, a detective metaphor is useful.
  • Testing is about a problem-solving mindset ... but so is programming, right?

How do you keep up with technology?
  • When your product uses or interacts with some new technology, how do you get up to speed with it?
  • How do you get sufficient depth to be able to talk to experts on your team?
  • Testing is about learning, whether it's your product or some new technology. Use your testing skills.
  • Learn in small increments, by using the thing you are learning about. 
  • Talk to the experts, probe their knowledge and learn from them. Use your context-free testing skills to try to find cracks in their knowledge, your implementation of the technology etc.
  • Be aware that you'll never know everything about all technologies.
  • Look for meta knowledge: over time you'll see similarities across technologies.
  • Do a Google search for e.g. failures in others' use of the technology and look for analogous cases in your context.

How do you unblock yourself?
  • When you're out of ideas, how do you get a fresh perspective?
  • Pair with a colleague, or even swap roles with a colleague.
  • "Unthink" by going for a walk, removing all your distractions, doing a different piece of work.
  • Look for patterns in your blockages and try to break them. Perhaps you're always blocked just before lunch?
  • Use a different way of getting ideas out, e.g. a mindmap if you're usually a list person.
  • Wear a different hat; use personas to spur ideas.
  • Stop overthinking! Perhaps you are just finished.
  • Use mnemonics, checklists.
  • Look at historical data for the thing you are testing (bug reports, charters, meeting notes, user stories etc).
  • Find a different way in to the problem, e.g. start the application a different way, with a different browser, on a different OS, with the mouse set up so the buttons are backwards etc.

Why did you come tonight?
  • Because it was Neil talking.
  • To learn more about Lean Coffee.
  • Interested in facilitation techniques.
  • To try something different.
  • To make connections to other local testers; I work on my own.
  • Needed something to do in the evenings.
  • To find some different ways of doing things.
  • To have a forum to ask questions.
  • For those Eureka! moments - there's always one.
  • To get inspiration.
  • To speak to other testers about testing.
  • Research, to speak to testers about the tools they use.
  • To see whether the hype about tester meetups is justified.

... and did you get what you wanted out of it?
  • Yes!
Image: https://flic.kr/p/7VUgug

Joe Blogs: Meetings

Tue, 05/10/2016 - 06:40

Neil Younger spoke about using Lean Coffee for his test team meetings at last night's Cambridge Tester Meetup. Inspired by the Cambridge Lean Coffee meetings - which are also part of this meetup and which he has hosted at DisplayLink since the earliest days - he replaced a failing monthly team meeting with Lean Coffee. And he hasn't looked back. Here are a few bullet points pulled out of the talk and subsequent discussion.

The monthly test team meeting was failing for various reasons, including:

  • As the company transitioned from waterfall to agile there were other forums for people to report status
  • ... and these were generally more timely
  • ... and the monthly meeting became mostly repetition.
  • With a cross-site team the physical constraints of the meeting rooms - round a table at each end, with a monitor onto the other team - seemed like a barrier to interaction.

They changed the Lean Coffee format in several ways, including:

  • Cross-site means that post-its are impractical so they use Trello for their proposals, voting and to record the Kanban (To Do, Doing, Done).
  • Often discussions will need to produce actions and these need to be recorded. They have an additional column on their Kanban for this.
  • They felt the need for a way to inject topics that would be discussed without voting - announcements from management, for example. Over time, they've found other ways to disseminate this information instead.
  • A facilitator is nominated to keep track of time, but also sometimes keep discussion on track.

There are some other changes from the earlier meetings too, including:

  • They use bigger screens to view the other site.
  • They sit in a semi-circle facing the screen so that everyone on both sides can see everyone else's face.
  • The Lean Coffee is optional.
  • No topic is off-limits (and topics have included salaries and concerns about the direction the business is taking).
  • The focus is no longer status but more: are we doing the right thing? Can we get better? What do others think of this?

Neil's been very happy with how it's working and shared a few observations, including:

  • They experimented with different time limits and found that between 7 and 10 minutes works well.
  • They tend to have few project-related discussions because there are other forums for that, including another meeting where testers share information about feature work.
  • Dot voting is a kind of self-policing mechanism, preventing people from riding their hobby horse every week or descending into office politics
  • ... and the facilitation means that anything going too far off-topic can be brought back.
  • The monthly cycle gives a chance for issues to be resolved before the meeting takes place.
  • The location of the facilitator changes the focus of the meeting - whichever site facilitates tends to lead more of the discussion.
  • Almost certainly some other format would work just as well
  • ... but the physical changes, optionality and voting are probably key to it working for Neil.
  • Other teams in DisplayLink are taking the format and tweaking it to work for them.
After the talk and questions we ran a Lean Coffee session. Details of that are in Joe Blogs: A Meeting.
Image: https://flic.kr/p/6AEeRu