Hiccupps - James Thomas

A Glass Half Fool

Wed, 07/27/2016 - 07:48
While there's much to dislike about Twitter, one of the things I do enjoy is the cheap and frequent opportunities it provides for happy happenstance.
@noahsussman Only computers?

It's easy to put people in incongruous situations. The art is in not doing it accidentally.
— James Thomas (@qahiccupps) July 27, 2016

Without seeing Noah Sussman's tweet, I wouldn't have had my own thought, a useful thought for me, a handy reminder to myself of what I'm trying to do in my interactions with others, captured in a way I had never considered it before.
Image: https://flic.kr/p/a341bn 

Go, Ape

Thu, 07/21/2016 - 22:44


A couple of years ago I read The One Minute Manager by Ken Blanchard on the recommendation of a tester on my team. As The One Line Reviewer I might write that it's an encouragement to do some generally reasonable things (set clear goals, monitor progress towards them, and provide precise and timely feedback) wrapped up in a parable full of clumsy prose and sprinkled liberally with business aphorisms.

Last week I was lent a copy of The One Minute Manager Meets the Monkey, one of what is clearly a not insubstantial franchise that's grown out of the original book. Unsurprisingly perhaps, given that it is part of a successful series, this book is similar to the first: another shop floor fable, more maxims, some sensible suggestions.

On this occasion, the advice is to do with delegation and, specifically, about managers who pull work to themselves rather than sharing it out. I might summarise the premise as:
  • Managers, while thinking they are servicing their team, may be blocking them.
  • The managerial role is to maximise the ratio of team output to managerial effort.
  • Which means leveraging the team as fully as possible.
  • Which in turn means giving people responsibility for pieces of work.

And I might summarise the advice as:
  • Specify the work to be done as far as is sensible.
  • Make it clear who is doing what, and give work to the team as far as is sensible.
  • Assess risks and find strategies to mitigate them.
  • Review on a schedule commensurate with the risks identified.

And I might describe the underlying conceit as: tasks and problems are monkeys to be passed from one person's back to another. (See Management Time: Who’s Got the Monkey?)  And also as: unnecessary.

So, as before, I liked the book's core message - the advice, to me, is a decent default - but not so much the way it is delivered. And, yes, of course, I should really have had someone read it for me.
Image: Amazon

Iterate to Accumulate

Tue, 07/19/2016 - 06:08
I'm very interested in continual improvement and I experiment to achieve it. This applies to most aspects of my work and life and to the Cambridge Exploratory Workshop on Testing (CEWT) that I founded and now run with Chris George.

After CEWT #1 I solicited opinions, comments and suggestions from the participants and acted on many of them for CEWT #2.

In CEWT #2, in order to provide more opportunity for feedback, we deliberately scheduled some time within the workshop itself for reflection on its content, its format and any other aspects. We used a rough-and-ready Stop, Start, Continue format and here are the results, aggregated and slightly edited for consistency:
Start
  • Speaker to present "seed" questions
  • Closing session (Identify common threads, topics; Share our findings more widely)
  • More opposing views (Perhaps set up opposition by inviting talks? Use thinking hats?)
  • Focused practical workshop (small huddles)
Stop
  • 10 talks too many?
  • Whole day event (About an hour or two too long; Make it half a day)
  • Running CEWT on a Sunday
  • Earlyish start
  • Voting for talks (perhaps group familiar ones?)
  • Don’t make [everyone] present
  • Prep for different length talks
Continue
  • CEWT :)
  • Loved it!
  • Great Venue
  • Good location
  • Lunch, logistics
  • One whole day worked well
  • Varied talks
  • Keep to 10 min talks
  • Short talks & long discussions are good
  • This amount of people
  • Informal, local, open
  • Topic discussions
  • Everyone got a chance to speak
  • Cards for facilitation
  • Flexible agenda
  • Ideas being the priority
Other
  • Energy seemed to drop during the day

Chris and I have now started planning CEWT #3, and so we reviewed the retrospective comments and discussed changes we might make, balanced against our own desires (which, we find, differ in places) and the remit for CEWT itself, which is:
  • Cambridge: the local tester community; participants have been to recent meetups.
  • Exploratory: beyond the topic there's no agenda; bring on the ideas.
  • Workshop: not lectures but discussion; not leaders but peers; not handouts but arms open.
  • Testing: and anything relevant to it.
We first discussed the reported decrease in energy levels towards the end of the day during CEWT #2. We'd felt it too. We considered several options, including reducing the length. But we decided for now to keep to a whole day.

We like that length for several reasons, including: it allows conversation to go deep and broad; it allows time for reflection; it allows time for all to participate; it contributes to the distinction between CEWT and local meetups.

So if we're keeping the same length, what else could we try changing to keep energy levels up? The CEWT #2 feedback suggested a couple of directions:
  • stop: 10 talks too many; Don’t make [everyone] present
  • start: More opposing views; Focused practical workshops
We are personally interested in switching to some kind of group activity inside the workshop, maybe even ad hoc lightning talks, so we're going to do something in that direction. We also - after much deliberation - decided to reduce the number of talks and to inject more potential views by increasing the number of participants to 12.

CEWT #1 had eight participants, CEWT #2 had ten. We felt that the social dynamic at those events was good. We are wary of growing to the point where someone doesn't get a chance to speak on a topic they feel strongly about, or to which they could contribute something interesting. We will retain cards to facilitate discussion but we know from our own experience, and research amongst other peer workshop groups, that we need to be careful here.

At the two CEWTs to date all participants have presented. Personally I like that aspect of it; it feels inclusive, participatory, about peers sharing. But we are aware that asking people to stand up and talk is a deterrent to some and part of what we're about is encouraging local testers. Participation in a discussion might be an easier next step from meetups than speaking, even in a safe environment like CEWT. So we're going to try having only some participants present talks.

But we also don't want to stop providing an opportunity for people who have something to say and would like to practise presenting in front of an interested, motivated, friendly audience. One of the CEWT #2 participants, Claire, blogged on just that point:
I was asked if I wanted to attend CEWT2. I knew this would involve doing a presentation which I wasn't particularly thrilled about, but the topic really had me chomping at the bit to participate. It was an opportunity for me to finally lay some ghosts to rest about a particularly challenging situation I foolishly allowed to affect me to the extent I thought I was a rubbish tester. I deleted my previous blog and twitter as I no longer had any enthusiasm about testing and wasn't even sure it was a path I wanted to continue down. So, despite being nervous at the thought of presenting I was excited to be in a position to elicit the thoughts from other testers about my experience.
...
The reactions, questions and suggestions have healed that last little hole in my testing soul. It was a great experience to be in a positive environment amongst other testers, all with different skills and experiences, who I don't really know, all coming together to talk about testing.

Chris and I talked a lot about how to implement the desire to have fewer talks. Some possibilities we covered:
  • invite only a select set of participants to speak
  • ask for pre-submission of papers and choose a set of them 
  • ask everyone to prepare and use voting on the day to decide who speaks
  • ask people when they sign up whether they want to speak
I have some strong opinions here, opinions that I will take a good deal of persuading to change:
  • I don't want CEWT to turn into a conference.
  • I don't want CEWT to turn into a bureaucracy.
  • I don't want anyone to prepare and not get the opportunity to present.
In CEWT #2 we used dot voting to order the talks and suggested that people be prepared to talk for longer (if their talk was voted early) or shorter (if late). As it happened, we decided on the day to change the schedule to let everyone speak for the same length of time but the two-length talk idea wasn't popular, as the stop "Prep for different length talks" feedback notes.

So this time we're going to try asking people whether they want to present or not, expecting that some will not and we'll have a transparent strategy for limiting the number in any case. (Perhaps simply an ordered list of presenters and reserve presenters, as we do for participants.) We'll have a quota of presenters in mind but we haven't finalised that quite yet; not until we've thought some more about the format of the day.

With some presenters and some non-presenters, we're keen not to encourage or create a kind of two-level event with some people (perceived as) active and some passive. You'll notice I haven't referred to attendees in this post; we are about peers, about participation, and we want participants. Part of the CEWT #3 experiment will be to see how that goes on the day.

Clearly the changes we've chosen to make are not the only possible way to accommodate the feedback we received. But we have, consciously, chosen to make some changes. Our commitment here is to continually look to improve the experience and outcomes from the CEWTs (for the participants, the wider community and ourselves) and we believe that openness, experimentation, feedback and evaluation is a healthy way to do that.

Let's see what happens!
Image: https://flic.kr/p/qvpt1p

Getting the Worm

Thu, 07/14/2016 - 06:57
Will Self wrote about his writing in The Guardian recently:
When I’m working on a novel I type the initial draft first thing in the morning. Really: first thing ... I believe the dreaming and imagining faculties are closely related, such that wreathed in night-time visions I find it possible to suspend disbelief in the very act of making stuff up, which, in the cold light of day would seem utterly preposterous. I’ve always been a morning writer, and frankly I believe 99% of the difficulties novices experience are as a result of their unwillingness to do the same.

I am known (and teased) at work for being up and doing stuff at the crack of dawn and, although I don't aim to wake up early, when it happens I do aim to take advantage. I really do like working (or blogging, or reading) at this time. I feel fresher, more creative, less distracted.

I wouldn't be as aggressive as Self is about others who don't graft along with the sunrise (but he's not alone; even at bedtime I don't have to look hard to find articles like Why Productive People Get Up Insanely Early) because, for me, there are any number of reasons why novice writers, or testers or managers, or others experience difficulties. And I doubt more conscientious attention to an alarm clock would help in most of those cases.

Also, it's known that people differ in chronotype. I came to terms with my larkness a long time ago and now rarely try to go against it by, say, working in the evenings.

How about you?
Image: https://flic.kr/p/4a3yKL

Put a Ring on It

Sat, 07/09/2016 - 11:28

Back in May I responded to the question "Which advice would you give your younger (#Tester) self?" like this:
Learn to deal with, rather than shy away from, uncertainty. #testing https://t.co/Db8Uj1HGyU
— James Thomas (@qahiccupps) May 25, 2016

Last week I was reminded of the question as I found myself sketching the same diagram three times for three different people on three different whiteboards.

The diagram represents my mind's-eye view of a problem space, a zone of uncertainty, a set of unresolved questions, a big cloud of don't know with a rather fuzzy border:


What I'll often want to do with this kind of thing is find some way to remove enough uncertainty that I can make my next move. 
For example, perhaps I am being pressed to make a decision about a project where there are many unknowns. I might try to find some aspect of the project to which I can anchor the rest and then give an answer relative to that. Something like this: "Yes, I agreed an approach in principle with Team X and until their prototype confirms the approach our detailed planning can't start."
I've still got a lot of uncertainty about exactly what I will do. But I found enough firm ground - in this case a statement in principle - that I can move the project forward.
In my head, I think of this as putting a band around the cloud and squeezing it:

And I'm left with a cleaner picture, the band effectively containing the uncertainty. Until the conditions that the band represents are confirmed or rejected I don't have to consider the untidy insides. (Which doesn't mean that I can't if I want to, of course.)

A useful heuristic for me is that if I find myself thinking about the insides too much - if something I expect to be in is leaking out - then probably I didn't tighten the band enough and I need to revisit.

When I'm exploring, the band can represent an assumption that I'm testing rather than some action that I've taken. "if this were true, then the remaining uncertainty would look that way and so I should be able to ..."

I like this way of picturing things even though the model itself doesn't help me with specific next moves. What it does do, which I find valuable, is remind me that when I have uncertainty I don't have to work it out in one shot.
Image: https://flic.kr/p/5EdSAW

P.S. While writing this, I realised that I've effectively written it before, although in a much more technical way: On Being a Test Charter.

Good Conduct

Sat, 07/02/2016 - 07:23

I've been reading Here Comes Everybody by Clay Shirky. It's about how, in around 2007, technology and social media were beginning to change the ways in which people were able to organise themselves. Interesting to me on sociological, leadership and managerial grounds, here's a handful of quotes that I particularly enjoyed:
If you have ever wondered why so much of what workers in large organizations know is shielded from the CEO and vice versa, wonder no longer: the idea of limiting communications, so that they flow only from one layer of the hierarchy to the next, was part of the very design of the system at the dawn of managerial culture. (p. 42, on Daniel McCallum's revolutionary ideas for hierarchical management)

In business, the investment cost of producing anything risks creating a systematic bias in the direction of acceptance of the substandard. You have experienced this effect if you have ever sat through a movie you didn't particularly like in order to "get your money's worth." (p. 249)

If transaction costs are a barrier to taking advantage of the individual with one good idea (and in a commercial context they are), then one possible response is to lower the transaction costs by radically rearranging the relations between the contributors. (p. 252, where transaction costs are the inherent costs of participation)

[Successful peer collaboration needs] a plausible promise, an effective tool, and an acceptable bargain with the users. The promise is the basic "why" ... The tool helps with the "how" ... the bargain sets the rules of the road: if you are interested in the promise and adopt the tools, what can you expect, and what will be expected of you? (p. 260)

I've been listening to a Ted Radio Hour podcast called Trust and Consequences. It's about how, in different contexts, one person's trust for another can facilitate different kinds of outcomes. Interesting to me on sociological, leadership and managerial grounds, here's a handful of quotes that I particularly enjoyed:

[it] is like holding a small bird in your hand. If you hold it too tightly, you crush it. If you hold it too loosely, it flies away. (Charles Hazlewood, on being an orchestra conductor)

I have to say, in those days, I couldn't really even find the bird. (Charles Hazlewood, on his early career as a conductor)

When you're in a position of not trusting, what do you do? You overcompensate. And in my game, that means you over-gesticulate. You end up like some kind of rabid windmill. And the bigger your gesture gets, the more ill-defined, blurry and, frankly, useless it is to the orchestra. (Charles Hazlewood, on the importance of clarity when directing)

We call them leaders because they take the risk before anybody else does. And when we ask them, "why would you do that? Why would you give your blood and sweat and tears for that person?" They all say the same thing - because they would've done it for me. And isn't that the organization we would all like to work in? (Simon Sinek)

Image: https://flic.kr/p/bBnTPF

The Rat Trap

Tue, 06/28/2016 - 13:55

Another of the capsule insights I took from The Shape of Actions by Harry Collins (see also Auto Did Act) is the idea that the value some technology gives us is, in part, a function of the extent to which we are prepared to accommodate its behaviour.

What does that mean? Imagine that you have a large set of data to process. You might pull it into Excel and start hacking away at its rows and columns, you might use a statistical package like R to program your analysis, you might use command line tools like grep, awk and sed to cut out slices of the data for narrower manual inspection. Each of these will have compromises, for instance:
  • some tools have possibilities for interaction that other tools do not have (Excel has a GUI which grep does not)
  • some tools are more specialised for particular applications (R has more depth in statistics than Excel)
  • some tools are easier to plug into pipelines than others (Linux utilities can be chained together in a way that is apparently trickier in R)

These are clear functional benefits and disbenefits, and surely many others could be enumerated, although they won't be universal but dependent on the user, the task in hand, the data, and so on.
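
To make the trade-off concrete, here's a minimal sketch of the kind of slice-and-inspect step that a grep-and-awk pipeline or an Excel filter performs, written in Python; the file name, column names and threshold are hypothetical, purely for illustration:

    import csv

    # Hypothetical input: data.csv with "name,score" rows.
    with open("data.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # The "grep" step: keep only the rows of interest.
    high_scorers = [row for row in rows if float(row["score"]) > 90]

    # The "awk '{print $1}'" step: cut out one column for manual inspection.
    for row in high_scorers:
        print(row["name"])

Each route gets to the same slice of the data; the compromises are in interactivity, specialisation and composability.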

In this book, Collins is talking about a different dimension altogether. He calls it RAT or Repair, Attribution and all That. As I read it, the essential aspect is that users tend to project unwarranted capabilities onto technology and ignore latent shortcomings.

For example, when a cheap calculator returns 6.9999996 for the calculation (7/11) x 11, we repair its result to 7. We conveniently forget this, or just naturally do not notice it, and attribute powers to the calculator which we are in fact providing, e.g. by translating data on the way in (to a form the technology can accept) and out (to correct the technology's flaws).
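
The effect is easy to reproduce in a minimal sketch, assuming (hypothetically) a calculator that truncates each intermediate result to seven significant digits:

    from decimal import Decimal, Context, ROUND_DOWN

    # Hypothetical cheap calculator: truncates intermediates to 7 digits.
    calc = Context(prec=7, rounding=ROUND_DOWN)

    step = calc.divide(Decimal(7), Decimal(11))  # 0.6363636, not 7/11 exactly
    result = step * Decimal(11)                  # 6.9999996

    print(result)         # 6.9999996 -- what the calculator shows
    print(round(result))  # 7         -- the "repair", made explicit

That final rounding is exactly the repair Collins describes: it happens in the user's head, not in the device.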

The "all that" is more amorphous but constitutes the kinds of things that need to be done to put the technology in a position to perform. For example, entering the data using very fiddly rubber keys, whose multiple functions are represented by indiscernible graphics, and reading a small display which can be hard to make out under some lighting conditions.

Because these skills are ubiquitous in humans (for the most part), we think nothing of them. But imagine how useful a calculator would be if a human were not performing those actions.

I had some recent experience of this with a mapping app I bought to use as a Satnav when driving in the USA. I had some functional requirements, including:
  • offline maps (so that I wasn't dependent on a phone or network connection)
  • usable in the UK and the USA (so that I could practise with it at home)
  • usable on multiple devices (so that I can walk with it using my phone, or drive with it on a tablet)

I tried a few apps out and found one that suited my needs based on short experiments done on journeys around Cambridge. Despite accepting this app, I observed that it had some shortcomings, such as:
  • its built-in destination-finding capacity has holes
  • it is inconsistent in notifications about a road changing name or number while driving along it
  • it is idiosyncratic about whether a bend in the road is a turn or not
  • it is occasionally very late with verbal directions
  • its display can be unclear about which option to take at complex junctions

In these cases I am prepared to do the RAT by, for instance, looking up destinations on Google, reviewing a route myself in advance, asking a passenger for assistance in some cases. Why? Because the functionality I want wasn't as well-satisfied by other apps I tried; because in general it is good enough; because overall it is a time-saver; because even flawed it provides some insurance against getting lost; because recovery in the case of taking the wrong turning was generally very efficient; because human navigators are not perfect or bastions of clarity either; because my previous experience of a Satnav (a dedicated piece of hardware) was much, much worse; because while interacting with the software more I started to get used to the particular behaviours that the app exhibits and was able to interpret its meaning more accurately.

Having just read The Shape of Actions, this was an interesting experience and meta-experience and user experience. A takeaway for me is that software which can exploit the human tendency to repair and accommodate and all that - which aligns its behaviour with that of its users - gives itself a chance to feel more usable and more valuable more quickly.
Image: https://flic.kr/p/x2M76w

Making the Earth Move

Sat, 06/25/2016 - 09:59

In our reading group at work recently we looked at Are Your Lights On? by Weinberg and Gause. Opinions of it were mixed but I forgive any flaws it may have for this one definition:

  A problem is a difference between things as desired and things as perceived.

It's hard to beat for pithiness, but Michael Bolton's relative rule comes close. It runs:

  For any abstract X, X is X to some person, at some time.

And combining these gives us good starting points for attacking a problem of any magnitude:
  • the things
  • the perception of those things
  • the desires for those things
  • the person(s) desiring or perceiving
  • the context(s) in which the desiring or perceiving is taking place
Aspiring problem solvers: we have a lever. Let's go and make the earth move for someone!
Image: Wikimedia Commons

Auto Did Act

Fri, 06/17/2016 - 06:54

You are watching me and a machine interacting with the same system. Our actions are, to the extent that you can tell from your vantage point, identical and the system is in the same state at each point in the sequence of actions for both of us. You have been reassured that the systems are identical in all respects that are relevant to this exercise; you believe that all concerned in setting it up are acting honestly with no intention to mislead, deceive, distort or otherwise make a point. The machine and I performed the same actions on the same system with the same visible outcomes.

Are we doing the same task?

This is a testing blog. You are a tester. You have been around the block. More than once. You perhaps think that I haven't given you enough information to answer this question with any certainty. What task is being performed? Are the visible outcomes the only outcomes? To what extent do skill and adaptability form part of the task? To what extent does interpretation on the part of the actor need to happen for the task to be completed successfully? What does success mean in this task anyway? Was the task completed successfully in the examples you watched? What does it mean to be the "same task" here? And from whose perspective?

This is a testing blog. I am a tester. I've also been round the block. More than once. More than twice. I've recently finished reading Harry Collins' The Shape of Actions and, while I'll say up front that I found it reasonably hard-going, it was also highly thought-provoking. This post pulls out just one fragment of the argument made in that book, but one that I find particularly interesting:
Automation of some task becomes tractable at the point where we become indifferent to the details of it.There's probably some task that you perform regularly that was once tricky. Maybe it's one of those test setup tasks that involve getting the right components into the right configurations in relation to one another. One of the ones that means finding the right sequence of commands, in the right order, with the right timing, given the other things that are also in the environment.

As you re-ran this task, you began to learn what was significant to the task, which starting conditions influenced which steps, what could be done in parallel and what needed a particular sequence. You used to need to pay attention, exercise skill and judgment, take an active role. These days you just punch keys as efficiently as possible until it's done. You don't look at the options on dialog boxes, you don't inspect the warnings that flash up on the console, you don't even stop checking Twitter on your other monitor. Muscle memory drives the process. Any tacit knowledge you were employing to coax your setup into being has been codified into explicit knowledge. You just need it to be done, and as quickly as possible.

You have effectively automated your task.

As a manager, I recognise an additional layer to this. Sometimes managers don't care (or, perhaps, don't care to think about) how a task is implemented and may thus mistake it for a task which can be automated. But the management perspective can be deceptive. Just because one actor in some task doesn't have to exercise skill, it doesn't mean that no skill is required for any aspect of the task by any actor.

Which reminds me of another Collins book, and a quote that I love from it: distance lends enchantment.
Image: https://flic.kr/p/8tf8q9

Forward Looking

Wed, 06/15/2016 - 18:51

Aleksis Tulonen recently asked me for some thoughts on the future of testing to help in his preparation for a panel discussion. He sent these questions as a jumping-off point:
  • What will software development look like in 1, 3 or 5 years?
  • How will that impact testing approaches?
I was flattered to be asked and really enjoyed thinking about my answers. You can find them at The Future of Testing Part 3 along with those of James Bach, James Coplien, Lisa Crispin, Janet Gregory, Anders Dinsen, Karen Johnson, Alan Page, Amy Phillips, Maaret Pyhäjärvi, Huib Schoots, Sami Söderblom and Jerry Weinberg.
Image: https://flic.kr/p/pLsXJh

Going Postel

Sun, 06/12/2016 - 23:29

Postel's Law - also known as the Robustness Principle - says that, in order to facilitate robust interoperability, (computer) systems should be tolerant of ill-formed input but take care to produce well-formed output. For example, a web service operating under HTTP standards should accept malformed (but interpretable) requests from its clients but return only conformant responses.  The principle became popular in the early days of Internet standards development and, for the historically and linguistically-minded, there's some interesting background in this post by Nick Gall.

On the face of it, the idea seems sensible: two systems can still talk - still provide a service - even if one side doesn't get everything quite right. Or, more generally: systems can extract signal from a noisy communication channel, and limit the noise they contribute to it.
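
In code, the principle might look something like this minimal sketch for a hypothetical "key: value" header format (the function names and format are illustrative, not any particular standard):

    def parse_header(line):
        """Liberal in what we accept: tolerate stray whitespace and case."""
        key, _, value = line.partition(":")
        return key.strip().lower(), value.strip()

    def emit_header(key, value):
        """Conservative in what we send: always the canonical form."""
        return "%s: %s" % (key.strip().lower(), value.strip())

    # Malformed-but-interpretable input is accepted...
    assert parse_header("  Content-TYPE :text/html ") == ("content-type", "text/html")
    # ...while output is always well-formed.
    assert emit_header("Content-TYPE", " text/html") == "content-type: text/html"

Spolsky's objection, in these terms, is that once clients come to rely on parse_header's tolerance, every future implementation has to reproduce it.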

The obvious alternative to Postel's Law is strict interpretation of whatever protocol is in effect and no talking - or no service - when one side errs even slightly. Which seems undesirable, right? But, as Joel Spolsky illustrates beautifully, following Postel's Law can lead to unwanted side-effects, confusions and costs over time as successive implementations of a system are built and tested against other contemporary and earlier implementations and bugs are either obscured or backwards compatibility hacks are required.

Talking to one of my team recently, I speculated that - even accepting its shortcomings - Postel's Law can provide a useful heuristic, a useful model, for human communication. We kicked that idea around for a short while and I've since spent a little time mulling it over. Here are some still quite raw thoughts. I'd be interested in others.

I like to use the Rule of Three when receiving input, when interpreting what I'm hearing or seeing. I find that it helps me get some perspective on whether or not I have reasonable confidence that I understand and gives me a chance to avoid locking into the first meaning I think of. (Although it's still a conscious effort on my part to remember to do it.) I feel like this also gives me the chance to be tolerant.

I will and do accept input that I don't believe is correct without clarification if I'm sufficiently confident that I understand the intended meaning. Perhaps I know the speaker well and that they have a tendency to say "please QA it" over "please test it" even though the former violates my preferred "standards". Context, as usual, is important. In different contexts I might decide to question the terminology (perhaps we are engaged in a private, friendly conversation) or let it slide (for example, we are in a formal meeting where there are significantly bigger fish to fry).

Unlike most, if not all, computer systems, I am able to have off-channel communications with other parties. I can be tolerant of input that I would prefer not to be and initiate a discussion about it (now or later; based on one instance or only after some number of similar occurrences) somewhere else. My choices are not binary.

I have attempted to head off future potential interoperability problems by trying to agree shared terminology with parties that I need to communicate with. (This is particularly true when we need a language to discuss the problem we are trying to solve.) I have seen enough failures in this area over time that, when I recognise the possibility of it being an issue, I will consider investing time, effort and emotional capital in this meta-task.

Can we really say that there is a standard for human-human conversations? Simply, no. But there are conventions in different cultures, social situations, times and other contexts.

Despite this, when I'm producing output, I think that I want to conform to some basic standards of communication. (I've written about this kind of thing in e.g. 1 and 2) There are differences when communicating 1:1 versus 1:n, though. While I can tailor my output specifically for one person that I'm talking to right now, I can't easily do that when, say, speaking in front of, or writing for, a crowd.

I observe that sometimes people wilfully misunderstand, or even ignore, the point made by conversational partners in order to force the dialogue to their agenda or as a device to provoke more information, or for some other reason outside the scope of the content of the conversation itself, such as to show who is the boss. When on the receiving end of this kind of behaviour, is tolerance still a useful approach?

Sometimes I can't be sure that I understand and I have to ask for clarification. (And frequently people ask me for the same.) Some regular causes of my misunderstanding include unexpected terminology (e.g. using non-standard words for things), ambiguity (e.g. not making it clear which thing is being referred to), insufficient information (e.g. leaving out steps in reasoning chains).

Interestingly, all of these are likely to be relative issues. A speaker with some listener other than me might well have no problem, or different problems, or the same problems but with different responses. An analogy for this might be a web site serving the same page to multiple different browsers. The same input (the HTML) can result in multiple different interpretations (renderings in the browser). In some cases, nothing will be rendered; in other cases, the input might have been tailored for known differences (e.g. IE6 exceptions, but at cost to the writer of the web site); in still other cases something similar to the designer's idea will be provided; elsewhere a dependency (such as JavaScript) will be missing leading to some significant content not being present.

Spolsky talks about problems due to sequences of implementations of a system. Are there different implementations of me? Or of the speakers I communicate with? Yes, I think there are. We constantly evolve our approaches, recognise and attempt to override our biases, grow our knowledge, forget things, act in accordance with our mood, act in response to others' moods - or our interpretation of them, at least. These changes are largely invisible to those we communicate with, except for the impacts they might have on our behaviour. And interpreting internal changes from external behaviours is not a trivial undertaking.
Image: https://flic.kr/p/BojaF
