Hiccupps - James Thomas

Rodent Controls

Sun, 03/26/2017 - 12:01

So I wasn't intending to blog again about The Design of Everyday Things by Don Norman, but last night I was reading the final few pages and got to a section titled Easy Looking is Not Necessarily Easy to Use. From that:

How many controls does a device need? The fewer the controls, the easier it looks to use and the easier it is to find the relevant controls. As the number of controls increases, specific controls can be tailored for specific functions. The device may look more and more complex but will be easier to use. We studied this relationship in our laboratory ... We found that to make something easy to use, match the number of controls to the number of functions and organize the panels according to function. To make something look like it is easy, minimize the number of controls. How can these conflicting requirements be met simultaneously? Hide the controls not being used at the moment. By using a panel on which only the relevant controls are visible, you minimize the appearance of complexity. By having a separate control for each function, you minimize complexity of use. It is possible to eat your cake and have it, too.

Whether with cake in hand, mouth, or both, I would note that easy saying is not necessarily easy doing. There's still a considerable amount of art in making that heuristic work for any specific situation.

One aspect of that art is deciding what functions it makes sense to expose at all. Fewer functions means fewer controls and less apparent complexity. Catherine Powell's Customer-Driven Knob was revelatory for me on this:

Someone said, "Let's just let the customer set this. We can make it a knob." Okay, yes, we could do that. But how on earth is the customer going to know what value to choose?

As in my first post about The Design of Everyday Things, I find myself drawn to comparisons with The Shape of Actions. In this case, it's the concept of RAT, or Repair, Attribution and all That: the tendency of users to adapt themselves to accommodate the flaws in their technology.

When I wrote about it in The RAT Trap I didn't use the word design once, although I was clearly thinking about it:
A takeaway for me is that software which can exploit the human tendency to repair and accommodate and all that - which aligns its behaviour with that of its users - gives itself a chance to feel more usable and more valuable more quickly.

Sometimes I feel like I'm going round in circles with my learning. But so long as I pick up something interesting - a connection, a reinforcement, a new piece of information, an idea - frequently enough, I'm happy to invest the time.
Categories: Blogs

Can You Afford Me?

Wed, 03/22/2017 - 23:56

I'm reading The Design of Everyday Things by Donald Norman on the recommendation of the Dev manager, and borrowed from our UX specialist. (I have great teammates.)

There's much to like in this book, including
  • a taxonomy of error types: at the top level this distinguishes slips from mistakes. Slips are unconscious and generally due to dedicating insufficient attention to a task that is well-known and practised. Mistakes are conscious and reflect factors such as bad decision-making, bias, or disregard of evidence.
  • discussion of affordances: an affordance is the possibility of an action that something provides, and that is perceived by the user of that thing. An affordance of a chair is that you can stand on it. The chair affords (in some sense is for) supporting, and standing on it utilises that support.
  • focus on mappings: the idea that the layout and appearance of the functional elements significantly impacts on how a user relates them to their outcome. For example, light switch panels that mimic the layout of lights in a room are easier to use.
  • consideration of the various actors: the role of the designer is to satisfy their client; the client may or may not be the user; the designer may view themselves as a proxy user but is almost never a good one; the users are the real users; and there is rarely a single user (type) to be considered.

But the two things I've found particularly striking are the parallels with Harry Collins' thoughts in a couple of areas:
  • tacit and explicit knowledge: or knowledge in the head and knowledge in the world, as Norman has it. When you are new to some task, some object, you have only knowledge that is available in the world about it: those things that you can see or otherwise sense. It is on the designer to consider how the affordances suggested by an object affect its usability. This might mean - for example - following convention, e.g. the push side of doors shouldn't have handles and the plate to push on should be at a point where pushing is efficient.
  • action hierarchies: actions can be viewed at various granularities. In Norman's model they have seven stages and he gives an example of several academics trying to thread an unfamiliar projector. In The Shape of Actions, Collins talks about an experiment attempting to operate a laboratory air pump. Both authors deconstruct the high-level task (operate the apparatus) into sub-tasks, some of which are familiar to some extent - perhaps by analogy, or by theoretical knowledge, or by having seen someone else doing it - and some of which are completely unfamiliar and require explicit experience of that specific task on that specific object.

I love finding connections like this, even if I don't know quite what they can afford me, just yet.


A Field of My Stone

Sat, 03/18/2017 - 09:04

The Fieldstone Method is Jerry Weinberg's way of gathering material to write about, using that material effectively, and using the time spent working the material efficiently. Although I've read much of Weinberg's work, I'd never got round to Weinberg on Writing until last month, after several prompts from one of my colleagues.

In the book, Weinberg describes his process in terms of an extended analogy between writing and building dry stone walls which - to do it no justice at all - goes something like this:
  • Do not wait until you start writing to start thinking about writing.
  • Gather your stones (interesting thoughts, suggestions, stories, pictures, quotes, connections, ideas) as you come across them. 
  • Always have multiple projects on the go at once. 
  • Maintain a pile of stones (a list of your gathered ideas) that you think will suit each project.
  • As you gather a stone, drop it onto the most suitable pile.
  • Also maintain a pile for stones you find attractive but have no project for at the moment.
  • When you come to write on a project, cast your eyes over the stones you have selected for it.
  • Be inspired by the stones, by their variety and their similarities.
  • Handle the stones, play with them, organise them, reorganise them.
  • Really feel the stones.
  • Use stones (and in a second metaphor they are also periods of time) opportunistically.
  • When you get stuck on one part of a project move to another part.
  • When you get stuck on one project move to another project.

The approach felt extremely familiar to me. Here's the start of an email I sent just over a year ago, spawned out of a Twitter conversation about organising work:
I like to have text files around [for each topic] so that as soon as I have a thought I can drop it into the file and get it out of my head. When I have time to work on whatever the thing is, I have the collected material in one place. Often I find that getting material together is a hard part of writing, so having a bunch of stuff that I can play with, re-order etc helps to spur the writing process.

For my blogging I have a ton of open text files:

You can see this one, Fieldstoning_notes.txt and, to the right of it, another called notes.txt which is collected thoughts about how I take notes (duh!) that came out of a recent workshop on note-taking (DUH!) at our local meetup.

I've got enough in that file now to write about it next, but first here's a few of the stones I took from Weinberg on Writing itself:

Never attempt to write what you don’t care about.

Real professional writers seldom write one thing at a time.

The broader the audience, the more difficult the writer’s job.

Most often [people] stop writing because they do not understand the essential randomness involved in the creative process.

... it’s not the number of ideas that blocks you, it’s your reaction to the number of ideas.

Fieldstoning is about always doing something that’s advancing your writing projects.

The key to effective writing is the human emotional response to the stone.

If I’ve been looking for snug fits while gathering, I have much less mortaring to do when I’m finishing.

Don’t get it right; get it written.

"Sloppy work" is not the opposite of "perfection." Sloppy work is the opposite of the best you can do at the time.

Well, That was a Bad Idea

Fri, 03/10/2017 - 11:02

I was listening to Giselle Aldridge and Paul Merrill on the Reflection as a Service podcast one morning this week as I walked to work. They were talking about ideas in entrepreneurship, assessing their value, when and how and with whom they should be discussed, and how to protect them when you do open them up to others' scrutiny.

I was thinking, while listening, that as an entrepreneur you need to be able to filter the ideas in front of you, seeking to find one that has a prospect of returning sufficiently well on an investment. Sometimes, you'll have none that fit the bill and so, in some sense, they are bad ideas (for you, at that time, for the opportunity you had in mind, at least). In that situation one approach is to junk what you have and look for new ideas.  But an alternative is to make a bad idea better.

I was speculating, as I was thinking, and listening, that there might be heuristics for turning those bad ideas into good ideas. So I went looking, and I found an interesting piece by Alan Dix, a lecturer at Birmingham University, titled Silly Ideas:
Thinking about bad ideas is part brainstorming, but more important about learning to think about any idea, new good ideas you have yourself, other people's existing ideas and products.

Dix suggests that deliberately (stating that you are) starting with bad ideas is itself a useful heuristic. You are naturally less attached to bad ideas; they can provoke you into trains of thought that you might not otherwise have encountered; you will have more confidence that you can improve them; and they will likely generate more questions and challenge your assumptions.

He gives a set of questions for interrogating an idea, something like a SWOT analysis:

  • what is good about it? in what contexts? why?
  • what is bad about it? in what contexts? why?
  • in what contexts is it optimal?
  • how would you sell it? how would you defend it?

For me, a key aspect of this analysis is the focus on context. An idea is not necessarily unequivocally good or bad. Aspects of it might be good, or bad, or better or worse, in different scenarios, for different purposes. Dix invites you to discover which aspects might be which, in which contexts, and for which purposes. To draw another parallel, this feels akin to factoring.

Armed with data about the idea, you can now look to change it in ways that keep the good and lose the bad, and maybe change the context or manner in which it's used. Or throw it away completely and use the learnings you have from the domain to make a fresh start with a new idea.

The new idea I like best here is that of starting from a point that you assert is bad. I've encountered similar suggestions before: that functional fixedness can be reduced by starting a familiar process from an unfamiliar situation, that in brainstorming you shouldn't reject ideas as you come up with them, and that of not evaluating until you have options in the rule of three.

I enjoy ideas simply for the sake of having them. I am fascinated by the way in which ideas spawn ideas and by the way that connections are made between them. I celebrate the fact that multiple perspectives on the same idea can differ enormously. I particularly like exploring the ambiguity that can result from those perspectives at work, where the task is often to tease out and then squeeze out ambiguity, or for fun, making up corny puns. And corny puns are never a bad idea.
Image: ITV News

commit -m "My idea is ..."

Tue, 03/07/2017 - 07:30

One of the many things I've learned over the years is that (for me) getting an idea out - on paper, on screen, on a whiteboard, into the air; in words, or pictures, or verbally, ... - is a strong heuristic for making it testable, for helping me to understand it, and for provoking new ideas.

Once out, and once I've forced myself to represent the idea in prose or some other kind of model, I usually find that I've teased out detail in some areas that were previously woolly. I can begin to challenge the idea, to see patterns and gaps between it and the other ideas, to search the space around it and see further ideas, perhaps better ideas.

Once out, I feel like I have freed up some brain room for more thoughts. I don't have to maintain the cloud of things that the idea was when it was only in mind and I was repeatedly running over it to keep it alive, to remember it.

Once out, once I've nailed it down that first time, I have a better idea of how to explain it to someone else. So I can choose to share the idea and get the benefits of others' challenges to it.

Don't get me wrong, I do a lot of thinking in my head. But pulling an idea out, even to somewhere only visible to me, is a commitment to the idea of the idea - which doesn't mean that I think it's a good idea; just that it's worth exploring.

Works For Me

Mon, 03/06/2017 - 23:57

There's a tester position open at Linguamatics just now and, as I've said before on here, this usually means a period of reflection for me.

On this occasion the opening was created by someone leaving - I'm pleased to say that it was on good terms, for a really exciting opportunity, a chance to really make a difference at the new place - and so, although I wasn't looking for change, it has arrived. Again.

Change. There was a time when, for me, change was also challenge. Given the choice of change or not, I would tend to prefer not. These days I like to think I'm more pragmatic. Change comes with potential costs and benefits. The skill is in taking on those changes that return the right benefits at the right costs. When change is not a choice the skill is still in trading benefits and costs, but now of the ways you can think of to implement the change.

Change. My team has changed a lot in the last twelve months or so. We grew rapidly and also changed our structure. You may have noticed that I've written a lot about management in the last few months. This is not unrelated to the changes. In search of potential ways to implement change, and ways to assess the benefits and costs of change, and also the risks associated with changing, I read. And I wrote, as I'm writing now.

Change. Linguamatics, the company I co-founded, is also changing. In fact, now I look back I don't think it's ever stopped changing. In 15 years we've gone from the four of us in one tiny room to 100 of us in a couple of office suites and a handful of other locations on either side of the Atlantic. We're encountering some of the difficulties and some of the beauties of expansion, and we're exploring ways to deal with and embrace them.

If you're a tester and able to be responsive to change, if you can be an agent of change for the better, and if you fancy a change of scenery, perhaps you'd like to consider coming to work with us?

The Testing Kraftwerk

Fri, 02/24/2017 - 10:20

If you're around testers or reading about testing it won't be long before someone mentions models. (Probably after context but some time before tacit knowledge.)

As a new tester in particular, you may find yourself asking what they are exactly, these models. It can be daunting when, having asked to see someone else's model, you are shown a complex flowchart, or a state diagram, or a stack of UML, a multi-coloured mindmap, or a barrage of blocked-out architectural components linked by complex arrangements of arrows with various degrees of dottedness.

But stay strong, my friend, because - while those things and many others can be models and can be useful - models are really just a way of describing a system, typically to aid understanding and often to permit predictions about how the system will behave under given conditions. What's more, the "system" need not be the entirety of whatever you're looking at nor all of the attributes of it.

It's part of the craft of testing to be able to build a model that suits the situation you are in at the time. For some web app, say, you could make a model of a text field, the dialog box it is in, the client application that launched it, the client-server architecture, or the hardware, software and comms stacks that support the client and server.

You can model different bits of the same system at the same time in different ways. And that can be powerful, for example when you realise that your models are inconsistent, because if that's the case, perhaps the system is inconsistent too ...

I'm a simple kind of chap and I like simple models, if I can get away with them. Here's a bunch of my favourite simple model structures and some simple ideas about when I might try to use them, rendered simply.

Horizontal Line

You're looking at some software in which events are triggered by other events. The order of the events is important to the correct functioning of the system. You could try to model this in numerous ways, but a simple way, a foothold, a first approximation, might be to simply draw a horizontal line and mark down the order you think things are happening in.

Well done. There's your model, of the temporal relationship between events. It's not sophisticated, but it represents what you think you know. Now test it by interacting with the system. Ah, you found out that you can alter the order. Bingo, your model was wrong, but now you can improve it. Add some additional horizontal lines to show relationships. Boom!
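A model like this can even be checked mechanically once it's written down. Here's a minimal sketch in Python; the event names and the observed trace are invented for illustration:

```python
# Model of the expected temporal order of events, as a simple sequence.
expected = ["login", "load_profile", "render_page", "fire_analytics"]

def order_violations(expected, observed):
    """Return (earlier, later) pairs seen in the wrong relative order."""
    position = {event: i for i, event in enumerate(expected)}
    seen = []
    violations = []
    for event in observed:
        for earlier in seen:
            if position[event] < position[earlier]:
                violations.append((earlier, event))
        seen.append(event)
    return violations

# An observed trace in which analytics fires before the page renders:
print(order_violations(expected,
    ["login", "load_profile", "fire_analytics", "render_page"]))
# [('fire_analytics', 'render_page')]
```

A non-empty result means the model and the system disagree, and one of them needs updating.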

Edit: Synchronicity. On the day I published this post, Alex Kotliarsky published Plotting Ideas which also talks about how simple structures can help to understand, and extend understanding of, a space. The example given is a horizontal line being used to model types of automated testing.

Vertical Pile

So horizontal lines are great, sure, but let's not leave the vertical out of it. While horizontal seems reasonably natural for temporal data, vertical fits nicely with stacks. That might be technology stacks, or call sequences, process phases, or something else.

Here's an example showing how some calls to a web server go through different libraries, and which might be a way in to understanding why some responses conform to HTTP standards and some don't. (Clue: the ones that don't are the ones you hacked up yourself.)

Scatter Plot

Combine your horizontal and vertical and you've got a plane on which to plot a couple of variables. Imagine that you're wondering how responsiveness of your application varies with the number of objects created in its database. You run the experiments and you plot the results.

If you have a couple of different builds you might use different symbols to plot them both on the same chart, effectively increasing its dimensionality. Shape, size, annotations, and more can add additional dimensions.

Now you have your chart you can see where you have data and you can begin to wonder about the behaviour in those areas where you have no data. You can then arrange experiments to fill them, or use your developing understanding of the application to predict them. (And then consider testing your prediction, right?)

Just two lines and a few dots, a biro and a scrap of paper. This is your model, ladies and gentlemen.
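If biro and paper aren't to hand, the same model fits in a few lines of code. This is only a sketch, in Python, with invented response times; even a crude text rendering is enough to see the shape of the data:

```python
# (object count, response time in seconds) per build; values invented.
build_a = [(1000, 0.2), (5000, 0.5), (20000, 1.4)]
build_b = [(1000, 0.3), (5000, 0.9), (20000, 2.8)]

def ascii_scatter(series, width=40, height=10):
    """Render (x, y) points on a text grid, one symbol per series."""
    points = [(x, y) for _, data in series for x, y in data]
    max_x = max(x for x, _ in points)
    max_y = max(y for _, y in points)
    grid = [[" "] * (width + 1) for _ in range(height + 1)]
    for symbol, data in series:
        for x, y in data:
            col = round(x / max_x * width)
            row = height - round(y / max_y * height)  # y grows upwards
            grid[row][col] = symbol
    return "\n".join("".join(r) for r in grid)

print(ascii_scatter([("o", build_a), ("x", build_b)]))
```

Using a different symbol per build is the cheap extra dimension mentioned above.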

Table

A picture is worth a thousand words, they say. A table can hold its own in that company. When confronted with a mass of text describing how similar things behave in different ways under similar conditions, I will often reach for a table so that I can compare like with like and see the whole space in one view. This kind of approach fits well when there are several things that you want to compare in several dimensions.

In this picture, I'm imagining that I've taken written reports about the work that was done to test some versions of a piece of software against successive versions of the same specification. As large blocks of text, the comparisons are hard to make. Laid out as a table I have visibility of the data and I have the makings of a model of the test coverage.

The patterns that this exposes might be interesting. Also, the places that there are gaps might be interesting. Sometimes those gaps highlight things that were missed in the description, sometimes they're disallowed data points, sometimes they were missed in the analysis. And sometimes they point to an error in the labels. Who knows, this time? Well, you will soon. Because you've seen that the gaps are there you can go and find out, can't you?
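As a sketch of the idea in Python, with invented versions and report notes, the table and its gaps can be derived from whatever records exist:

```python
# Reports indexed by (software version, spec version); data invented.
reports = {
    ("v1.0", "spec-1"): "full pass",
    ("v1.1", "spec-1"): "two failures",
    ("v1.1", "spec-2"): "full pass",
}
software = ["v1.0", "v1.1", "v1.2"]
specs = ["spec-1", "spec-2"]

# Cells with no report are the gaps worth asking about.
gaps = [(sw, spec) for sw in software for spec in specs
        if (sw, spec) not in reports]

for sw in software:
    print(sw, [reports.get((sw, spec), "-") for spec in specs])
print("gaps:", gaps)
```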

I could have increased the data density of this table in various ways. I could have put traffic lights in each populated cell to give some idea of the risk highlighted by the testing done, for example. But I didn't. Because I didn't need to yet and didn't think I'd want to and it'd take more time.

Sometimes that's the right decision and sometimes not. You rarely know for sure. Models themselves, and the act of model building, are part of your exploratory toolkit and subject to the same kinds of cost/value trade-offs as everything else.

A special mention here for Truth tables which I frequently find myself using to model inputs and corresponding outcomes, and which tester isn't fascinated by those two little blighters?
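Truth tables are also cheap to generate mechanically. A sketch in Python, against an invented rule (the feature is enabled only when the installation is licensed and the flag is switched on):

```python
from itertools import product

# An invented rule under test: the feature is on only when the install
# is licensed AND the feature flag is switched on.
def feature_enabled(licensed, switched_on):
    return licensed and switched_on

# Enumerate every combination of the boolean inputs: a truth table.
rows = [(licensed, switched_on, feature_enabled(licensed, switched_on))
        for licensed, switched_on in product([False, True], repeat=2)]

for row in rows:
    print(row)
```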

Circle

The simple circle. Once drawn you have a bipartition, two classes: inside and outside. Which of the users of our system run vi and Emacs? What's that? Johnny is in both camps? Houston, we have a problem.

This is essentially a two variable model, so why wouldn't we use a scatter plot? Good question. In this case, to start with I wasn't so interested in understanding the extent of vi use against Emacs use for a given user base. My starting assumption was that our users are members of one editor religion or another and I want to see who belongs in each set. The circle gives me that. (I also used a circle model for separating work I will do from work I won't do in Put a Ring on It.)

But it also brings Johnny into the open. The model has exposed my incorrect assumption. If Johnny had happened not to be in my data set, then my model would fit my assumptions and I might happily continue to predict that new users would fall into one of the two camps.
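The circle model translates directly into set operations. In this Python sketch the user names are invented; the intersection is what flushes Johnny out:

```python
# Who belongs in which editor camp? Data invented for illustration.
all_users = {"alice", "bob", "carol", "dave", "erin", "johnny"}
vi_users = {"alice", "bob", "johnny"}
emacs_users = {"carol", "dave", "johnny"}

both = vi_users & emacs_users                 # breaks the bipartition assumption
neither = all_users - vi_users - emacs_users  # also worth asking about
print(both, neither)
```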

Implicit in that last paragraph are other assumptions, for example that the data is good, and that it is plotted accurately. It's important to remember that models are not the thing that they model. When you see something that looks unexpected in your model, you will usefully ask yourself these kinds of questions:

  • is the system wrong?
  • is the data wrong?
  • is the model wrong?
  • is my interpretation wrong?
  • ...
Venn Diagram

The circle's elder sister. Where the circle makes two sets, the Venn makes arbitrarily many. I used a Venn diagram only this week - the spur for this post, as it happens - to model a collection of text filters whose functionality overlaps. I wanted to understand which filters overlapped with each other. This is where I got to:

In this case I also used the size of the circles as an additional visual aid. I think filter A has more scope than any of the others so I made it much larger. (I also used a kind of Venn diagram model of my testing space in Your Testing is a Joke.)

And now I have something that I can pass on to others on my team - which I did - and perhaps we can treat each of the areas on the diagram as an initial stab at a set of equivalence classes that might prove useful when testing this component.
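The overlap question itself can also be asked in code: model each filter by the set of sample inputs it matches and intersect pairwise. A sketch, with invented filter names and sample inputs:

```python
from itertools import combinations

# Each filter is modelled by the set of sample inputs it matches.
filters = {
    "A": {1, 2, 3, 4, 5, 6},  # the broadest filter
    "B": {2, 3},
    "C": {5, 7},
    "D": {8},
}

# Pairwise overlaps; empty intersections are dropped.
overlaps = {(p, q): filters[p] & filters[q]
            for p, q in combinations(filters, 2)
            if filters[p] & filters[q]}
print(overlaps)
```

Non-empty intersections are the regions that need drawing (and testing); filters that intersect nothing, like D here, stand alone.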

In this post, I've given a small set of model types that I use frequently. I don't think that any of the examples I've given couldn't be modelled another way and on any given day I might have modelled them other ways. In fact, I will often hop between attempts to model a system using different types as a way to provoke thought, to provide another perspective, to find a way in to the problem I'm looking at.

And having written that last sentence I now see that this blog post is the beginnings of a model of how I use models. But sometimes that's the way it works too - the model is an emergent property of the investigation and then feeds back into the investigation. It's all part of the craft.
Image: In Deep Music Archive

Edit: While later seeking some software to draw a more complex version of the Venn Diagram model I found out that what I've actually drawn here is an Euler Diagram.

Before Testing

Mon, 02/20/2017 - 07:25

I happened across Why testers? by Joel Spolsky at the weekend. Written back in 2010, and - if we're being sceptical - perhaps a kind of honeytrap for Fog Creek's tester recruitment process, it has some memorable lines, including:
what testers are supposed to do ... is evaluate new code, find the good things, find the bad things, and give positive and negative reinforcement to the developers.

Otherwise it’s depressing to be a programmer. Here I am, typing away, writing all this awesome code, and nobody ...

... really need very smart people as testers, even if they don’t have relevant experience. Many of the best testers I’ve worked with didn’t even realize they wanted to be testers until someone offered them the job.

The job advert that the post points at is still there and reinforces the focus on testing as a service to developers and the sentiments about feedback, although it looks like, these days, they do require test experience.

It's common to hear testers say that they "fell into testing" and I've offered jobs to, and actually managed to recruit from, non-tester roles. On the back of reading Spolsky's blog I tweeted this:
#Testers, one tweet please. What did you do before testing? What's the most significant difference (in any respect) between that and now? — James Thomas (@qahiccupps) February 18, 2017

And, while it's a biased and self-selected sample (limited to those who happen to be close enough to me in the Twitter network, those who happened to see it in their timeline, and those who cared to respond) with no statistical validity, I enjoyed reading the responses and wondering about patterns.

Please feel free to add your own story about the years BT (Before Testing) to either the thread or the comments here.

People are Strange

Tue, 02/14/2017 - 19:01

Managers. They're the light in the fridge: when the door is open their value can be seen. But when the door is closed ... well, who knows?

Johanna Rothman and Esther Derby reckon they have a good idea. And they aim to show, in the form of an extended story following one manager as he takes over an existing team with problems, the kinds of things that managers can do and do do and - if they're after a decent default starting point - should consider doing.

What their book, Behind Closed Doors, isn't - and doesn't claim to be - is the answer to every management problem. The cast of characters in the story represent some of the kinds of personalities you'll find yourself dealing with as a manager, but the depth of the scenarios covered is limited, the set of outcomes covered is generally positive, and the timescales covered are reasonably short.

Michael Lopp, in Managing Humans, implores managers to remember that their staff are chaotic beautiful snowflakes. Unique. Individual. Special. Jim Morrison just says, simply, brusquely, that people are strange. (And don't forget that managers are people, despite evidence to the contrary.)

Either way, it's on the manager to care to look and listen carefully and find ways to help those they manage to be the best that they can be in ways that suit them. Management books necessarily use archetypes as a practical way to give suggestions and share experiences, but those new to management especially should be wary of misinterpreting the stories as a how-to guide to be naively applied without consideration of the context.

What Behind Closed Doors also isn't, unlike so much writing on management, is dry, or full of heroistic aphorisms, or preachy. In fact, I found it an extremely easy read for several reasons: it's well-written; it's short; the story format helps the reader along; following a consistent story gives context to situations as the book progresses; sidebars and an appendix keep detail aside for later consumption; I'm familiar with work by both of these authors already; I'm a fan of Jerry Weinberg's writing on management and interpersonal relationships and this book owes much to his insights (he wrote the foreword here); I agree with much of the advice.

What I found myself wanting - and I'd buy Rothman and Derby's version of this like a shot - is more detailed versions of some of the dialogues in this book with commentary in the form of the internal monologues of the participants. I'd like to hear Sam, the manager, thinking through the options he has when trying to help Kevin to learn to delegate, and understand how he chose the approach that he took. I'd like to hear Kevin trying to work out what he thinks Sam's motives are and perhaps rejecting some of Sam's premises. I'd also like to see a deeper focus on a specific relationship over an extended period of time, with failures, and techniques for rebuilding trust in the face of them.

But while I wait for that, here's a few quotes that I enjoyed, loosely grouped.

On the contexts in which management takes place:
Generally speaking, you can observe only the public behaviors of managers and how your managers interact with you.

Sometimes people who have never been in a management role believe that managers can simply tell other people what to do and that’s that.

The higher you are in the organization, the more other people magnify your reactions.

Because managers amplify the work of others, the human costs of bad management can be even higher than the economic costs.

Chaos hides problems—both with people and projects. When chaos recedes, problems emerge.

The moral of this fable is: Focus on the funded work.

On making a technical contribution as a manager:
Some first-level managers still do some technical work, but they cannot assign themselves to the critical path.

It’s easier to know when technical work is complete than to know when management work is complete.

The more people you have in your group, the harder it is to make a technical contribution.

The payoff for delegation isn’t always immediate.

It takes courage to delegate.

On coaching:
You always have the option not to coach. You can choose to give your team member feedback (information about the past), without providing advice on options for future behavior.

Coaching doesn’t mean you rush in to solve the problem. Coaching helps the other person see more options and choose from them.

Coaching helps another person develop new capability with support.

And it goes without saying, but if you offer help, you need to follow through and provide the help requested, or people will be disinclined to ask again.

Helping someone think through the implications is the meat of coaching.

On team-building:
Jelled teams don’t happen by accident; teams jell when someone pays attention to building trust and commitment.

Over time they build trust by exchanging and honoring commitments to each other.

Evaluations are different from feedback.

A one-on-one meeting is a great place to give appreciations.

[people] care whether the sincere appreciation is public or private ... It’s always appropriate to give appreciation for their contribution in a private meeting.

Each person on your team is unique. Some will need feedback on personal behaviors. Some will need help defining career development goals. Some will need coaching on how to influence across the organization.

Make sure the career development plans are integrated into the person’s day-to-day work. Otherwise, career development won’t happen.

"Career development" that happens only once a year is a sham.On problem solving:
Our rule of thumb is to generate at least three reasonable options for solving any problem.

Even if you do choose the first option, you’ll understand the issue better after considering several options.

If you’re in a position to know a problem exists, consider this guideline for problem solving: the people who perform the work need to be part of the solution.

We often assume that deadlines are immutable, that a process is unchangeable, or that we have to solve something alone. Use thought experiments to remove artificial constraints,

It’s tempting to stop with the first reasonable option that pops into your head. But with any messy problem, generating multiple options leads to a richer understanding of the problem and potential solutions

Before you jump to solutions, collect some data. Data collection doesn’t have to be formal. Look for quantitative and qualitative data.

If you hear yourself saying, “We’ll just do blah, blah, blah,” Stop! “Just” is a keyword that lets you know it just won’t work.

When the root cause points to the original issue, it’s likely a system problem.

On managing:
Some people think management is all about the people, and some people think management is all about the tasks. But great management is about leading and developing people and managing tasks.

When managers are self-aware, they can respond to events rather than react in emotional outbursts.

And consider how your language affects your perspective and your ability to do your job.

Spending time with people is management work.

Part of being good at [Managing By Walking Around and Listening] is cultivating a curious mind, always observing, and questioning the meaning of what you see.

Great managers actively learn the craft of management.
Categories: Blogs

The Bug in Lessons Learned

Fri, 02/10/2017 - 21:52

The Test team book club read Lessons Learned in Software Testing the other week. I couldn't find my copy at the time but Karo came across it today, on Rog's desk, and was delighted to tell me that she'd discovered a bug in it...
Categories: Blogs


Sun, 02/05/2017 - 06:36

What Really Happened in Y2K? That's the question Professor Martyn Thomas is asking in a forthcoming lecture and in a recent Chips With Everything podcast, from which I picked a few quotes that I particularly enjoyed.

On why choosing to use two digits for years was arguably a reasonable choice, in its time and context:
The problem arose originally because when most of the systems were being programmed before the 1990s computer power was extremely expensive and storage was extremely expensive. It's quite hard to recall that back in 1960 and 1970 a computer would occupy a room the size of a football pitch and be run 24 hours a day and still only support a single organisation.

It was because those things were so expensive, because processing was expensive and in particular because storage was so expensive that full dates weren't stored. Only the year digits were stored in the data.

On the lack of appreciation that, despite the eventual understated outcome, Y2K exposed major issues:
I regard it as a signal event. One of these near-misses that it's very important that you learn from, and I don't think we've learned from it yet. I don't think we've taken the right lessons out of the year 2000 problem. And all the people who say it was all a myth prevent those lessons being learned.

On what bothers him today:
I'm [worried about] cyber security. I think that is a threat that's not yet being addressed strategically. We have to fix it at the root, which is by making the software far less vulnerable to cyber attack ... Driverless cars scare the hell out of me, viewed through the lens of cyber security.

We seem to feel that the right solution to the cyber security problem is to train as many people as we can to really understand how to look for cyber security vulnerabilities and then just send them out into companies ... without recognising that all we're doing is training a bunch of people to find all the loopholes in the systems and then encourage companies to let them in and discover all their secrets.

Similarly, training lots of school students to write bad software, which is essentially what we're doing by encouraging app development in schools, is just increasing the mountain of bad software in the world, which is a problem. It's not the solution.

On building software:
People don't approach building software with the same degree of rigour that engineers approach building other artefacts that are equally important. The consequence of that is that most software contains a lot of errors. And most software is not managed very well.

One of the big problems in the run-up to Y2K was that most major companies could not find the source code for their big systems, for their key business systems. And could not therefore recreate the software even in the form that it was currently running on their computers.

The lack of professionalism around managing software development and software was revealed by Y2K ... but we still build software on the assumption that you can test it to show that it's fit for purpose.

On the frequency of errors in software:
A typical programmer makes a mistake in, if they're good, every 30 lines of program. If they're very, very good they make a mistake in every 100 lines. If they're typical it's in about 10 lines of code. And you don't find all of those by testing.

On his prescription:
The people who make the money out of selling us computer systems don't carry the cost of those systems failing. We could fix that. We could say that in a decade's time - to give the industry a chance to shape up - we were going to introduce strict liability in the way that we have strict liability in the safety of children's toys, for example.
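As a small sketch of the storage decision discussed earlier - hypothetical helper names, not anything from the lecture - here is how two-digit years break at the century boundary, along with "windowing", one of the remediations commonly applied during Y2K fixes:

```python
# Sketch of the two-digit year problem: with only the year digits stored,
# comparisons and arithmetic silently break at the century boundary.
def years_between(start_yy: int, end_yy: int) -> int:
    """Naive two-digit arithmetic, as pre-1990s systems typically did it."""
    return end_yy - start_yy

assert years_between(85, 99) == 14  # fine within the century
assert years_between(85, 0) == -85  # wrong across it: should be 15

# One common remediation, "windowing", infers the century from a pivot year:
def expand_year(yy: int, pivot: int = 50) -> int:
    """Expand a two-digit year to four digits; below the pivot means 20xx."""
    return 2000 + yy if yy < pivot else 1900 + yy

assert expand_year(0) - expand_year(85) == 15  # now correct
```

Note that windowing only defers the problem: the pivot assumes no date in the data falls outside a single 100-year window.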
Categories: Blogs

You Rang!

Fri, 02/03/2017 - 00:01

So, last year I blogged about an approach I take to managing uncertainty: Put a Ring on It.

The post was inspired by a conversation I'd had with several colleagues in a short space of time, where I'd described my mental model of a band I put around all the bits of the problem I can't deal with now, leaving behind the bits that are tractable.

After doing that, I can proceed, right now, on whatever is left. I've encircled the uncertainty with a dependency on some outside factor, and I don't need to think about the parts inside it until the dependency is resolved. (Or the context changes.)

And this week I was treated to a beautifully simple implementation of it, from one of those colleagues. In a situation in which many things might need doing - but the number and nature is unknown - she controlled the uncertainty with a to-do list and a micro-algorithm:
  • do the thing now, completely, only if it's easy and important
  • do a pragmatic piece now, if it's needed but not easy, and revisit it later (via the list) 
  • otherwise, put it on the list
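A rough sketch of that micro-algorithm in code - the task flags and names here are my illustrative assumptions, not her actual list:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """An illustrative task; the flags mirror the bullets above."""
    name: str
    easy: bool = False
    important: bool = False
    needed: bool = False
    done: bool = False
    partial: bool = False

def triage(task: Task, todo: list) -> None:
    """Do it now, do a slice now, or put it on the list."""
    if task.easy and task.important:
        task.done = True        # easy and important: finish it now, completely
    elif task.needed:
        task.partial = True     # needed but not easy: a pragmatic piece now...
        todo.append(task)       # ...and revisit the rest via the list
    else:
        todo.append(task)       # otherwise: just put it on the list

todo: list = []
triage(Task("rename the button", easy=True, important=True), todo)
triage(Task("fix the flaky build", needed=True), todo)
triage(Task("nice-to-have refactor"), todo)
# todo now holds the flaky build (for revisiting) and the refactor
```

The point the sketch tries to capture is that only the middle branch both acts now and defers: the pragmatic piece buys progress while the list rings the remaining uncertainty.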

Uncertainty encountered. And ringed with a list. And mental energy conserved. And progress consistently made.
Categories: Blogs

Elis, Other People

Tue, 01/31/2017 - 11:40

I've written before about the links I see between joking and testing - about the setting up of assumptions, the reframing, and the violated expectations, amongst other things. I like to listen to The Comedian's Comedian podcast because it encourages comics to talk about their craft, and I sometimes find ideas that cross over, or just provoke a thought in me. Here are a few quotes that popped out of the recent Elis James episode.

On testing in the moment:
Almost everyone works better with a bit of adrenaline in them. In the same way that I could never write good stuff in the house, all of my best jokes come within 20 minutes to performing or within 20 minutes of performing ... 'cos all of my best decisions are informed by adrenaline.

On the value of multiple perspectives, experiences, skills:

I've even tried sitting in the house and bantering with myself like I'm in a pub because I hate the myth that standups are all these weird auteurs and we should do everything on our own. The thing with being bilingual is that I have a different personality in Welsh and English. My onstage persona is different.

On the gestalt possibilities of collaboration:
I love collaborating ... being in a room with another comic ... that's the funnest part of comedy, bouncing off each other and developing an idea together. The difference between thinking of an idea on your own and wondering if it's funny, and then immediately asking the person next to you, who's a trusted friend whose opinion you respect, and then they say "yeah!" and say one little tweak and it sends you off down a completely different path. The king of this is Henry Packer. If you take anything to him he will give you an angle that is from such a bizarre place and suddenly it will be a great routine.

On actively looking for variety, especially similar-but-different:
I will occasionally write out a routine longhand and I'll put all the words into a thesaurus. The thing with a thesaurus - it's an extraordinary tool - is that the reason that 'seldom' and 'doggerel' are funny is that you know what they mean but you'd never use them. They're not quite on the tip of your tongue, they're sort of half-way back.
Categories: Blogs