
Hiccupps - James Thomas

Personal Development

Sun, 04/26/2015 - 12:07
Here are a couple of personal, experience-based posts by developers that I came across recently and really liked. While Warne is up front about the fact that he's talking heuristics, Sanford doesn't explicitly say so but is thinking that way too ("should probably always be"). In both cases the suggestions they make include general team-working advice for those working in software.
Image: https://flic.kr/p/59RmYJ
Categories: Blogs

Book Notes

Sun, 04/12/2015 - 11:10
I've had a good run of reading in the first three months of this year and I thought I'd try to summarise it by picking out one or two thoughts on each book.


Becoming a Technical Leader, Gerald M. Weinberg 

I'm not a vain or pretentious kind of chap, but I did enjoy discovering that leadership is all about moi. That is Motivation, Opportunity and Ideas.  A good technical leader will look to supply these things in the right measures at the right times for their teams; a great technical leader will look at themselves to see which of these is their weakest and find ways to work at improving it.


What Did You Say? The Art of Giving and Receiving Feedback, Charles N. Seashore, Edith W. Seashore, Gerald M. Weinberg

Before I started regular 1-1s with my test team last year, I cast around for ideas and found a series of Manager Tools podcasts which, despite being much longer than it needed to be, I liked a great deal. A few months in, I was looking for ways to grow my own capacity in delivering feedback. One quote from What Did You Say? hits the spot:
Don’t concentrate on giving feedback; concentrate on being congruent – responding to the other person, to yourself, and to the here-and-now situation. Don’t go around hunting for opportunities to give feedback, because feedback is effective only when the need arises naturally out of congruent interactions.
And just last week I came across No Magic Words which aligns really well with the notion (and I hope practice) of 1-1 that I have arrived at.


Are Your Lights On? Donald C. Gause, Gerald M. Weinberg

Weinberg has a knack for pithy yet panoptic definitions. You're undoubtedly familiar with the widely-quoted definition of quality and in this book he provides another:
A problem is a difference between things as desired and things as perceived.
Both desire and perception are up for grabs in any resolution of the problem.

You'll often see an iceberg metaphor being employed to illustrate limited visibility of some bounded-but-extent-unknown larger issue. Reading Weinberg, for me, is like taking that iceberg and picking off a snowflake from the top. It's easy to behold, even easy to hold, easy to comprehend at the high level but, as you begin to think about it, deeper and denser than you imagined. In fact, as you look more closely you realise that that snowflake is itself an iceberg and, actually, it's icebergs all the way down.

The Gift of Time, Fiona Charles (ed.)

This collection of essays is a tribute to (and 75th birthday present for) Weinberg and is strongly influenced by his teachings. Michael Bolton's It's All Relative considers the way in which Weinberg frequently casts his analyses in terms of relationships - as in the problem definition above - and derives from it a generalisation which he calls the relative rule:
A description of something intangible as "X" really means "X, to some person at some time".
This gives us more variables to play with in any situation, and so an approach to some problem might be to look at it from a different time perspective or the viewpoint of a different person. He references a 1980s lecture by Jonathan Miller in which the ability of humour to alter perceptions of a scenario was proposed. A joke's punchline often hinges on revealing that what you thought you knew, or expected, was wrong.

Which led me to ...

Laughing Matters, John Durant and Jonathan Miller (eds.)

Another collection of essays, the first of which is by Miller himself and covers the kind of ground that Bolton mentions. The copy I ordered arrived with perfect timing as I was preparing my proposal for EuroSTAR 2015, entitled Your Testing is a Joke, and dealing with the analogy I see between joking and testing. Try this short extract:
In all procedures of life there are rules of thumb which enable us to go on to 'automatic pilot' ... We depend on the existence of these categories in order to go about our everyday business. Jokes allow us to stand back from these rules and inspect them.
Yes. Yes. Yes. And testing too.

Agile Product Management with Scrum, Roman Pichler

The test team at Linguamatics services three different development teams working in different ways. Most recently, our Solutions team has started using Scrum and the SolDev Manager lent me this book. It didn't tell me much I hadn't already picked up from other reading, but it is clear, concise and readable for non-developers.


The Signal and The Noise, Nate Silver

One the Dev Manager lent me. In very roughly the same kind of area as Nassim Nicholas Taleb and dense, like Weinberg, although without the latter's easy style, this book was occasionally hard-going (for me) but worth persevering with. Each chapter is effectively self-contained so you can easily skip to the next chunk.

Silver tells a great story about Garry Kasparov and the chess computer Deep Blue in a series of high-profile matches in the 1990s. In one game, the computer made an unexpected move which Kasparov noted and took time to analyse afterwards.

The only explanation he could come up with was that the move was motivated by a strategy that suggested Deep Blue was capable of looking ahead more than 20 moves.  This was unheard of and placed the computer at a significant advantage if true. Unfortunately for Kasparov, he subsequently acted as if it were true - attributing ability and chess wisdom to the software which adversely affected how he played against it - while in fact it was simply a bug.

Lauren Ipsum, Carlos Bueno

I was turned onto this one by a Testhead blog post. It's an Alice-like story about a girl who finds herself in a strange land populated by strange characters with strange ideas. The ideas come from computer science, although they are not presented that way in the story, and the characters include Hugh Rustic (oh yes, and there are plenty of other puns) who says, of the problem of buying tomatoes from the market:
 ... to find the best tomato, you'd have to compare them all, right? ... don't waste your time looking for the best tomato when there are plenty that are Good Enough.
I read it to my seven-year-old daughter, who loved it. We had some conceptually deep but still fun and fantastical discussions on the back of it and played with some Logo apps - the book features "poems" which are really Logo-style programs. I've now lent it to my mate to read to his son.


More Secrets of Consulting, Gerald M. Weinberg

Although I talked about Weinberg's writing in terms of icebergs earlier, this book is more like a tornado. It's a whirlwind of rules, parables, anecdotes and insight tied together by the concept of a Wisdom Box, or a toolkit for consultants. Here's one rule:
When a triangle separates you from your data, choose the hypotenuse.
For example, if X says that Y thinks something, and you care what Y thinks, confirm Y's position with Y before proceeding.

Agile Testing, Lisa Crispin, Janet Gregory

Which I've borrowed from one of my team on her recommendation but only just started.
Image: https://flic.kr/p/99NLP and the sites I've linked to for each book.

Who'd've Guest?

Tue, 04/07/2015 - 21:48
I was flattered to be asked to contribute articles to The Testing Planet and the uTest blog recently. Here's what I gave them:
  • Not Sure About Uncertainty: thoughts on knowns and unknowns, quantifiable and unquantifiable risks, testing models incorporating them, and their relationship to risk-based testing.
  • Make Like a Tester: thoughts for testers starting a new job, with butter, elephants, Stephen Hawking and a sponge.
Image: https://flic.kr/p/9k7Wk1

What Are You Like?

Mon, 03/30/2015 - 07:58
As a tester, comparison, and confidence in your ability to compare, are key parts of your toolkit. So, like me, you might find this video humbling. It talks about metamers (where different things look the same) and anti-metamers (where the same things look different) and shows how easy it is to mislead our visual systems even while explaining how the images were created and why the optical illusion works.
Video: https://www.youtube.com/watch?v=tQ9oUfyEc1k

On Being a Test Charter

Sat, 03/21/2015 - 08:53
Managing a complex set of variables, of variables that interact, of interactions that are interdependent, of interdependencies that are opaque, of opacities that are ... well, you get the idea. That can be hard. And that's just the job some days.

Investigating an apparent performance issue recently, I had variables including platform, product version, build type (debug vs release), compiler options, hardware, machine configuration, data sources and more. I was working with both our Dev and Ops teams to determine which of these, in what combination, seemed most likely to explain the observations I'd made.

Up to my neck in a potential combinatorial explosion, it occurred to me that in order to proceed I was adopting an approach similar to the ideas behind chart parsing in linguistics. Essentially:
  • keep track of all findings to date, but don't necessarily commit to them (yet)
  • maintain multiple potentially contradictory analyses (hopefully efficiently)
  • pack all of the variables that are consistent to some level in some scenario together while looking at other factors
Some background: parsing is the process of analysing a sequence of symbols for conformance to a set of grammatical rules. You've probably come across this in the context of computer programs - when the compiler or interpreter rejects your carefully crafted code by pointing at a stupid schoolboy syntax error, it's a parser that's laughing at you.

Programming languages will generally be engineered to reduce ambiguity in their syntax in order to reduce the scope for ambiguity in the meaning of any statement. It's advantageous to a programmer if they can be reasonably certain that the compiler or interpreter will understand the same thing that they do for any given program. (And in that respect Perl haters should get a chuckle from this.)

But natural languages such as English are a whole different story. These kinds of languages are by definition not designed and much effort has been expended by linguists to create grammars that describe them. The task is difficult for several reasons, amongst which is the sheer number of possible syntactic analyses in general. And this is a decent analogy for open-ended investigations.

Here's an incomplete syntactic analysis of the simple sentence Sam saw the baby with the telescope - note that the PP node is not attached to the rest.


The parse combines words in the sentence into structures according to grammatical rules like these, which are conceptually very similar to the kind of grammar you'll see for programming languages such as Python or in, say, the XML specs:
 NP -> DET N
 VP -> V NP
 PP -> P NP
 NP -> DET N PP
 VP -> V NP PP
 S -> NP VP
The bottom level of these structures is the grammatical category of each word in the sentence, e.g. nouns (N), verbs (V), determiners such as "a" or "the" (DET) and prepositions like "in" or "with" (P).

Above this level, a noun phrase (NP) can be a determiner followed by a noun (e.g. the product), a verb phrase (VP) can be a verb followed by a noun phrase (tested the product), and a sentence can be a noun phrase followed by a verb phrase (A tester tested the product).
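As a concrete sketch (entirely my own illustration, not code from the post), the rules above can be encoded as data and applied bottom-up to the category sequence for A tester tested the product. Note that this naive reducer commits to the first rule that matches, which is fine for an unambiguous sentence but is exactly the limitation that chart parsing is designed to remove:

```python
# The grammar rules from the post, encoded as (mother, daughters) pairs.
RULES = [
    ("NP", ("DET", "N")),
    ("VP", ("V", "NP")),
    ("PP", ("P", "NP")),
    ("NP", ("DET", "N", "PP")),
    ("VP", ("V", "NP", "PP")),
    ("S",  ("NP", "VP")),
]

def reduce_once(cats):
    """Replace the first rule-matching run of categories with its
    mother, e.g. ['DET', 'N'] -> ['NP']; None when nothing matches."""
    for mother, daughters in RULES:
        n = len(daughters)
        for i in range(len(cats) - n + 1):
            if tuple(cats[i:i + n]) == daughters:
                return cats[:i] + [mother] + cats[i + n:]
    return None

# "A tester tested the product" -> DET N V DET N
cats = ["DET", "N", "V", "DET", "N"]
while (reduced := reduce_once(cats)) is not None:
    cats = reduced
print(cats)  # ['S'] - the category sequence reduces to a sentence
```

Run on an ambiguous sentence, this greedy strategy would find only one of the possible readings, which is where a chart earns its keep.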

The sentence we're considering is taken from a paper by Doug Arnold:
 Sam saw the baby with the telescope
In a first pass, looking only at the words, we can see that saw is ambiguous between a noun and a verb. Perhaps you'd think that because you understand the sentence it'd be easy to reject the noun interpretation, but there are similar examples with the same structure which are probably acceptable to you, such as:
 Bill Boot the gangster with the gun
So, on the basis of simple syntax alone, we probably don't want to reject anything yet - although we might assign a higher or lower weight to the possibilities. In the case of chart parsing, both are preserved in a single chart data structure which will aggregate information through the parse:
In the analogy with an exploratory investigation, this corresponds to an experimental result with multiple potential causes. We need to keep both in mind but we can prefer one over the other to some extent at any stage, and change our minds as new information is discovered.

As a parser attempts to fit some subset of its rules to a sentence there's a chance that it'll discover the same potential analyses multiple times. For efficiency reasons we'd prefer not to spend time working out that a baby is a noun phrase from first principles over and over.

The chart data structure achieves this by holding information discovered in previous passes for reuse in subsequent ones, but crucially doesn't preclude some other analysis also being found by some other rule. So, although a baby fits one rule well, another rule might say that baby with is a potential, if contradictory, analysis. Both will be available in the chart.
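To make that concrete, here's a toy chart parser in Python (my own illustration, not code from the post; the grammar and lexicon come from the examples above, with Sam entered directly in the lexicon as an NP). The chart dict memoises every (category, span) result so each constituent is built exactly once, yet contradictory analyses over the same span all survive side by side:

```python
# Grammar and lexicon from the examples in the post.
GRAMMAR = {
    "S":  [("NP", "VP")],
    "NP": [("DET", "N"), ("DET", "N", "PP")],
    "VP": [("V", "NP"), ("V", "NP", "PP")],
    "PP": [("P", "NP")],
}
LEXICON = {
    "Sam": {"NP"}, "saw": {"N", "V"}, "the": {"DET"},
    "baby": {"N"}, "with": {"P"}, "telescope": {"N"},
}

def parses(words, cat, i, j, chart):
    """All trees of category `cat` spanning words[i:j]; results are
    memoised in `chart` so each (cat, span) is computed only once."""
    key = (cat, i, j)
    if key in chart:
        return chart[key]
    trees = []
    if j - i == 1 and cat in LEXICON.get(words[i], ()):
        trees.append((cat, words[i]))            # lexical edge
    for daughters in GRAMMAR.get(cat, ()):
        for kids in splits(words, daughters, i, j, chart):
            trees.append((cat,) + kids)          # rule edge
    chart[key] = trees
    return trees

def splits(words, cats, i, j, chart):
    """All ways to analyse words[i:j] as the category sequence `cats`."""
    if not cats:
        return [()] if i == j else []
    results = []
    for k in range(i + 1, j + 1):
        for first in parses(words, cats[0], i, k, chart):
            for rest in splits(words, cats[1:], k, j, chart):
                results.append((first,) + rest)
    return results

sentence = "Sam saw the baby with the telescope".split()
chart = {}
trees = parses(sentence, "S", 0, len(sentence), chart)
print(len(trees))                # 2: the PP attaches to the VP or to the NP
print(len(chart[("NP", 5, 7)]))  # 1: "the telescope", built once, shared by both
```

Real chart parsers (CKY, Earley) organise the same bookkeeping around dotted edges rather than recursion, but the payoff is the same: shared sub-analyses plus peaceful coexistence of contradictory ones.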

Mapping this to testing, we might say that multiple experiments can generate data which supports a particular analysis and we should provide ourselves the opportunity to recognise when data does this, but not be side-tracked into thinking that there are not other interpretations which cross-cut one another.

In some cases of ambiguity in parsing we'll find that high-level analyses can be satisfied by multiple different lower-level analyses. Recall that the example syntactic analysis given above did not have the PP with the telescope incorporated into it. How might it fit? Well, two possible interpretations involve seeing a baby through a telescope or seeing a baby who has a telescope.

This kind of ambiguity comes from the problem of prepositional phrase attachment: which other element in the parse does the PP with the telescope modify: the seeing (so it attaches to the VP) or the baby (so NP)?

Interestingly, at the syntactic level, both of these result in a verb phrase covering the words saw the baby with the telescope and so in any candidate parse we can consider the rest of the sentence without reference to any of the internal structure below the VP. Here's a chart showing just the two VP interpretations:

You can think of this as a kind of "temporary black box" approach that can usefully reduce complexity when coping with permutations of variables in experiments.

The example sentence and grammar used here are trivial: real natural language grammars might have hundreds of rules and real-life sentences can have hundreds of potential parses. In the course of generating, running and interpreting experiments, however, we don't necessarily yet know the words in the sentence, or know that we have the complete set of rules, so there's another dimension of complexity to consider.

I've tried to restrict to simple syntax in this discussion, but other factors will come into play when determining whether or not a potential parse is plausible - knowledge of the frequencies with which particular sets of words occur in combination would be one. The same will be true in the experimental context, for example you won't always need to complete an experiment to know that the results are going to be useless because you have knowledge from elsewhere.

Also, in making this analogy I'm not suggesting that any particular chart parsing algorithm provides a useful way through the experimental complexity, although there's probably some mapping between such algorithms and ways of exploring the test space.  I am suggesting that being aware of data structures that are designed to cope with complexity can be useful when confronted with it.
Images: Doug Arnold, https://flic.kr/p/dxFxXG

Why Not a Testing Standard?

Tue, 03/10/2015 - 07:01

The Cambridge Tester Meetup last night was a discussion on testing standards. Not on the specific question of ISO 29119 (for which see Huib Schoots' excellent resource) but more generally on the possibility of there being a standard at all. It was structured along the lines of Lean Coffee with thoughts and questions being thrown down on post-its and then grouped together for brief discussion.

I've recorded the content of the stickies here with just a little post-hoc editing to remove some duplication or occasionally disambiguate. The titles were just handles to attach stickies to once we had a few in an area and I haven't tried to rationalise them or rearrange the content.

Enhance/Inhibit Testing
  • Testing is a creative process so can't be standardised.
  • Testing doesn't fit into a standard format, so how can there be a standard for it? (Do we mean "good testing" whatever that is?)
  • New tools, technology might not fit into a standard.
  • Standardisation destroys great design ideas by encouraging/forcing overly broad application.
  • Can a general standard really fit specific project constraints?
  • Each tester is different.
  • A standard limits new thinking.
  • Could a standard be simply "Do the best you can in the time you have"?
Who Benefits?
  • Who do certifications serve anyway? What do they want from them?
  • As litigation becomes more prevalent who is protected by a standard? Customers, producers, users?
  • With a standard, companies can be "trusted" (QA-approved sticker).
  • People outside of test are usually very opinionated. Do standards help or hinder?
  • End users care because of the possible added costs.
  • A testing standard would provide false reassurance for companies.
Communication
  • How does an agile team fit in the standard?
  • Too much documentation? Standards may cause the need for more documentation to show compliance.
  • Standard language for communicating test ideas.
  • Divide the testing community - good or bad?
  • Respond to feedback and criticism.
How Much? or Alternatives
  • Do we need an alternative at all?
  • Where are the standards for science, consultancy, product management, development?
  • Use as much or as little of a standard as needed?
  • Could a standard be subjective?
  • Standards for products, or the process of creating products?
  • What else do we need or want instead?
  • Could a standard cover the minimum at least?
  • A standard should be flexible to adapt to project constraints.
Useful Subsets
  • Can a single standard fit different products? (Angry Birds vs nuclear reactor).
  • Uniformisation of some testing (bring up the baseline).
  • There are already some government standards.
  • Infinite space of testing. Can a standard capture that?
  • Can some aspects of testing be covered by standards? If so, which?
Can't we Just Explore?
  • Scientists do. Why can't we? (But what about mandated science?)
  • Approaches and methodologies set out in a commonly understood format could help consistency.
Fear of Being Assessed?
  • Are testers just scared of being evaluated or taking responsibility?
  • I'm too shy.
  • Could it open up lawsuits, blame and other consequences?
  • Should you insure yourself or your company against not conforming to the standard?
  • Anything unstructured used as an addition to, rather than part of, the primary approach. Stops people hiding?
Show Me the Money?
  • What is the motivation of those seeking to create certification? (Rent-seekers?) 
  • It's just to make money for ISO companies.
  • Adds organisation to a "messy" activity.
Certify Testers Not Testing
  • Can you differentiate certifications for testers from certifications for pieces of work? (cf. Kaner)
  • Can you say "product tested by a tester certified to XYZ"?
  • How would recruitment distinguish between testers and checkers?
  • An independent body to audit the testing/tester on real project work? (Who audits the auditor?)
  • Qualification vs certification vs standardisation.
Standards in Other Industries
  • Learn more about standards in other industries and how they dealt with their first standard.
  • Standards in e.g. car safety are based on the result of the work, not the methodology?
  • Universities and schools start teaching testing. Should they teach about the standards?
  • Standards to help produce evidence of testing not just test plans, which are usually fiction.
  • "Informed" standards (courses, talks etc), "in-house" standards?
Misc
  • Are objections to certification really objections to theoretical risks, when in practice it's possible to have something good enough?
  • Would companies without testers need a testing standard?
  • Development standards to be closely linked to testing standards.
  • Easy to find jobs abroad (if there were standardisation).
  • A standard would be good as a product.
  • Would a standard really impact our day-to-day job?
  • Is the standard simply a reason to justify testing?
  • Is the idea of a standard predicated on an outdated idea of testing?
As you can see there was no shortage of ground to cover but, with only a couple of hours, plenty was necessarily treated shallowly or not dug into at all.

To pull out a handful of points that I found particularly interesting:
  • we were not shy about asking questions and we were prepared to aim them at ourselves
  • we bumped into the distinction between certifying the product, the tester and the testing multiple times
  • we didn't really explore what we meant by standards, certification and qualification, and what the differences between them might be
  • while the discussion was entered into with an open mind (which was the remit), there were sometimes implicit assumptions about what a standard must entail (inflexibility, lots of documentation etc.) which were mostly negative; where positives were proposed they tended to be viewed more as possibilities

P.S. There are a few photos.
Image: https://flic.kr/p/7rddEr