This month's Lean Coffee was hosted by DisplayLink. Here are some brief, aggregated comments and questions on topics covered by the group I was in.
How to spread knowledge between testers in different teams, and how often should people rotate between teams?
- How to know what is the right length of time for someone to spend in a team?
- When is someone ready to move on?
- How do you trade off e.g. good team spirit against overspecialisation?
- When should you push someone out of their comfort zone and show them how much they don't know?
- Fortnightly test team meetings playing videos of conference talks.
- Secondment to other teams.
- Lean Coffee for the test team.
- Daily team standup, pairing, weekly presentations, ad hoc sharing sessions after standup.
- Is there a desire to share?
- Yes. Well, they all want to know more about what the others do.
- People don't want to be doing the same thing all the time.
- Could you rotate the work in the team rather than rotate people out of the team?
- It might be harder to do in scenarios where each team is very different, e.g. in terms of technologies being tested.
- There are side-effects on the team too.
- There can't be a particular standard period of time after which a switch is made - the team, person, project etc must be taken into account too.
- Can you rotate junior testers around teams to gain breadth of experience?
What piece of testing wisdom would you give to a new tester?
- Be aware of communities of practice. Lots of people have been doing this for years.
- ... for over 50 years, in fact, and a lot of what the early testers were doing is still relevant today.
- There is value in not knowing - because you can ask questions no-one else is asking.
- Always trust your instinct and gut when you're trying to explore a new feature or an area.
- Learn to deal with complexity, uncertainty and ambiguity. You need to be able to operate in spite of them.
- Learn about people. You will be working with them.
- ... and don't forget that you are a person too.
- Use the knowledge of the experienced testers around you. Ask questions. Ask again.
- Make a list of what could be tested, and how much each item matters to relevant stakeholders.
- Pick skills and practise them.
Where you look from changes what you see.
- I was testing a server (using an unfamiliar technology) from a client machine and got a result I wasn't sure was reasonable.
- ... after a while I switched to another client and got a different result.
- Would a deeper technical understanding have helped?
- Probably. In analogous cases where I have expertise I can more easily think about what factors are likely to be important and what kinds of scenarios I might consider.
- Try to question everything that you see: am I sure? How could I disprove this?
- Ask what assumptions are being made.
- What you look at changes what you see: we had an issue which wasn't repeatable with what looked like a relevant export from the database, only with the whole database.
- Part of the skill of testing is finding comparison points.
- Can you take an expert's perspective, e.g. by co-opting an expert?
Using mindmaps well for large groups of test cases.
- With such a large mindmap I can't see the whole thing at once.
- Do you want to see the whole thing at once?
- I want to organise mindmaps so that I can expand sub-trees independently because they aren't overly related.
- Is wanting to see everything a smell? Perhaps that the structure isn't right?
- Perhaps it's revealing an unwarranted degree of complexity in the product.
- Or in your thinking.
- A mindmap is your mindmap. It should exist to support you.
- What are you trying to visualise?
- Could you make it bigger?
- Who is the audience?
- I don't like to use a mindmap to keep track of project progress (e.g. with status).
- I like a mindmap to get thoughts down.
- I use a mindmap to keep track of software dependencies.
Good Products Bad Products by James L. Adams seeks, according to its cover, to describe "essential elements to achieving superior quality." Sounds good! As I said in my first (and failed) attempt to blog about this book, I'm interested in quality. But in the introduction (p. 2) Adams is cautious about what he means by it:
Quality is a slippery, complex, and sometimes abstract concept ... Philosophers have spent a great deal of time dealing with the concept of quality. This is not a book on semantics or philosophy, so for our purposes we will simply assume that quality means "good." But, of course, that leaves us with "good for whom?" "good for what?" "good when?" "good where?" and if you really like to pick nits, "what do you mean by good?" I won't go there, either.

My bias is towards being interested in the semantics and so I'd have liked not to have seen a concept apparently so fundamental to the book being essentially dismissed in the space of a paragraph on the second page of the introduction. Which isn't to say that quality is not referred to frequently in the book itself, nor that Adams has nothing to say about quality. He does, for example when thinking about why improving the quality of a manufacturing process is often considered a more tractable problem than improving the quality of the product being manufactured (p. 25):
characteristics of good products, such as elegance, and the emotions involved with outstanding products, namely love, are not easily described by [words, maths, experiment and quantification] - you can't put a number on elegance or love.

Further, he's clearly thought long and hard about the topic, and I'd be surprised if he hasn't wrestled at length with definitions of quality - having spent no little time exploring my own definition of testing, I have sympathy for anyone trying to define anything they know and care about - before deciding to pursue this line. What's reassuring to see is that Adams is clear that whatever quality or goodness of a product is, it's relative to people, task, time and place.
He references David Garvin's Competing on the Eight Dimensions of Quality, which I don't recall coming across before, and which includes two dimensions that I found particularly interesting: serviceability (the extent to which you can fix a product when it breaks, and the timeliness with which that takes place) and perceived quality (which is to do with branding, reputation, context and so on).
I was reading recently about how experiments in the experience of eating show that, amongst many other factors, heavier cutlery - which we might naturally perceive to be better quality - enhances the perception of the taste of the food:
... we hypothesized that cutlery of better quality could have an influence on the perceived quality of the food consumed with it. Understanding the factors that determine the influence of the cutlery could be of great interest to designers, chefs, and the general public alike.

Adams also provides a set of human factors that he deems important in relation to quality: physical fit, sensory fit, cognitive fit, safety and health, and complexity. He correctly, in my opinion, notes that complexity is a factor that influences the others, and deems it worthy of separation.
A particularly novel aspect for me is that he talks of it in part as a consideration that has influence across products. For example, while any given car might be sufficiently uncomplex to operate, the differences in details between cars can make using an unfamiliar one a disconcerting experience (p.91): "I ... am tired of starting the windshield wipers instead of the turn signal." He admits a tension between desiring standardisation in products and wanting designers to be free to be creative. (And this is the nub of Don Norman's book, The Design of Everyday Things, that I wrote about recently.)
It's not a surprise to me that factors external to the product itself - such as familiarity and branding - govern its perceived quality, but it's interesting to see those extrinsic factors considered as a dimension of intrinsic quality. I wondered whether Weinberg's classic definition of quality has something to say about this. According to Weinberg (see for example Agile and the Definition of Quality):
Quality is value to some person.
And value is a measure of the amount that the person would pay for the product. Imagine I'm eating a meal at a restaurant: if my enjoyment of the food is enhanced by heavier cutlery, but the cost to me remains the same as with lighter cutlery, then in some real sense the value of the food to me is higher and so I can consider the food to be of higher quality. The context can affect the product.
Alternatively, perhaps in that experiment, what I'm buying is the whole dining experience, and not the plate of food. In which case, the experiential factors are not contextual at all but fundamental parts of the product. (And, in fact, note that I can consider quality of aspects of that whole differently.)
Weinberg's definition exists in a space where, as he puts it,
the definition of "quality" is always political and emotional, because it always involves a series of decisions about whose opinions count, and how much they count relative to one another. Of course, much of the time these political/emotional decisions – like all important political/emotional decisions – are hidden from public view.

Political, yes, and also personal. Adams writes (p. 43):
Thanks to computers and research it seems to me that we have gotten better at purely technical problem solving but not necessarily at how to make products that increase the quality of people's lives - a situation that has attracted more and more of my interest.

And so there's another dimension to consider: even a low quality item (by some measure, such as how well it is built) can improve a person's quality of life. I buy some things from the pound shop, knowing that they won't last, knowing that there are better quality versions of those items, because the trade-off for me, for now, between cost and benefit is the right one.
Bad product: good product, I might say.
I recently became interested in turning bad ideas into good ones after listening to Reflection as a Service. At around that time I was flicking through the references in Weinberg on Writing - I forget what for - when I spotted a note about Conceptual Blockbusting by James L. Adams:
A classic work on problem solving that identifies some of the major blocks - intellectual, emotional, social, and cultural - that interfere with ideation and design.

I went looking for that book and found Adams' web site and a blog post where he was talking about another of his books, Good Products Bad Products:
For many (60?) years I have been interested in what makes some products of industry "good", and others "bad". I have been involved in designing them, making them, selling them, buying them, and using them. I guess I wanted to say some things about product quality that I think do not receive as much attention as they should by people who make them and buy them.

I hadn't long finished The Design of Everyday Things by Don Norman but didn't recall much discussion of quality in it. I checked my notes, and the posts (1, 2) I wrote about the book, and found that none of them mention quality either.
I'm interested in quality, generally. And my company builds products. And Adams is saying that he has a perspective that is underappreciated. And he comes recommended by an author I respect. And so I ordered the book.
Shortly after I'd started reading it I was asked to review a book by Rich Rogers. Some of the material in Good Products Bad Products was relevant to it: some overlapping concepts, some agreement, and some differences. I don't think it played a major part in the *ahem* quality of my review, but I can say that I was able to offer different, I hope useful, feedback because of what I'd read elsewhere, but only been exposed to by a series of coincidences and choices.
I continue to be fascinated by chains of connections like these. But I'm also fascinated by the idea that there are many more connections that I could have made but never did, and also that by chasing the connections that I chose to, I never got some information that would allow me to make yet further connections. As I write this sentence, other ideas are spilling out. In fact, I stopped writing that sentence in order to note them down at the bottom of the document I'm working in.
In Weinberg on Writing there's a lot of talk about the collection and curation of fieldstones, Weinberg's term for the ideas that seed pieces of writing. Sometimes, for me, that process is like crawling blind through a swamp - the paucity of solid rock and the difficulty of finding it and holding on to it seems insurmountable. But sometimes it's more like a brick factory running at full tilt on little more than air. A wisp of raw materials is fed in and pallets full of blocks pour out of the other end.
Here are a couple of the thoughts I noted down a minute ago, expanded:
Making connections repeatedly reinforces those connections. And there's a risk of thinking becoming insular because of that. How can I give myself a sporting chance of making new connections to unfamiliar material? Deliberately, consciously seeking out and choosing unfamiliar material is one way. This week I went to a talk, Why Easter is good news for scientists, at the invitation of one of my colleagues. I am an atheist, but I enjoy listening to people who know their stuff and who have a passion for it, having my views challenged and being exposed to an alternative perspective.
It's also a chance to practise my critical thinking. To give one example: the speaker made an argument that involved background knowledge that I don't have and can't contest: that there are Roman records of a man called Jesus, alive at the right kind of time, and crucified by Pontius Pilate. But, interestingly, I can form a view of the strength of his case by the fact that he didn't cite Roman records of the resurrection itself. Michael Shermer makes a similar point in How Might a Scientist Think about the Resurrection?
Without this talk, at this time, I would not have had these thoughts, not have searched online and come across Shermer (who I was unfamiliar with but now interested in), and not have thought about the idea that absence of cited evidence can be evidence of absence of evidence to cite (to complicate a common refrain).
I am interested in the opportunity cost of pursuing one line of interest vs another. In the hour that I spent at the talk (my dinner hour, as it happens) I could have been doing something else (I'd usually be walking round the Science Park listening to a podcast) and would perhaps have found other interesting connections from that.
Another concept often associated with cost is benefit. Any connections I make now might have immediate benefit, later benefit or no benefit. Similarly, any information I consume now might facilitate immediate connections, later connections or no connections ever.
Which connects all of this back to the beauty and the pain of our line of work. In a quest to provide evidence about the "goodness" or "badness" of some product (whatever that means, and with apologies to James Adams it'll have to be another blog post now) we follow certain lines of enquiry and so naturally don't follow others.
It's my instinct and experience that exposing myself to challenge, reading widely, and not standing still helps me when choosing lines of enquiry and when choosing to quit lines of enquiry. But I may just not have sufficient evidence to the contrary. Yet ...
Edit: I wrote a review of Good Products Bad Products later.
Last month's Cambridge Tester meetup was puzzling. And one of the puzzles was an empty wordsearch that I'd made for my youngest daughter's "Crafternoon" fundraiser. At Crafternoon, Emma set up eight different activities at our house and invited some of her school friends to come and do them, with the entrance fee being a donation to charity.
The idea of the wordsearch activity is simple: take the blank wordsearch grid and make a puzzle from it using the list of words provided. Then give it to someone as a present.
If you fancy a go, download it here: Animal Alphabet Wordsearch (PDF)
(You're free to use it for your own workshops, meetups, team exercises or whatever. We hope you have fun and, if you do, please let us know about it and consider donating to an animal charity. Emma supports Wood Green.)
After Crafternoon, I offered the puzzle to Karo for the Cambridge Tester meetup and she wrote about it in Testing Puzzles: Questions, Assumptions, Strategy. It's fun to read about how the testers addressed the task. It's also fun to compare it to what the children did. Broadly, I think that the kids were less concerned by a sense of expectation about the outcome - and that's not a remotely original observation, I appreciate.
Everyone who took part had some "knowledge in the head" about the task (conventions from their own experiences) and there is some "knowledge in the world" about it too, such as whatever instructions have been given and the guidelines for the person who is gifted the completed wordsearch.
Some of the testers gently played with convention by, for example:
- filling in all blank cells with the letter A
- using symbols outside of the Roman alphabet
- mixing upper and lower case in the grid
But the kids in general went further by:
- writing more than one letter in a cell
- writing letters outside of cells
- writing words around corners
- leaving some cells blank
- crossing out the words from the list if they couldn't fit them in the grid
- spelling something wrong to make it fit
In our jobs we're often thinking about how a product could be used in ways that it wasn't intended. It's an education watching children trample all over a task like this, deriving their own enjoyment from it, unselfconsciously making it into whatever works for them at that moment, constrained much more by the practical restrictions (pen, paper, the location of Crafternoon, ...) than any theoretical ideas or social norms.
While I was thinking about this - washing up last night, as it happens - I was listening to Russell Brand on The Comedian's Comedian podcast. He's a thoughtful chap, worth hearing, and he came out with this beautiful quote:
Only things that there are words for are being said. A challenge ... is to make up different words if you want to say different and unusual things.

And that's fitting in a blog post about finding words, but it generalises: the children were willing and able to invent a lexicon of actions that was permitted by the context they found themselves in. As a tester, are you?
Months ago, Rich Rogers asked on Twitter for volunteers to review the book that he's writing, and I put my hand up. This week a draft arrived, and I put my hands up.
After the shock of having to follow through on my offer had worn off, I pondered how to go about the task. I've reviewed loads of books, but always with the luxury of pulling out just the things that interest me. I can't recall giving detailed feedback on something book-length before and I wanted to do a thorough job.
I liked the idea of an approach that combined reaction and reflection. Reaction is an "inline" activity. I can read the book and provide my System 1 responses as I go. Because they are instinctive I don't need to interrupt my flow significantly to capture them. My reactions can then be data for my reflection, where my System 2 processes try to make sense of the whole.
That seemed reasonable. But I'm a tester so I framed it as a mission:
Explore the manuscript using both reaction and reflection to provide Rich with a review that's more than skin deep.

The reactions were easy to deliver as comments in the online editor that Rich is using. I applied a few ground rules: no reading ahead, no reading of any passage more than three times, no reflection. I broke only the last rule, and only once: there's a pivotal chapter in the book that didn't hang together well for me, even after repeated reading, and I took a few moments to give an overview when I'd finished it.
Once I'd got to the end of the final section, I put the book away and did nothing on it for a day or so. Reflections that had been forming while I read began to solidify and I started looking for a way to organise them. As usual, I wrote, and in writing I saw a pattern. I had three classes of notes:
- feelings: I did not attempt to justify
- observations: I had tried to justify
- suggestions: I could both justify and explain
In any case, I delivered my reflections to Rich. He didn't agree with everything, but seemed happy enough on the whole:
Very grateful to @qahiccupps for reviewing my book and for making some excellent suggestions. Some work for me to do yet @PublishHeddon— Rich Rogers (@richrtesting) April 6, 2017
And now I get something for myself: reflecting on what I did.
So you’re thinking you might like to move into software testing? Perhaps you’re already in software and fancy a change. Perhaps you’re working in another industry and fancy a change. Perhaps you’re fresh out of college and just fancy finding a job … that you can later change.
No doubt you’ve spent some time Google-wrangling and found those numerous lists of things that software testers need to be able to do, or skills that great software testers always display, or attributes that employers think that testers must have.
Things like this:
- You shouldn’t be a tester if you don’t have attention to detail
- You shouldn’t be a tester if you don’t have great communication skills
- You shouldn’t be a tester if you’re not patient
- You shouldn’t be a tester if you’re not willing to learn
- You shouldn’t be a tester if you don’t have prioritization skills
- You shouldn’t be a tester if you don’t have a technical background
- You shouldn’t be a tester if you can’t code
- You shouldn’t be a tester if you’re not a good listener
- You shouldn’t be a tester if you can’t work in a team
- You shouldn’t be a tester if you don’t like to break things
- You shouldn’t be a tester if you don’t love a puzzle
- You shouldn’t be a tester if you don’t think like a customer
- You shouldn’t be a tester if you’re not passionate
Unfortunately, as far as I’m concerned, those kinds of lists are mostly cobblers.
For me, maybe you shouldn’t be a tester if you weren’t thinking, as you went down that list, of scenarios in which those statements could be false, of situations where a tester like that might be actively detrimental.
I’d wonder whether you were tester material if you hadn’t observed that many of those attributes apply generically to jobs in software development, and many of them apply to jobs where thinking is required, and many of them apply to, well, jobs.
As you found those sorts of lists on the web, I’d hope you had sceptical thoughts about the motivations of people who write them. If not, I’d be wary about your aptitude for testing.
You may not suit a testing role, I’d say, if you are not right now wondering why I am writing this.
I’d say your capacity to discover ways in which software might not suit its purpose is probably limited if you don’t think it’s possible to find a headcount job in testing which needs none of the things on the list above.
A belief that you should conform to a list of context-free statements about what a tester must be would concern me. I'd ask whether you really have testerly tendencies if you prefer that idea to a pragmatic attitude, to doing the best thing you can think of, for the task in hand, under the constraints that exist at that point.
You’ll have noticed, I hope, that the kind of anti-list list I’ve built up here is carefully qualified. These are my ideas of what someone who could make a good tester might do, the kinds of thoughts that I'd value, an approach that I like to see.
There’s plenty more. Here's one: for me, you shouldn’t be a tester if you can’t think critically about any piece of writing that purports to tell you something about the way the world was, is, or could be.
And that includes this one.
With thanks to @massimo and Sneha Bhat for comments and suggestions.
So I wasn't intending to blog again about The Design of Everyday Things by Don Norman but last night I was reading the final few pages and got to a section titled Easy Looking is Not Necessarily Easy to Use. From that:

How many controls does a device need? The fewer the controls the easier it looks to use and the easier it is to find the relevant controls. As the number of controls increases, specific controls can be tailored for specific functions. The device may look more and more complex but will be easier to use. We studied this relationship in our laboratory ... We found that to make something easy to use, match the number of controls to the number of functions and organize the panels according to function. To make something look like it is easy, minimize the number of controls.

How can these conflicting requirements be met simultaneously? Hide the controls not being used at the moment. By using a panel on which only the relevant controls are visible, you minimize the appearance of complexity. By having a separate control for each function, you minimize complexity of use. It is possible to eat your cake and have it, too.

Whether with cake in hand, mouth, or both, I would note that easy saying is not necessarily easy doing. There's still a considerable amount of art in making that heuristic work for any specific situation.
One aspect of that art is deciding what functions it makes sense to expose at all. Fewer functions means fewer controls and less apparent complexity. Catherine Powell's Customer-Driven Knob was revelatory for me on this:
Someone said, "Let's just let the customer set this. We can make it a knob." Okay, yes, we could do that. But how on earth is the customer going to know what value to choose?

As in my first post about The Design of Everyday Things, I find myself drawn to comparisons with The Shape of Actions. In this case, it's the concept of RAT, or Repair, Attribution and all That, the tendency of users to adapt themselves to accommodate the flaws in their technology.
When I wrote about it in The RAT Trap I didn't use the word design once, although I was clearly thinking about it:
A takeaway for me is that software which can exploit the human tendency to repair and accommodate and all that - which aligns its behaviour with that of its users - gives itself a chance to feel more usable and more valuable more quickly.

Sometimes I feel like I'm going round in circles with my learning. But so long as I pick up something interesting - a connection, a reinforcement, a new piece of information, an idea - frequently enough I'm happy to invest the time.
I'm reading The Design of Everyday Things by Donald Norman on the recommendation of the Dev manager, and borrowed from our UX specialist. (I have great team mates.)
There's much to like in this book, including
- a taxonomy of error types: at the top level this distinguishes slips from mistakes. Slips are unconscious and generally due to dedicating insufficient attention to a task that is well-known and practised. Mistakes are conscious and reflect factors such as bad decision-making, bias, or disregard of evidence.
- discussion of affordances: an affordance is the possibility of an action that something provides, and that is perceived by the user of that thing. An affordance of a chair is that you can stand on it. The chair affords (in some sense is for) supporting, and standing on it utilises that support.
- focus on mappings: the idea that the layout and appearance of the functional elements significantly impacts on how a user relates them to their outcome. For example, light switch panels that mimic the layout of lights in a room are easier to use.
- consideration of the various actors: the role of the designer is to satisfy their client; the client may or may not be the user; the designer may view themselves as a proxy user; the designer is almost never a proxy user; the users are users; there is rarely a single user (type) to be considered.
But the two things I've found particularly striking are the parallels with Harry Collins' thoughts in a couple of areas:
- tacit and explicit knowledge: or knowledge in the head and knowledge in the world, as Norman has it. When you are new to some task, some object, you have only knowledge that is available in the world about it: those things that you can see or otherwise sense. It is on the designer to consider how the affordances suggested by an object affect its usability. This might mean - for example - following convention, e.g. the push side of doors shouldn't have handles and the plate to push on should be at a point where pushing is efficient.
- action hierarchies: actions can be viewed at various granularities. In Norman's model they have seven stages and he gives an example of several academics trying to thread an unfamiliar projector. In The Shape of Actions, Collins talks about an experiment attempting to operate a laboratory air pump. Both authors deconstruct the high-level task (operate the apparatus) into sub-tasks, some of which are familiar to some extent - perhaps by analogy, or by theoretical knowledge, or by having seen someone else doing it - and some of which are completely unfamiliar and require explicit experience of that specific task on that specific object.
I love finding connections like this, even if I don't know quite what they can afford me, just yet.
The Fieldstone Method is Jerry Weinberg's way of gathering material to write about, using that material effectively, and using the time spent working the material efficiently. Although I've read much of Weinberg's work I'd never got round to Weinberg on Writing until last month, and then only after several prompts from one of my colleagues.
In the book, Weinberg describes his process in terms of an extended analogy between writing and building dry stone walls which - to do it no justice at all - goes something like this:
- Do not wait until you start writing to start thinking about writing.
- Gather your stones (interesting thoughts, suggestions, stories, pictures, quotes, connections, ideas) as you come across them.
- Always have multiple projects on the go at once.
- Maintain a pile of stones (a list of your gathered ideas) that you think will suit each project.
- As you gather a stone, drop it onto the most suitable pile.
- Also maintain a pile for stones you find attractive but have no project for at the moment.
- When you come to write on a project, cast your eyes over the stones you have selected for it.
- Be inspired by the stones, by their variety and their similarities.
- Handle the stones, play with them, organise them, reorganise them.
- Really feel the stones.
- Use stones (and in a second metaphor they are also periods of time) opportunistically.
- When you get stuck on one part of a project move to another part.
- When you get stuck on one project move to another project.
The approach felt extremely familiar to me. Here's the start of an email I sent just over a year ago, spawned out of a Twitter conversation about organising work:
I like to have text files around [for each topic] so that as soon as I have a thought I can drop it into the file and get it out of my head. When I have time to work on whatever the thing is, I have the collected material in one place. Often I find that getting material together is a hard part of writing, so having a bunch of stuff that I can play with, re-order etc helps to spur the writing process.

For my blogging I have a ton of open text files:
You can see this one, Fieldstoning_notes.txt and, to the right of it, another called notes.txt which is collected thoughts about how I take notes (duh!) that came out of a recent workshop on note-taking (DUH!) at our local meetup.
I've got enough in that file now to write about it next, but first here's a few of the stones I took from Weinberg on Writing itself:
Never attempt to write what you don’t care about.
Real professional writers seldom write one thing at a time.
The broader the audience, the more difficult the writer’s job.
Most often [people] stop writing because they do not understand the essential randomness involved in the creative process.
... it’s not the number of ideas that blocks you, it’s your reaction to the number of ideas.
Fieldstoning is about always doing something that’s advancing your writing projects.
The key to effective writing is the human emotional response to the stone.
If I’ve been looking for snug fits while gathering, I have much less mortaring to do when I’m finishing.
Don’t get it right; get it written.
"Sloppy work" is not the opposite of "perfection." Sloppy work is the opposite of the best you can do at the time.
I was listening to Giselle Aldridge and Paul Merrill on the Reflection as a Service podcast one morning this week as I walked to work. They were talking about ideas in entrepreneurship, assessing their value, when and how and with whom they should be discussed, and how to protect them when you do open them up to others' scrutiny.
I was thinking, while listening, that as an entrepreneur you need to be able to filter the ideas in front of you, seeking to find one that has a prospect of returning sufficiently well on an investment. Sometimes, you'll have none that fit the bill and so, in some sense, they are bad ideas (for you, at that time, for the opportunity you had in mind, at least). In that situation one approach is to junk what you have and look for new ideas. But an alternative is to make a bad idea better.
I was speculating, as I was thinking, and listening, that there might be heuristics for turning those bad ideas into good ideas. So I went looking, and I found an interesting piece by Alan Dix, a lecturer at Birmingham University, titled Silly Ideas:
Thinking about bad ideas is part brainstorming, but more important about learning to think about any idea, new good ideas you have yourself, other people's existing ideas and products.

Dix suggests that deliberately (stating that you are) starting with bad ideas is itself a useful heuristic. You are naturally less attached to bad ideas; they can provoke you into trains of thought that you might not otherwise have encountered; you will have more confidence that you can improve them; they will likely generate more questions and challenge your assumptions.
He gives a set of questions for interrogating an idea, something like a SWOT analysis:
- what is good about it? in what contexts? why?
- what is bad about it? in what contexts? why?
- in what contexts is it optimal?
- how would you sell it? how would you defend it?
For me, a key aspect of this analysis is the focus on context. An idea is not necessarily unequivocally good or bad. Aspects of it might be good, or bad, or better or worse, in different scenarios, for different purposes. Dix invites you to discover which aspects might be which, in which contexts, and for which purposes. To draw another parallel, this feels akin to factoring.
Armed with data about the idea, you can now look to change it in ways that keep the good and lose the bad, and maybe change the context or manner in which it's used. Or throw it away completely and use what you've learned about the domain to make a fresh start with a new idea.
The new idea I like best here is that of starting from a point that you assert is bad. I've encountered similar suggestions before: that functional fixedness can be reduced by starting a familiar process from an unfamiliar situation, that in brainstorming you shouldn't reject ideas as you come up with them, and that of not evaluating until you have options in the rule of three.
I enjoy ideas simply for the sake of having them. I am fascinated by the way in which ideas spawn ideas and by the way that connections are made between them. I celebrate the fact that multiple perspectives on the same idea can differ enormously. I particularly like exploring the ambiguity that can result from those perspectives at work, where the task is often to tease out and then squeeze out ambiguity, or for fun, making up corny puns. And corny puns are never a bad idea.
Image: ITV News
One of the many things I've learned over the years is that (for me) getting an idea out - on paper, on screen, on a whiteboard, into the air; in words, or pictures, or verbally, ... - is a strong heuristic for making it testable, for helping me to understand it, and for provoking new ideas.
Once out, and once I've forced myself to represent the idea in prose or some other kind of model, I usually find that I've teased out detail in some areas that were previously woolly. I can begin to challenge the idea, to see patterns and gaps between it and the other ideas, to search the space around it and see further ideas, perhaps better ideas.
Once out, I feel like I have freed up some brain room for more thoughts. I don't have to maintain the cloud of things that the idea was when it was only in mind and I was repeatedly running over it to keep it alive, to remember it.
Once out, once I've nailed it down that first time, I have a better idea of how to explain it to someone else. So I can choose to share the idea and get the benefits of others' challenges to it.
Don't get me wrong, I do a lot of thinking in my head. But pulling an idea out, even to somewhere only visible to me, is a commitment to the idea of the idea - which doesn't mean that I think it's a good idea; just that it's worth exploring.
There's a tester position open at Linguamatics just now and, as I've said before on here, this usually means a period of reflection for me.
On this occasion the opening was created by someone leaving - I'm pleased to say that it was on good terms, for a really exciting opportunity, a chance to really make a difference at the new place - and so, although I wasn't looking for change, it has arrived. Again.
Change. There was a time when, for me, change was also challenge. Given the choice of change or not, I would tend to prefer not. These days I like to think I'm more pragmatic. Change comes with potential costs and benefits. The skill is in taking on those changes that return the right benefits at the right costs. When change is not a choice the skill is still in trading benefits and costs, but now of the ways you can think of to implement the change.
Change. My team has changed a lot in the last twelve months or so. We grew rapidly and also changed our structure. You may have noticed that I've written a lot about management in the last few months. This is not unrelated to the changes. In search of potential ways to implement change, and ways to assess the benefits and costs of change, and also the risks associated with changing, I read. And I wrote, as I'm writing now.
Change. Linguamatics, the company I co-founded, is also changing. In fact, now I look back I don't think it's ever stopped changing. In 15 years we've gone from the four of us in one tiny room to 100 of us in a couple of office suites and a handful of other locations on either side of the Atlantic. We're encountering some of the difficulties and some of the beauties of expansion, and we're exploring ways to deal with and embrace them.
If you're a tester and able to be responsive to change, if you can be an agent of change for the better, and if you fancy a change of scenery, perhaps you'd like to consider coming to work with us?