
Hiccupps - James Thomas

Before Testing

Mon, 02/20/2017 - 07:25

I happened across Why testers? by Joel Spolsky at the weekend. Written back in 2010, and - if we're being sceptical - perhaps a kind of honeytrap for Fog Creek's tester recruitment process, it has some memorable lines, including:
what testers are supposed to do ... is evaluate new code, find the good things, find the bad things, and give positive and negative reinforcement to the developers.

Otherwise it’s depressing to be a programmer. Here I am, typing away, writing all this awesome code, and nobody cares.

you really need very smart people as testers, even if they don’t have relevant experience. Many of the best testers I’ve worked with didn’t even realize they wanted to be testers until someone offered them the job.

The job advert that the post points at is still there and reinforces the focus on testing as a service to developers and the sentiments about feedback, although it looks like, these days, they do require test experience.

It's common to hear testers say that they "fell into testing" and I've offered jobs to, and actually managed to recruit from, non-tester roles. On the back of reading Spolsky's blog I tweeted this:
#Testers, one tweet please. What did you do before testing? What's the most significant difference (in any respect) between that and now? — James Thomas (@qahiccupps) February 18, 2017

And, while it's a biased and also self-selected sample (to those who happen to be close enough to me in the Twitter network, and those who happened to see it in their timeline, and those who cared to respond) which has no statistical validity, I enjoyed reading the responses and wondering about patterns.

Please feel free to add your own story about the years BT (Before Testing) to either the thread or the comments here.
Image: https://flic.kr/p/rgXeNz

People are Strange

Tue, 02/14/2017 - 19:01

Managers. They're the light in the fridge: when the door is open their value can be seen. But when the door is closed ... well, who knows?

Johanna Rothman and Esther Derby reckon they have a good idea. And they aim to show, in the form of an extended story following one manager as he takes over an existing team with problems, the kinds of things that managers can do and do do and - if they're after a decent default starting point - should consider doing.

What their book, Behind Closed Doors, isn't - and doesn't claim to be - is the answer to every management problem. The cast of characters in the story represent some of the kinds of personalities you'll find yourself dealing with as a manager, but the depth of the scenarios covered is limited, the set of outcomes covered is generally positive, and the timescales covered are reasonably short.

Michael Lopp, in Managing Humans, implores managers to remember that their staff are chaotic beautiful snowflakes. Unique. Individual. Special. Jim Morrison just says, simply, brusquely, that people are strange. (And don't forget that managers are people, despite evidence to the contrary.)

Either way, it's on the manager to care to look and listen carefully and find ways to help those they manage to be the best that they can be in ways that suit them. Management books necessarily use archetypes as a practical way to give suggestions and share experiences, but those new to management especially should be wary of misinterpreting the stories as a how-to guide to be naively applied without consideration of the context.

What Behind Closed Doors also isn't, unlike so much writing on management, is dry, or full of heroistic aphorisms, or preachy. In fact, I found it an extremely easy read for several reasons: it's well-written; it's short; the story format helps the reader along; following a consistent story gives context to situations as the book progresses; sidebars and an appendix keep detail aside for later consumption; I'm familiar with work by both of these authors already; I'm a fan of Jerry Weinberg's writing on management and interpersonal relationships and this book owes much to his insights (he wrote the foreword here); I agree with much of the advice.

What I found myself wanting - and I'd buy Rothman and Derby's version of this like a shot - is more detailed versions of some of the dialogues in this book with commentary in the form of the internal monologues of the participants. I'd like to hear Sam, the manager, thinking through the options he has when trying to help Kevin to learn to delegate and understand how he chose the approach that he took. I'd like to hear Kevin trying to work out what he thinks Sam's motives are and perhaps rejecting some of Sam's premises. I'd also like to see a deeper focus on a specific relationship over an extended period of time, with failures, and techniques for rebuilding trust in the face of them.

But while I wait for that, here's a few quotes that I enjoyed, loosely grouped.

On the contexts in which management takes place:
Generally speaking, you can observe only the public behaviors of managers and how your managers interact with you. Sometimes people who have never been in a management role believe that managers can simply tell other people what to do and that’s that.

The higher you are in the organization, the more other people magnify your reactions. Because managers amplify the work of others, the human costs of bad management can be even higher than the economic costs.

Chaos hides problems—both with people and projects. When chaos recedes, problems emerge. The moral of this fable is: Focus on the funded work.

On making a technical contribution as a manager:
Some first-level managers still do some technical work, but they cannot assign themselves to the critical path.

It’s easier to know when technical work is complete than to know when management work is complete.

The more people you have in your group, the harder it is to make a technical contribution.

The payoff for delegation isn’t always immediate.

It takes courage to delegate.

On coaching:
You always have the option not to coach. You can choose to give your team member feedback (information about the past), without providing advice on options for future behavior.

Coaching doesn’t mean you rush in to solve the problem. Coaching helps the other person see more options and choose from them.

Coaching helps another person develop new capability with support.

And it goes without saying, but if you offer help, you need to follow through and provide the help requested, or people will be disinclined to ask again.

Helping someone think through the implications is the meat of coaching.

On team-building:
Jelled teams don’t happen by accident; teams jell when someone pays attention to building trust and commitment

Over time they build trust by exchanging and honoring commitments to each other.

Evaluations are different from feedback.

A one-on-one meeting is a great place to give appreciations.

[people] care whether the sincere appreciation is public or private ... It’s always appropriate to give appreciation for their contribution in a private meeting.

Each person on your team is unique. Some will need feedback on personal behaviors. Some will need help defining career development goals. Some will need coaching on how to influence across the organization.

Make sure the career development plans are integrated into the person’s day-to-day work. Otherwise, career development won’t happen.

"Career development" that happens only once a year is a sham.On problem solving:
Our rule of thumb is to generate at least three reasonable options for solving any problem.

Even if you do choose the first option, you’ll understand the issue better after considering several options.

If you’re in a position to know a problem exists, consider this guideline for problem solving: the people who perform the work need to be part of the solution.

We often assume that deadlines are immutable, that a process is unchangeable, or that we have to solve something alone. Use thought experiments to remove artificial constraints,

It’s tempting to stop with the first reasonable option that pops into your head. But with any messy problem, generating multiple options leads to a richer understanding of the problem and potential solutions

Before you jump to solutions, collect some data. Data collection doesn’t have to be formal. Look for quantitative and qualitative data.

If you hear yourself saying, “We’ll just do blah, blah, blah,” Stop! “Just” is a keyword that lets you know it just won’t work.

When the root cause points to the original issue, it’s likely a system problem.

On managing:
Some people think management is all about the people, and some people think management is all about the tasks. But great management is about leading and developing people and managing tasks.

When managers are self-aware, they can respond to events rather than react in emotional outbursts.

And consider how your language affects your perspective and your ability to do your job.

Spending time with people is management work.

Part of being good at [Managing By Walking Around and Listening] is cultivating a curious mind, always observing, and questioning the meaning of what you see.

Great managers actively learn the craft of management.

Image: http://www.45cat.com/record/j45762

The Bug in Lessons Learned

Fri, 02/10/2017 - 21:52

The Test team book club read Lessons Learned in Software Testing the other week. I couldn't find my copy at the time but Karo came across it today, on Rog's desk, and was delighted to tell me that she'd discovered a bug in it...

Y2K

Sun, 02/05/2017 - 06:36

What Really Happened in Y2K? That's the question Professor Martyn Thomas is asking in a forthcoming lecture and in a recent Chips With Everything podcast, from which I picked a few quotes that I particularly enjoyed.

On why choosing to use two digits for years was arguably a reasonable choice, in its time and context:
The problem arose originally because when most of the systems were being programmed before the 1990s computer power was extremely expensive and storage was extremely expensive. It's quite hard to recall that back in 1960 and 1970 a computer would occupy a room the size of a football pitch and be run 24 hours a day and still only support a single organisation.

It was because those things were so expensive, because processing was expensive and in particular because storage was so expensive that full dates weren't stored. Only the year digits were stored in the data.
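The mechanics of that choice, and of the later fixes, are easy to sketch. Below is a minimal, hypothetical Python illustration of the ambiguity two-digit years create and of the "windowing" style of remediation many projects used; the pivot value of 70 is my own assumption for the example, not something from the podcast.

```python
# Hypothetical sketch of the two-digit year problem. The pivot value
# is an assumed example, not anything from the lecture or podcast.

def expand_year(two_digit_year: int, pivot: int = 70) -> int:
    """Guess a four-digit year from a stored two-digit one.

    Years at or above the pivot are read as 19xx, years below it as
    20xx -- the 'windowing' fix used in many Y2K remediations.
    """
    century = 1900 if two_digit_year >= pivot else 2000
    return century + two_digit_year

print(expand_year(99))  # 1999
print(expand_year(0))   # 2000, so '00' no longer computes as 1900
print(expand_year(65))  # 2065 -- wrong if the record meant 1965
```

Windowing only moves the ambiguity, of course: any pivot misreads some genuine years, which is part of why remediation was sustained work rather than a one-line patch.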
On the lack of appreciation that, despite the eventual understated outcome, Y2K exposed major issues:

I regard it as a signal event. One of these near-misses that it's very important that you learn from, and I don't think we've learned from it yet. I don't think we've taken the right lessons out of the year 2000 problem. And all the people who say it was all a myth prevent those lessons being learned.

On what bothers him today:
I'm [worried about] cyber security. I think that is a threat that's not yet being addressed strategically. We have to fix it at the root, which is by making the software far less vulnerable to cyber attack ... Driverless cars scare the hell out of me, viewed through the lens of cyber security.

We seem to feel that the right solution to the cyber security problem is to train as many people as we can to really understand how to look for cyber security vulnerabilities and then just send them out into companies ... without recognising that all we're doing is training a bunch of people to find all the loopholes in the systems and then encourage companies to let them in and discover all their secrets.

Similarly, training lots of school students to write bad software, which is essentially what we're doing by encouraging app development in schools, is just increasing the mountain of bad software in the world, which is a problem. It's not the solution.

On building software:
People don't approach building software with the same degree of rigour that engineers approach building other artefacts that are equally important. The consequence of that is that most software contains a lot of errors. And most software is not managed very well.

One of the big problems in the run-up to Y2K was that most major companies could not find the source code for their big systems, for their key business systems. And could not therefore recreate the software even in the form that it was currently running on their computers. The lack of professionalism around managing software development and software was revealed by Y2K ... but we still build software on the assumption that you can test it to show that it's fit for purpose.

On the frequency of errors in software:
A typical programmer makes a mistake in, if they're good, every 30 lines of program. If they're very, very good they make a mistake in every 100 lines. If they're typical it's in about 10 lines of code. And you don't find all of those by testing.
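To make those rates concrete, here's some back-of-envelope arithmetic in Python; the 500,000-line codebase is an invented figure for illustration only.

```python
# Back-of-envelope arithmetic for the quoted error rates.
# The codebase size is a made-up example figure.

lines_of_code = 500_000
rates = {"typical": 10, "good": 30, "very good": 100}  # 1 mistake per N lines

for label, lines_per_mistake in rates.items():
    mistakes = lines_of_code // lines_per_mistake
    print(f"{label:>9}: ~{mistakes:,} mistakes in {lines_of_code:,} lines")

# typical: ~50,000; good: ~16,666; very good: ~5,000 -- and, per the
# quote, testing won't find all of them.
```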
On his prescription:

The people who make the money out of selling us computer systems don't carry the cost of those systems failing. We could fix that. We could say that in a decade's time - to give the industry a chance to shape up - we were going to introduce strict liability in the way that we have strict liability in the safety of children's toys for example.

Image: https://flic.kr/p/7wbBSu

You Rang!

Fri, 02/03/2017 - 00:01

So, last year I blogged about an approach I take to managing uncertainty: Put a Ring on It.

The post was inspired by a conversation I'd had with several colleagues in a short space of time, where I'd described my mental model of a band I put around all the bits of the problem I can't deal with now, leaving behind the bits that are tractable.

After doing that, I can proceed, right now, on whatever is left. I've encircled the uncertainty with a dependency on some outside factor, and I don't need to think about the parts inside it until the dependency is resolved. (Or the context changes.)

And this week I was treated to a beautifully simple implementation of it, from one of those colleagues. In a situation in which many things might need doing - but the number and nature is unknown - she controlled the uncertainty with a to-do list and a micro-algorithm (see the sketch after this list):
  • do the thing now, completely, only if it's easy and important
  • do a pragmatic piece now, if it's needed but not easy, and revisit it later (via the list) 
  • otherwise, put it on the list
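As a sketch only, here's how that micro-algorithm might look in Python. Task, its fields, and the printed actions are illustrative names of mine, not anything from my colleague's actual list.

```python
# A literal, hypothetical rendering of the triage rules above.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    easy: bool
    important: bool
    needed_now: bool

def triage(task: Task, todo_list: list) -> None:
    if task.easy and task.important:
        print(f"do completely, now: {task.name}")
    elif task.needed_now:
        print(f"do a pragmatic piece now: {task.name}")
        todo_list.append(task)  # revisit the rest later, via the list
    else:
        todo_list.append(task)  # ring it and move on

todo: list = []
triage(Task("fix login typo", easy=True, important=True, needed_now=False), todo)
triage(Task("migrate database", easy=False, important=True, needed_now=True), todo)
triage(Task("tidy wiki", easy=True, important=False, needed_now=False), todo)
print(len(todo))  # 2 tasks ringed for later
```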

Uncertainty encountered. And ringed with a list. And mental energy conserved. And progress consistently made.

Elis, Other People

Tue, 01/31/2017 - 11:40

I've written before about the links I see between joking and testing - about the setting up of assumptions, the reframing, and the violated expectations, amongst other things. I like to listen to The Comedian's Comedian podcast because it encourages comics to talk about their craft, and I sometimes find ideas that cross over, or just provoke a thought in me. Here's a few quotes that popped out of the recent Elis James episode.

On testing in the moment:
Almost everyone works better with a bit of adrenaline in them. In the same way that I could never write good stuff in the house, all of my best jokes come within 20 minutes to performing or within 20 minutes of performing ... 'cos all of my best decisions are informed by adrenaline.

On the value of multiple perspectives, experiences, skills:

I've even tried sitting in the house and bantering with myself like I'm in a pub because I hate the myth that standups are all these weird auteurs and we should do everything on our own. The thing with being bilingual is that I have a different personality in Welsh and English. My onstage persona is different.
I love collaborating ... being in a room with another comic ... that's the funnest part of comedy, bouncing off each other and developing an idea together. The difference between thinking of an idea on your own and wondering if it's funny, and then immediately asking the person next to you, who's a trusted friend whose opinion you respect, and then they say "yeah!" and say one little tweak and it sends you off down a completely different path. The king of this is Henry Packer. If you take anything to him he will give you an angle that is from such a bizarre place and suddenly it will be a great routine. On actively looking for variety, especially similar-but-different:
I will occasionally write out a routine longhand and I'll put all the words into a thesaurus. The thing with a thesaurus - it's an extraordinary tool - is that the reason that 'seldom' and 'doggerel' are funny is that you know what they mean but you'd never use them. They're not quite on the tip of your tongue, they're sort of half-way back.

Image: https://flic.kr/p/tQup4

Cambridge Lean Coffee

Thu, 01/26/2017 - 08:00

This month's Lean Coffee was hosted by Redgate. Here's some brief, aggregated comments and questions on topics covered by the group I was in.

Why 'test' rather than 'prove'?
  • The questioner had been asked by a developer why he wasn't proving that the software worked.
  • What is meant by proof? (The developer wanted to know that "it worked as expected")
  • What is meant by works?
  • What would constitute proof for the developer in this situation?
  • Do we tend to think of proof in an absolutist, mathematical sense, where axioms, assumptions, deductions and so on are deployed to make a general statement?
  • ... remember that a different set of axioms and assumptions can lead to different conclusions.
  • In this view, is proof 100% confidence?
  • There is formal research in proving correctness of software.
  • In the court system, we might have proof beyond reasonable doubt 
  • ... which seems to admit less than 100% confidence. 
  • Are we happier with that?
  • But still the question is proof of what?
  • We will prioritise testing 
  • ... and so not everything will be covered (if it even could be)
  • ... and so there can't be empirical evidence for those parts.
  • How long is enough when trying to prove that a program never crashes? (A contrived sketch of the difficulty follows this list.)
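As a contrived illustration of that last question: a function can crash on exactly one input among billions, so no feasible amount of sampled testing can establish that it never crashes. The function below is hypothetical, invented for this sketch.

```python
# Contrived example: a function that crashes on exactly one input.
# No realistic sample of test inputs is likely to hit it.

import random

def increment(x: int) -> int:
    if x == 424_242_424:  # the single bad value
        raise RuntimeError("crash")
    return x + 1

# A million random probes will almost certainly miss the bad value,
# yet "it never crashed in testing" proves nothing about all inputs.
for _ in range(1_000_000):
    increment(random.randrange(2**31))
```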

What would you be if not a tester? What skills cross over?
  • One of us started as a scientist and then went into teaching. Crossover skills: experimentalism, feedback, reading people, communicating with people, getting the best out of people.
  • One of us is a career tester who nearly went into forensic computing. Crossover skills: exploration, detail, analysis, presentation.
  • One of us was a developer and feels (he thinks) more sympathetic to developers as a result.
  • One of us is not a tester (Boo!) and is in fact a developer (Boo! Hiss! etc)
  • I am a test manager. For me, the testing mindset crosses over entirely. In interactions with people, projects, software, tools, and myself I set up hypotheses, act, observe the results, interpret, reflect.
  • ... is there an ethical question around effectively experimenting "on" others e.g. when trying some approach to coaching?
  • ... I think so, yes, but I try to be open about it - and I experiment with how open, when I say what I'm doing etc.

Pair Testing. Good or Bad?
  • The question starts from Katrina Clokie's pairing experiment.
  • The questioner's company are using pairing within the Test team and find it gives efficiency, better procedures, skill transfer.
  • It's good not to feel bad about asking for help or to work with someone.
  • It helps to create and strengthen bonds with colleagues.
  • It breaks out of the monotony of the day.
  • It forces you to explain your thinking.
  • It allows you to question things.
  • It can quickly surface issues, and shallow agreements.
  • It can help you to deal with rejection of your ideas.
  • Could you get similar benefits without formal pairing?
  • Yes, to some extent.
  • What do we mean by pairing anyway?
  • ... Be at the same machine, working on something.
  • ... Not simply a demonstration.
  • ... Change who is driving during the session.
  • We have arranged pairing of new hires with everyone else in the test team.
  • ... and we want to keep those communication channels open.
  • We are just embarking on a pairing experiment based on Katrina's.

Reading Recommendations.
  • Edward Tufte, Beautiful Evidence, Envisioning Information: data analysis is about comparison so find ways to make comparisons easier, more productive etc on your data.
  • Michael Lopp, Managing Humans: we all have to deal with other people much of the time.
  • Amir Alexander, Infinitesimal: experimental vs theoretical (perhaps related to the idea of proof we discussed earlier).
Edit: Karo Stoltzenburg has blogged about the same session and Sneha Bhat wrote up notes from her group too.

Image: https://flic.kr/p/aBqExB

Listens Learned in Software Testing

Wed, 01/25/2017 - 22:50

I'm enjoying the way the Test team book club at Linguamatics has been using our reading material as a jumping-off point for discussion about our experiences, about testing theory, and about other writing. This week's book was a classic, Lessons Learned in Software Testing, and we set ourselves the goal of each finding three lessons from the book that we found interesting for some reason, and making up one lesson of our own to share.

Although Lessons Learned was the first book I bought when I became a tester, in recent times I have been more consciously influenced by Jerry Weinberg. So I was interested to see how my opinions compare to the hard-won suggestions of Kaner, Bach and Pettichord.

For this exercise I chose to focus on Chapter 9, Managing the Testing Group. There are 35 lessons here and, to the extent that it's possible to say with accuracy given the multifaceted recommendations in many of them, I reckon there's probably only a handful that I don't practice to some extent.

One example: When recruiting, ask for work samples (Lesson 243). This advice is not about materials produced during the interview process. It is specifically about bug reports, code, white papers and so on from previous employments or open source projects. I can't think of an occasion when I've asked for that kind of thing from a tester.

Of the 30 or so lessons that I do recognise in my practices, here's three that speak to me particularly today: Help new testers succeed (Lesson 220), Evaluate your staff as executives (Lesson 213), The morale of your staff is an important asset (Lesson 226).

Why these? Well, on another day it might be three others - that's what I've found with this book over the years - but in the last 12 months or so our team has grown by 50% and we've introduced a line management structure. This triplet speaks to each of those changes, and to the team as a whole, as we work through them.

When bringing new people into our team we have evolved an induction process which has them pair with everyone else on the team at least a couple of times, provides a nominated "first point of contact", a checklist of stuff to cover in each of the first days, weeks and months, a bunch of introductions to the Test team and company (from within the team) and to other parts of the company (from friendly folk in other groups). We also have the new person present to the team informally about themselves and their testing, to help us to get to know them, develop some empathy for them, to give them some confidence about talking in front of the established staff. Part of the induction process is to provide feedback on the induction process (this is not Fight Club!) which we use to tune the next round.

Until this year, I have conducted all annual reviews for all of the testers in my team. This affords me the luxury of not having to have externalised the way in which I do it. That's not to say that I haven't thought about it (I have, believe me, a lot) or that I haven't evolved (again, I have) but more that I haven't felt the need to formally document it. Now that there are other line managers in the team I have begun that process by trying to write it down (almost always a valuable exercise for me) and explaining it verbally (likewise; particularly when doing it to a bunch of intelligent, questioning testers).

How to assess the performance and development needs of your team (fairly) is tricky. The notion of executive in Lesson 213 is from Drucker - "someone who manages the value of her own time and affects the ability of the organisation to perform" as Lesson 211 puts it - and essentially cautions against simple metrics for performance in favour of a wide spread of assessments, at least some of which are qualitative, that happen regularly and frequently. It recommends paying attention to your staff, but contrasts this with micromanagement.

Past experience tells me that there is almost never consensus on a course of action within my teams and I rarely get complete agreement on the value of an outcome either. However, it's important to me to be as inclusive and open and transparent as possible about what I'm doing, in part because that's my philosophical standpoint but also because I think that it contributes to team morale, and that is crucial to me (Lesson 226) because I think a happy team is in a position to do their best work.

When planning and going through the kind of growth and restructuring that our team has in the last 12 months, morale was one of my major concerns - of the new hires, of the existing team members, and also of myself. It's my intuition that important aspects of building and maintaining morale in this kind of situation include providing access to information, opportunity to contribute, and understanding of the motivation.

I didn't want anyone to feel that changes were just dropped on them, that they weren't aware changes were coming, that they had no agency, or that they hadn't had the chance to ask questions or express their preferences or suggestions. So I tried to explain what options were being considered, why I made the particular changes that I did, and what my concerns about them were, and to make clear that my full support is available to anyone who wants or needs it.

The lesson of my own that I chose to present is one I've come back to numerous times over the years:

  If you can, listen first.

Listening here is really a proxy for the intake of information through any channel, although listening itself is a particularly important skill in person-to-person interactions. Noticing the non-verbal cues that come along with those conversations is also important, and likewise remembering that the words as said are not necessarily the words as meant.

Since reading What Did You Say? I have become much more circumspect about offering feedback. Feedback is certainly part of the management role - any manager who is not willing to give it when requested is likely not supporting their staff to the fullest extent they could - but feeling that it's the manager's role to dispense feedback at (their own) will is something I've come to reject.

These days I try to practice congruent management and a substantial part of that is that it requires understanding - of the other person, the context and yourself. Getting data on the first two is important to aid understanding, and listening - really listening, not just not speaking - is a great data-gathering tactic. The Mom Test, which I read recently, makes essentially the same point over and over and over. And over.

Listening isn't always pleasant - I think, for example, of a 30-minute enumeration of things a colleague felt I hadn't done well - but I try to remember that the other person - if being honest - is expressing a legitimate perspective - theirs - and understanding it and where it comes from - for them - is likely to help me to understand the true meaning of the communication and help me to deal with it appropriately.

And that's a lesson in itself: managing is substantially about doing your best to deal with things appropriately.
Image: Wiley

Speaking Easier

Wed, 01/18/2017 - 22:50
Wow.

I've been thinking about public speaking and me.

Wind back a year or so. Early in November 2015 I presented my talk, my maiden conference talk, the first conference talk I'd had accepted, in fact the only conference talk I had ever submitted, on a massive stage, in a huge auditorium, to an audience of expectant software testers who had paid large amounts of money to be there, and chosen my talk over three others on parallel tracks. That was EuroSTAR in Maastricht. I was mainlining cough medicine and menthol sweets for the heavy cold I'd developed and I was losing my voice. The thing lasted 45 minutes and when I was finished I felt like I was flying.

Wind back another year or so. At the end of July 2014 I said a few words and gave a leaving present to one of my colleagues in front of a few members of staff in the small kitchen at work. I was healthy and the only drug I could possibly be under the influence of was tea (although I do like it strong). The thing lasted probably two minutes and when I was finished I felt like I'd been flushed out of an aeroplane toilet at high altitude.

Wind forward to today, January 2017. In the last couple of years, in addition to EuroSTAR, I have spoken a few times at Team Eating, the Linguamatics brown bag lunch meeting, spoken to a crowded kitchen for another leaving presentation, spoken to the whole company on behalf of a colleague, spoken at several Cambridge tester meetups, spoken at all three of the Cambridge Exploratory Workshops on Testing, spoken at the Midlands Exploratory Workshop on Testing, spoken at the UK Test Management Forum, and spoken at a handful of local companies, and opened the conference (yes, really) at the most unusual wedding I've ever been to.

I'm under no illusion that I'm the greatest public speaker in the world; I'm probably not even the greatest public speaker in my house. But, and this is a big one, I'm now confident enough about my ability to stand in front of people and speak that it's no longer the ordeal it had turned into. In fact, at times I have even enjoyed it.

Now back to 2014 and that kitchen. I stood stiffly, statically, squarely in front of the fridge. Someone tapped the glass for quiet and as I spoke my scrap of paper wobbled, and my voice trembled, and my knees knocked.


The worse I felt about the delivery, the worse the delivery seemed to get, and the worse I felt, and the worse it seemed to get ... After stumbling back to my desk I decided enough was enough: I was going to do something about my increasing nervousness at speaking in public. And so, on the spur of the moment, I challenged myself to speak at a testing conference.

Wow.

I found that the EuroSTAR call for papers was open, and I wrote my proposal, and got some comments on it from testers I respect, and I rewrote my proposal, and I sent it off, and I crossed my fingers without being quite sure whether I was hoping to be accepted or not. Then, if I'm honest, I made very little progress for a couple of months, until I came across Speak Easy.

Speak Easy teams inexperienced speakers with experienced mentors to help with any aspect of conference presentations. It sounded relevant so I signed up and, within a few days, James Lyndsay got in touch. In our first exchange, this is what I told him I wanted:
  • Tips, strategies, heuristics for keeping the nerves in check - ultimately, I'd like to be able to stand in front of anyone and feel able to present.
  • Tips for building, crafting, structuring presentations and talks - I imagine that confidence in the material will help confidence in delivery.
  • Any other relevant suggestions.

Amongst other things, he asked me questions such as what did I mean by nerves? When did I get them? And what was I currently using to moderate them?

Amongst other things, he gave me a suggestion: "having confidence in your material can help, but not as much as knowing the stuff".

Amongst other things, he assigned me a task: visualising a variety of scenarios in which I was required to speak in front of different audiences (people I knew, experts in my field, experts in an unfamiliar field, ...) from different positions (presenter, audience member, ...).

Amongst other things, he had me watch several talks, concentrating on the breathing patterns of the speakers rather than their words.

Based on my responses, he proposed further introspection or experimentation. In effect, he explored my perception of and reaction to my problem with a range of different tools, looking for something that might provide us with an "in". In retrospect, I think I could have done more of this myself. But, again in retrospect, I think I was too close to it, too bound up in the symptoms to be able to see that.

Amongst other things, and a little out of the blue, for both of us, he mentioned that I might look into Toastmasters on the basis of Tobias Mayer's blog post, Sacred Space, published just a few days previously. So I did. In fact, I went to the next meeting of Cambridge City Communicators, which was the following week, and I stood up to speak.

I reported back to James afterwards: I was thrown an "agony aunt" question and had to answer it there and then, with no prep time. I was nervous, I was pleased that I didn't gabble, I deliberately paused, and my voice didn't (I don't think) shake.  They told me that I was very static (they are hot on body language and gesture) and I ummed a little. But my personal feedback is that although I was able to some extent to overcome the shakes and the thumping chest, I wasn't my natural self.  I was concentrating so much on the medium that the message was very average. So I think I want to tune my goal in Speak Easy: I want to feel like myself when speaking in front of a group.

I can't emphasise enough how big a deal this last point was for me. It changed what I wanted to change. I realised that I could live with being nervous if it was me that was nervous and not someone else temporarily inhabiting my body.

Wow.

And that was just as well, because during this period I got an email from EuroSTAR. I'd been accepted. Joy! Fear!

So I signed up to Toastmasters and committed myself to stand up and speak at every meeting I attended, and to do so without notes from the very beginning, and to do it wholeheartedly. I learned a few things:
  • I can write a full draft and then speak it repeatedly to make it sound like it should be spoken.
  • That rehearsal lets me smooth out the places where I stumble initially, and find good lines that will be remembered and used again. 
  • Experimenting with how much rehearsal I need to get the balance between natural and stilted right was useful because I can now gauge my readiness when preparing (to some extent).
  • Standing and sitting to speak are different for me. Standing is much more nerve-wracking, even alone, so now I try to practice standing up. 
  • I can squeeze rehearsal into my day, if I try. For instance, I'll put my headphones on and (I hope) appear to be having a phone conversation to anyone I walk past as I do a lap of the Science Park at lunch times.
  • Speaking without notes from the start forced me to find ways to learn the material.
  • Doing it more helps, so I sought out opportunities to speak.

I attended Toastmasters religiously every two weeks and kept up my goal of speaking at every meeting in some capacity. The possibilities include scheduled talks, ad hoc "table topics" where people volunteer to speak on a subject that's given to them there and then, and various functional roles. Whatever I was doing, I'd look for a way to prepare something for it, or dive into the unexpected with full enthusiasm.

I frequently didn't enjoy either my performance or my assessment of my performance, but I found that I could see incremental improvement over time. I used James as a sounding board, reporting back to him every now and again about problems I'd had or victories that I felt I'd won, or about the positive things that attending Toastmasters was giving me:
  • The practice: to get up and speak on a regular basis in front of a bunch of people for whom, ultimately, it made no difference whether I was good, bad or indifferent.
  • The formality:  I found that the ceremony and rigidity removed uncertainty, allowing me to focus more on the speaking.
  • The common aim: the people there all want to improve as speakers, and want others to improve as speakers too, and that gives a strong sense of solidarity and security. 
  • The feedback: in addition to slips of paper filled in by each member for each speaker there is feedback on every speech from another Toastmaster, delivered by them as a speech in itself.

Talking of feedback, a summary of the advice I was given in the eight or nine months I was there might be: speak clearly, don't be afraid to pause, include variety in my voice, use my hands to emphasise and illustrate points, use some basic structural and rhetorical devices, stop rocking backwards and forwards and shuffling my feet, stop touching my nose.

Other than the last couple, which are habits I had no idea I had, this is standard advice for beginner speakers. What's useful, I found, is to get it applied to you regularly about some speaking you've just been doing, rather than reading it in a blog post when you haven't been anywhere near presenting for months.

But enough of that, because suddenly it was the start of November and I was in a taxi, in a plane, in a taxi, on a train, in a taxi, in front of a stage at a conference centre in Maastricht waiting to deliver my talk.

And then I was on the stage. And I had a headset mic on - which I had never done before.  And I was coughing, and the sound tech was coughing. And we shared my cough sweets. And I was being introduced. And I was stepping forward from the side and ... and ... and ... amazingly I found that I was smiling.

And I was interacting with the audience. And I was making a joke. And they were laughing. And I wasn't shaking. And my voice wasn't catching. And I was delivering my talk in what felt like a natural way, with pauses, at a natural pace ... and although I can't be sure what I was doing with my feet, I can say that my head was very, very big.


Wow.

A few weeks later, I got an email from the organisers:
Thank you for contributing to the success of EuroSTAR Conference 2015, we hope you enjoyed the experience of speaking in Maastricht. We have amalgamated all the information from attending delegates and your feedback scores and comments on your session are included below. Individual speakers were evaluated by delegates using a 1-10 basis (10 being excellent - 1 being poor). We categorize sessions by the following standards:
  • 9.00 – 10.00 Outstanding
  • 8.00 – 8.99 Excellent
  • 7.00 – 7.99 Good
  • 6.00 – 6.99 Low Scoring
  • Under 6.00 – Below minimum standard acceptable
Your score was 5.90 from 67 respondents which, according to the above table, came in the Below Minimum bracket. The track session presentation overall average score (40 track sessions) was 7.51. Comments on forms below:
  • Well, fun but what am I going to do with this?? (+ some jokes don’t work on non-British people).
  • Accent!
  • as hard to understand if you're not a native in English language
  • The core ideas turned out more interesting than I expected, but needs post processing by me.
  • Good presentation but very specific to native speakers.  Really good work done on linking patterns but I think will not reach wide audience 
Wow.

And I'd got similar comments directly too. I'd known from my practice runs that including jokes (in a talk called Your Testing is a Joke, about the links between testing and joking) was a risk to non-native speaker comprehension, and I'd changed the talk to reduce it. It's also indisputable that I have an accent (I'm from the Black Country and it shows) and I think that having a heavy cold probably contributed to any lack of clarity.

So it wasn't great getting this kind of feedback - duh! - but knowing what I wanted prevented me from being discouraged: on that stage on that day, however it came across to anyone else, I was myself.

Thankfully, usefully, I did also get some positive feedback from attendees at the conference and the content of my talk was validated by winning the Best Paper prize. But even without those things I think I'd have been able to take significant positives in spite of the audience reviews.

Back at work, I quickly had an opportunity to exorcise a demon by doing another leaving presentation. I treated it as I would a Toastmasters talk and wrote a draft in full, which I then repeated until I'd smoothed it out sufficiently. And then in the kitchen I wasn't rubber-legging and I wasn't heart-pounding and I wasn't knee-knocking, and I tapped the glass and I spoke without notes and I got a laugh and I ad-libbed. And, sure, I stumbled a bit, but I was still there and doing it and doing it well. Or, at least, well enough.

I've been thinking about public speaking and me.

I wouldn't want to claim anything too grand. I haven't cracked the art of presenting. I still get nerves. I am not suggesting that you must do the same things as I did. I am not claiming that I haven't had some setbacks, and I don't have a magic wand to wave. But if I tried to summarise what I've done, I guess I'd say something like this:
  • I decided I wanted to change.
  • I found out what I wanted to change to.
  • I was open to ways to help me get there.
  • I looked for, or made, openings.
  • I reflected on what I was doing.
  • I stuck at it.

And I made my change happen.

Wow.
Images: Black Country T-Shirts, Cambridge City Communicators

Without Which ...

Wed, 01/11/2017 - 08:40

This week's Cambridge Tester meetup was a show-and-tell with a theme:
Is there a thing that you can't do without when testing? A tool, a practice, a habit, a method that just works for you and you wouldn't want to miss it?

Here's a list, with a little commentary, of some of the things that were suggested:
  • Testability: mostly, in this discussion, it was tools for probing and assessing a product.
  • Interaction with developers: but there's usually a workaround if they're not available ...
  • Workarounds
  • The internet: because we use it all the time for quick answers to quick questions (but wonder about the impact this is having on us).
  • Caffeine: some people can't do anything without it.
  • Adaptability: although this is like making your first wish be infinite wishes.
  • People: Two of us suggested this. I wrote my notes up in Testing Show.
  • Emacs
  • Money: for paying for staff, tools, services etc.
  • Visual modelling: as presented, this was mostly about system architecture, but could include e.g. mind maps.
  • Notebook and pen: writing gives clarity
  • Phone: for playing games as a break from work.
  • Explainability: "it's my job to eradicate inexplicability."
  • Freedom/free will: within the scope of the mission
  • Problems: because we'll be out of a job without them.

Image: https://flic.kr/p/5Wqpov

Testing Show

Wed, 01/11/2017 - 08:22

This week's Cambridge Tester meetup was a show-and-tell with a theme:
Is there a thing that you can't do without when testing? A tool, a practice, a habit, a method that just works for you and you wouldn't want to miss it?

Thinking about what I might present I remembered that Jerry Weinberg, in Perfect Software, says "The number one testing tool is not the computer, but the human brain — the brain in conjunction with eyes, ears, and other sense organs. No amount of computing power can compensate for brainless testing..."

And he's got a point. I mean, I'd find it hard to argue that any other tool would be useful without a brain to guide its operation, to understand the results it generates, and to interpret them in context.

In show-and-tell terms, the brain scores highly on "tell" and not so much on "show", at least without a trepanning drill. But, in any case, I was prepared to take it as a prerequisite for testing so I thought on, assuming I could count on my brain being there, and came up with this:
The thing I can't do without when testing is people. Why? Well, first and foremost, software is commissioned by people, and built by people, and functions to service the needs of people. Without those people there wouldn't be software for me to test. As a software tester I need software and software needs people. And so, by a transitive relationship, I need people.

Which is a nice line, but a bit trite. So I thought some more.

What do people give me when I'm testing? Depending on their position with respect to the software under test they might provide
  • background data such as requirements, scope, expectations, desires, motivations, cost-benefit analyses, ...
  • test ideas and feedback on my own test ideas
  • insight, inspiration, and innovation
  • reasons to test or not to test some aspects of the system
  • another perspective, or perspectives 
  • knowledge of the mistakes they've made in the past, so perhaps I need not make them   
  • coaching
  • the chance to improve my coaching
  • satisfaction of a basic human need for company and interaction
  • ...

There are methodologies and practices that recognise the value of people to other people. For example, XP, swarming, mobbing, pairing, 3 Amigos, code reviews, peer reviews, brainstorming, ... and then there are those approaches that provide proxies for other people such as personas, thinking hats, role playing, ...

Interactions with others needn't be direct: requirements, user stories, books, blogs, tweets, podcasts, videos, magazines, online forums, and newsletters, say, are all interactions. And they can be more or less formal, and facilitated,  like Slack channels, conferences, workshops, and even meetups. They're generally organised by people, and the content created by people for other people, and the currency they deal in is information. And it's information which is grist to the testing mill.

And that's an interesting point because, although I do pair test sometimes, for the majority of my hands-on testing I have tended to work alone. Despite this, the involvement of other people in that testing is significant, through the information they contribute.

Famously, to Weinberg and Bolton, people are crucial in both a definition of quality and indeed a significant proportion of everything else too.
  • Quality is value to some person.
  • X is X to some person at some time.

Fair enough, you might ask with a twinkle in your eye, but didn't Sartre say "Hell is other people"?

Yes he did, I might reply, and I've worked with enough other people now to know that there's more than a grain of truth in that. (Twinkling back atcha!) But in our world, for our needs, I think it's better to think of it this way: software is other people.
Image: https://flic.kr/p/gp2CDC

Edit: I've listed some of the other things that were suggested at the meetup in Without Which.

State of Play

Wed, 01/04/2017 - 08:23
The State of Testing Survey for 2017 is now open. This will be the fourth iteration of the survey and last year's report says that there were over 1000 respondents worldwide, the most so far.

I think that the organisers should be applauded for the efforts they're putting into the survey. And, as I've said before, I think the value from it is likely to be in the trends rather than the particular data points, so they're playing a long game with dedication.

To this end, the 2016 report shows direct comparisons to 2015 in places and has statements like this in others:
We are starting to see a trend where testing teams are getting smaller year after year in comparison with the results from the previous surveys.

I'd like to see this kind of analysis presented alongside the time-series data from previous years and perhaps comparisons to other relevant industries where data is available. Is this a trend in testing or a trend in software development, for instance?

I'd also like to see some thought going into how comparable the year-to-year data really is. For example: is the set of participants sufficiently similar (in statistically important respects) that direct comparisons are possible? Or do some adjustments need to be made to account for, say, a larger number of respondents from some part of the world or from some particular sector than in previous years. Essentially: are changes in the data really reflecting a trend in our industry, or perhaps a change in the set of respondents, or both, or something else?

While I'm wearing my wishing hat I'd be interested in questions which ask about the value of the changes that are being observed. For example, are smaller teams resulting in better outcomes? What kind of outcomes? For who? I wonder whether customers or consumers of testing could be polled too, to give another perspective, with a different set of biases.
Image: https://flic.kr/p/9cTwhS

What We Found Not Looking for Bugs

Sat, 12/31/2016 - 22:05
This post is a conversation and a collaboration between Anders Dinsen and me. Aside from a little commentary at the top and edits to remove repetition and side topics, to add links, and to clarify, the content is as it came out in the moment, over the course of a couple of days.

A question I asked about not looking for bugs at Lean Coffee in Cambridge last month initiated a fun discussion. The discussion suggested it’d be worth posing the question again in a tweet. The tweet in turn prompted a dialogue.

Some of the dialogue happened on public Twitter, some via DM, and on Skype, and yet more in a Google doc, at first with staggered participation and then in a tight synchronous loop where we were simultaneously editing different parts of the same document, asking questions and answering them in a continuous flow. It was at once exhilarating, educational and energising.

The dialogue exposes some different perspectives on testing and we decided to put it together in a way that shows how it could have taken place between two respectful, but different, testers.

--00--
James: Testing can’t find all the bugs, so which ones shouldn’t we look for? How?

Anders: My brain just blew up. If we know which bugs not to look for, why test?

James: Do you think the question implies bugs are known? Could they be expected? Suspected?

Anders: No, but you appear to know some bugs not to find.

James: I don't think I'm making any claims about what I know, am I?

Anders: Au contraire, "which bugs" seems quite specific, doesn't it?

James: By asking "which" I don't believe I am claiming any knowledge of possible answers.

Anders: I think this is a valid point.

Testing takes place in time, and there is a before and an after. Before, things are fundamentally uncertain, so if we know bugs specifically to look for, uncertainty is an illusion.

That testing takes place in time is obvious, but still easily forgotten, like most other things that relate to time.

In our minds, time does not seem as real as it is. Remember, that we can just as vividly imagine the future and remember the past as we can experience the current. In our thoughts, we jump back and forth between imagination, the current and memory of the past, often without even realizing that we are in fact jumping.

When I test, I hope an outcome of testing will be test results which will give me certainty so that I can communicate clearly to decision makers and help them achieve certainty about things they need to be certain about to take decisions. This happens in time.

So before testing, there is uncertainty. After testing, some kind of certainty exists in someone (e.g. me, the tester) about the thing I am testing.

Considering that, testing is simple, but it follows that expecting and even suspecting bugs implies some certainty, which will mislead our testing away from the uncertain.

James: I find it problematic to agree that testing is simple here - and I’ve had that conversation with many people now. Perhaps part of it is that "testing" is ambiguous in at least two interesting senses, or at least at two different resolutions:
  • the specific actions of the tester
  • a black box into which stakeholders put requirements and from which they receive reports

These are micro and macro views. In The Shape of Actions, Harry Collins talks about how tasks are ripe for automation when the actors have become indifferent to the details of them. I wrote on this in Auto Did Act, noting that the perspective of the actor is significant.

I would want to ask this: from whose perspective is testing simple? Maybe the stakeholder can view testing as simple, because they are indifferent to the details: it could be off-shore workers, monkeys, robots, or whatever doing the work so long as it is "tested".

I am also a little uncomfortable with the idea of certainty as you expressed it. Are we talking about certainty in the behaviour of the product under test, or some factor(s) of the testing that has been done, or something else?

I think I would be prepared to go this far (a toy sketch follows the list):
  • Some testing, t, has been performed
  • Before t there was an information state i
  • After t there is an information state j
  • It is never the case that i is equal to j (or, perhaps, if i is equal to j then t was not testing)
  • It is not the case that only t can provide a change from i to j. For example, other simultaneous work on the system under test may contribute to a shared information state.
  • The aim of testing is that j is a better state than i for the relevant people to use as the basis for decision making
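For concreteness only, here's a toy Python model of those information states, assuming (as I suggest below) that a state is a set of assertions with confidence scores. Every name in it is illustrative, not part of the dialogue.

```python
# A toy, assumed model of an information state: assertions about the
# world, each with a confidence score. Nothing here is a real method.

from dataclasses import dataclass

@dataclass(frozen=True)
class InformationState:
    assertions: tuple  # of (assertion, confidence) pairs

def apply_testing(i: InformationState, evidence: dict) -> InformationState:
    """Return the state j produced by some testing t applied to state i."""
    merged = dict(i.assertions)
    merged.update(evidence)
    return InformationState(tuple(sorted(merged.items())))

i = InformationState((("login works on Chrome", 0.5),))
j = apply_testing(i, {"login works on Chrome": 0.9})  # some testing t happened
assert i != j  # if i equalled j, t arguably wasn't testing
```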

Anders: But certainty is important, as it links to someone, a stakeholder, a human. Certainty connotes a state of knowledge in something that has a soul, not just a mathematical or mechanical entity.

This leads me to say that we cannot have human testing without judgement.
Aside: It’s funny that the word checking, which we usually associate with automatic testing, might actually better describe at least part of human testing, as the roots of ‘check’ are the same as the game of chess, the Persian word for king. The check is therefore the king’s judgement, a verdict of truth, gamified in chess, but in the real world always something that requires judgement. But that was a stray thought ... What’s important here is that some way or another testing is not only about information.

I accept that as testers we produce information, even streams of tacit and explicit knowledge, in testing, and some of that can be mechanistically or algorithmically produced. But if we are to use it as humans and not only leave it to the machines to process, we must not only accept what we observe in our testing, we must judge it. At the end of the day (or the test) at least we must judge whether to keep what we have observed to ourselves, or if we should report it.

James: I did not define what I mean by an information state. If you pushed me to say something formal, I might propose it’s something like a set of assertions about the state of the world that is relevant to the system under test, with associated confidence scores. I might argue that much of it is tacitly understood by the participants in testing and the consumption of test results. I might argue that there is the potential for different participants to have different views of it - it is a model, after all. I might argue that it is part of the dialogue between the participants to get a mutual understanding of the parts of j that are important to any decisions.

This last sentence is critical. While there will (hopefully) be some shared understanding between the actors involved, there will also be areas that are not shared. Those producing the information for the decision-maker may not share everything that they could. But even if they were operating in such a way as to attempt to share everything that was relevant to the decision, their judgement is involved and so they could miss something that later turns out to be important.
Aside: I wonder whether it is also interesting to consider that they could over-share and so cloud the decision with irrelevant data. It is a real practical problem but I don’t know whether it helps here. If it does, then the way in which information is presented is also likely to be a factor. Similarly, the decision-maker may have access to information from other sources. These may be contemporary or historical, from within the problem domain or not, ...

So, really, I think that having two information states - pre and post t - is an oversimplification. In reality, each actor will have information states taking input from a variety of sources, changing asynchronously. The states i and j should be considered (in a hand-wavy way) the shared states. But we must remember that useful information can reside elsewhere.
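
To wave my hands slightly less, here is one possible sketch. The claims, the scores, and the choice of min() to combine them are all invented for illustration:

```python
# Each actor holds their own, asynchronously updated information
# state: a mapping from assertion to a confidence score in [0, 1].
tester = {
    "search is slow on large data sets": 0.9,
    "the fix for bug 123 looks fragile": 0.6,  # judged, not yet reported
}
developer = {
    "search is slow on large data sets": 0.7,
    "an index rebuild is the likely cause": 0.8,
}

# The shared state is only the overlap of what the actors hold;
# useful information (the fragility worry) resides elsewhere.
shared = {
    claim: min(tester[claim], developer[claim])
    for claim in tester.keys() & developer.keys()
}
print(shared)  # {'search is slow on large data sets': 0.7}
```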

Anders: I feel this is too much PRINCE2, where people on the shop floor attach tuples of likelihood and consequence scores to enumerated risks, thereby essentially hiding the important information needed to make good, open-eyed decisions about risks.

James: Perhaps. I have been coy about exactly what this would look like because I don't have a very well-formed or well-informed story. In Your Testing is a Joke, I reference Daniel Dennett, who proposes that our mental models are somewhat like the information state I've described. But I don't think it's possible or desirable to attempt to do this in practice for all pieces of information, even if it were possible to enumerate them all.

Anders: I have witnessed such systems in operation and have had to live with the consequences of them. I have probably developed a very sceptical attitude because of that.

But we should not forget that testing is a human activity in a context, and that it takes my human capacity to judge what I observe in testing and to convey messages about it to stakeholders.

James: I’m still not comfortable with the term "certainty".

I might speculate that certainty as you are trying to use it could be a function of the person and the information states I’m proposing. Maybe humans have some shared feeling about what this function is, but it can differ by person. So perhaps a dimension of the humanity in this kind of story is in the way we "code" the function that produces certainty from any given information state.

The data in the information state can be produced by any actor, including a machine, but the interpretation of that information to provide confidence (a term I'm more comfortable with, but see e.g. this discussion) is of course a human activity. (But recent advances in AI suggest that perhaps it won’t necessarily always be so, for at least some classes of problem.)
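
One way to render that speculation in code, with personas and weights invented for illustration (and real humans, of course, are not linear functions):

```python
# One information state: assertions with attached confidences.
state = {
    "all regression checks pass": 0.95,
    "exploratory session found nothing new": 0.7,
    "load testing was skipped": 1.0,
}

# Each person "codes" their own function from information state to
# certainty; here that personal coding is just a set of weights.
weights = {
    "cautious manager": {
        "all regression checks pass": 0.3,
        "exploratory session found nothing new": 0.2,
        "load testing was skipped": -0.5,
    },
    "optimistic developer": {
        "all regression checks pass": 0.6,
        "exploratory session found nothing new": 0.3,
        "load testing was skipped": -0.1,
    },
}

def certainty(person, state):
    return sum(weights[person][claim] * confidence
               for claim, confidence in state.items())

print(certainty("cautious manager", state))      # about -0.08
print(certainty("optimistic developer", state))  # about 0.68
```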

Anders: Can I please ask you to join "team human", i.e. to accept that all relevant actors (except the tools we use and the item under test) are humans with human capabilities: real thoughts and, perhaps most importantly, gut feelings?

Can you accept that, fundamentally, a test result produced by a human is not produced mechanistically, but by human interpretation of what the human senses (e.g. sees), by experience, by imagination, and ultimately by judgement?

James: Think of statistics. There are numerous tools that take information and turn it into summaries of it. Some of them are named to suggest that they give confidence (confidence intervals, for example, or significance). Those tools are things that humans can drive without thought - so, essentially, machines.
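
For example, a rough 95% confidence interval can be produced entirely without thought. This sketch uses only Python's standard library, and the numbers are made up:

```python
import math
import statistics

# Response times in ms from some test runs - invented data.
samples = [102, 98, 110, 95, 101, 99, 107, 103]

mean = statistics.mean(samples)
sem = statistics.stdev(samples) / math.sqrt(len(samples))

# A rough 95% confidence interval via the normal approximation.
# Producing the interval is mechanical; deciding what it means for
# this product, in this context, is not.
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"95% CI for the mean: ({low:.1f}, {high:.1f}) ms")
```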

Anders: I fundamentally cannot follow you there. Nassim Taleb is probably the most notable critic of statistics interpreted as something that can give confidence. His point (and mine) is that confidence as a mathematical term should not be confused with real confidence, that which a person has.

James: I think we are agreeing. Although the terms are named in that way, and may be viewed in that way by some - particularly those with a distant perspective - the results of running those statistical methods on data must inherently be interpreted by a human in context to be meaningful, valuable.

Anders: Ethically, decisions should be taken on the basis of an understanding of information. Defining "understanding" is difficult, but there must be some sort of judgement involved, and then I’m back at square one: I use all my knowledge and experience, and connect to my values, but at the end of the day what I do is in the hands of my gut feeling.

James: Perhaps another angle is that data can get into (my notion of) an information state from any source. This can include gut, experiment, hearsay, lies. I want each item of data to have some level of confidence attached to it (in some hand-wavy way, again).

The humanistic aspect that you desire can be modelled here. It’s just not the only or even necessarily the most important factor, until the last step where judgement is called for.
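
Sketching that, with the sources, claims, confidences, and the crude decision rule all invented for illustration:

```python
# Items of data entering the information state, each tagged with
# its source and a (hand-wavy) level of confidence.
evidence = [
    ("experiment", "crash reproduced on build 42", 0.95),
    ("gut", "the error handling feels brittle", 0.5),
    ("hearsay", "a customer reported data loss", 0.4),
    ("lies", "sales say it is already fixed", 0.05),
]

def judge(evidence):
    """The last step, where human judgement is called for."""
    weight = sum(confidence for _source, _claim, confidence in evidence)
    return "report now" if weight > 1.0 else "keep investigating"

print(judge(evidence))  # "report now": 0.95 + 0.5 + 0.4 + 0.05 > 1.0
```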

Anders: This leads me to think about kairos: that there is a moment in which testing takes place, the point in time where the future turns to become the past. Imagine your computer clock shows 10.24 am and you know you have found a bug. When is the right time to tell the devs about it? They are in a meeting now about future features. Let’s tell them after lunch.

Kairos for communicating the bug seems to be "after lunch".

But it is not just about communication; there could even be a supreme moment for performing a test. It could be one that I have just had the idea for, one I sketched out yesterday in a mind map, noted on a post-it, or prepared in a script months ago.

Kairos in testing could be the moment when our minds are open to the knowledge stream of testing so we can let it help us reach certainty about the test performed.

James: I am interested in the extent to which you can prepare the ground for kairos. What can someone do to make kairos more likely? As a tester, I want to find the important issues. Kairos would be a point at which I could execute the right action to identify an important issue. How to get to that moment, with the capacity to perform that action?

Anders: There is, to me, no doubt that kairos is a "thing" in the human-to-human parts of what we do in testing: communication, particularly, but also leadership. A sense of kairos involves having an intuition about what is opportune to communicate in a given moment, and about when the opportune moment is to communicate what seems important to you. Of course, it could also be about having a sense that some testing, carried out at a particular moment, will have a good effect on the project.

Whether kairos is a thing in what happens only between the tester and the object being tested (and possibly other machines), I doubt; or, if it were, we would certainly be reaching far beyond the original meanings of kairos.

James: I think this is tied to your desire for a dialogue to be only between two souls, as we discussed on Skype. We agreed then that it is possible for one person to have an internal dialogue, and so two souls need not be necessary in at least that circumstance. I’d argue it's also not necessary in general. (Or we have to agree some different definition of dialogue.)

Anders: I do appreciate that some testers have a "technical 6th sense", e.g. when people experience themselves as "bug magnets". I think, however, that this comes from creative talents, imagination, technical understanding, and an understanding of the types of mistakes programmers make, more than from human relations or "relations" to machines. I think it would then be better to talk about "opportune conditions", which, I think, would probably be the same as "good heuristics".

James: From Wikipedia: In rhetoric, kairos is "a passing instant when an opening appears which must be driven through with force if success is to be achieved."

Whether at a time or under given conditions (and I'm not sure the distinction helps), it seems that kairos requires the speaker and listener (to give the roles anthropomorphic names for a moment) to both be in particular states:
  • the speaker must be in a position to capitalise on whatever opportunity is there, but also to recognise that it is there to be acted upon.
  • the listener must (appear to the speaker to) be in a state that is compatible with whatever the speaker wants to do.

Whether or not the opportunity is acted upon, I think these are true. Notice that they include both time and conditions. Time can exist (forgetting metaphysical concerns) without conditions being true, but the conditions must necessarily exist in a time. So I argue that if you want to tie to conditions you are necessarily tying to time also. If I follow your reasoning, then I think this means you might be open to kairos existing in human-machine interactions?

A difference that is apparent at several points in our dialogue here, I think, is that I want to make (software) testing be about more than interaction of a human with a product. I want it to include human-human interactions around the product. (See e.g. Testing All the Way Down and The Anatomy of a Definition of Testing.)

It’s my intuition that many useful techniques of testing cross over between interactions with humans and those with machines. And so I am interested in seeing what happens when you try to capture them in the same model of testing. And in the course of our discussion I’ve realised that I’ve been thinking along these lines for a while - see Going Postel or even Special Offers, for example.

I think that you want to separate these two worlds more distinctly than I do, and reserve more concepts, approaches and so on for humans only. But I think we have a shared desire to recognise the humanity at the heart of testing and to expect that human judgement is important to contextualise the outcomes of testing.

Anders: Yes, you are right, I want to separate the two worlds, and I realise now that the reason is that I hope testers will more actively recognise humanity, and especially what it means to be human. Too often, testers try to model humanity using terminology and understandings which are fundamentally tied to the technical domain.

This leads to a lot of (hopefully only unconscious) reductionism in the testing world. It’s probably caused by leading thinkers in testing having very technical rather than humanistic backgrounds.

So I am passionate that we do not confuse the technical moment in time at which I hit a key on my keyboard to start an automatic test suite - thereby altering the states of the system under test and of the testing tools, but not yet influencing any humans - with the kairos of testing, which is tied only to the human relations we have, including those we have with ourselves, and not to any machines.

Kairos happens when we let it happen.

Kairos is when we look down at the computer screen, sense what is on it, allow it to enter our minds, and start figuring out what happened and what that might mean.

...
Categories: Blogs