
Blogs

Speaking Easier

Hiccupps - James Thomas - Wed, 01/18/2017 - 22:50
Wow.

I've been thinking about public speaking and me.

Wind back a year or so. Early in November 2015 I presented my talk, my maiden conference talk, the first conference talk I'd had accepted, in fact the only conference talk I had ever submitted, on a massive stage, in a huge auditorium, to an audience of expectant software testers who had paid large amounts of money to be there, and chosen my talk over three others on parallel tracks. That was EuroSTAR in Maastricht. I was mainlining cough medicine and menthol sweets for the heavy cold I'd developed and I was losing my voice. The thing lasted 45 minutes and when I was finished I felt like I was flying.

Wind back another year or so. At the end of July 2014 I said a few words and gave a leaving present to one of my colleagues in front of a few members of staff in the small kitchen at work. I was healthy and the only drug I could possibly be under the influence of was tea (although I do like it strong). The thing lasted probably two minutes and when I was finished I felt like I'd been flushed out of an aeroplane toilet at high altitude.

Wind forward to today, January 2017. In the last couple of years, in addition to EuroSTAR, I have spoken a few times at Team Eating, the Linguamatics brown bag lunch meeting, spoken to a crowded kitchen for another leaving presentation, spoken to the whole company on behalf of a colleague, spoken at several Cambridge tester meetups, spoken at all three of the Cambridge Exploratory Workshops on Testing, spoken at the Midlands Exploratory Workshop on Testing, spoken at the UK Test Management Forum, spoken at a handful of local companies, and opened the conference (yes, really) at the most unusual wedding I've ever been to.

I'm under no illusion that I'm the greatest public speaker in the world; I'm probably not even the greatest public speaker in my house. But, and this is a big one, I'm now confident enough about my ability to stand in front of people and speak that it's no longer the ordeal it had turned into. In fact, at times I have even enjoyed it.

Now back to 2014 and that kitchen. I stood stiffly, statically, squarely in front of the fridge. Someone tapped the glass for quiet and as I spoke my scrap of paper wobbled, and my voice trembled, and my knees knocked.


The worse I felt about the delivery, the worse the delivery seemed to get, and the worse I felt, and the worse it seemed to get ... After stumbling back to my desk I decided enough was enough: I was going to do something about my increasing nervousness at speaking in public. And so, on the spur of the moment, I challenged myself to speak at a testing conference.

Wow.

I found that the EuroSTAR call for papers was open, and I wrote my proposal, and got some comments on it from testers I respect, and I rewrote my proposal, and I sent it off, and I crossed my fingers without being quite sure whether I was hoping to be accepted or not. Then, if I'm honest, I made very little progress for a couple of months, until I came across Speak Easy.

Speak Easy teams inexperienced speakers with experienced mentors to help with any aspect of conference presentations. It sounded relevant so I signed up and, within a few days, James Lyndsay got in touch. In our first exchange, this is what I told him I wanted:
  • Tips, strategies, heuristics for keeping the nerves in check - ultimately, I'd like to be able to stand in front of anyone and feel able to present.
  • Tips for building, crafting, structuring presentations and talks - I imagine that confidence in the material will help confidence in delivery.
  • Any other relevant suggestions.

Amongst other things, he asked me questions such as what did I mean by nerves? When did I get them? And what was I currently using to moderate them?

Amongst other things, he gave me a suggestion: "having confidence in your material can help, but not as much as knowing the stuff".

Amongst other things, he assigned me a task: visualising a variety of scenarios in which I was required to speak in front of different audiences (people I knew, experts in my field, experts in an unfamiliar field, ...) from different positions (presenter, audience member, ...).

Amongst other things he had me watch several talks, concentrating on the breathing patterns of the speakers rather than their words.

Based on my responses, he proposed further introspection or experimentation. In effect, he explored my perception of and reaction to my problem with a range of different tools, looking for something that might provide us with an "in". In retrospect, I think I could have done more of this myself. But, again in retrospect, I think I was too close to it, too bound up in the symptoms to be able to see that.

Amongst other things, and a little out of the blue, for both of us, he mentioned that I might look into Toastmasters on the basis of Tobias Mayer's blog post, Sacred Space, published just a few days previously. So I did. In fact, I went to the next meeting of Cambridge City Communicators, which was the following week, and I stood up to speak.

I reported back to James afterwards: I was thrown an "agony aunt" question and had to answer it there and then, with no prep time. I was nervous, I was pleased that I didn't gabble, I deliberately paused, and my voice didn't (I don't think) shake.  They told me that I was very static (they are hot on body language and gesture) and I ummed a little. But my personal feedback is that although I was able to some extent to overcome the shakes and the thumping chest, I wasn't my natural self.  I was concentrating so much on the medium that the message was very average. So I think I want to tune my goal in Speak Easy: I want to feel like myself when speaking in front of a group.

I can't emphasise enough how big a deal this last point was for me. It changed what I wanted to change. I realised that I could live with being nervous if it was me that was nervous and not someone else temporarily inhabiting my body.

Wow.

And that was just as well, because during this period I got an email from EuroSTAR. I'd been accepted. Joy! Fear!

So I signed up to Toastmasters and committed myself to stand up and speak at every meeting I attended, and to do so without notes from the very beginning, and to do it wholeheartedly. I learned a few things:
  • I can write a full draft and then speak it repeatedly to make it sound like it should be spoken.
  • That rehearsal lets me smooth out the places where I stumble initially, and find good lines that will be remembered and used again. 
  • Experimenting with how much rehearsal I need to get the balance between natural and stilted right was useful because I can now gauge my readiness when preparing (to some extent).
  • Standing and sitting to speak are different for me. Standing is much more nerve-wracking, even alone, so now I try to practice standing up. 
  • I can squeeze rehearsal into my day, if I try. For instance, I'll put my headphones on and (I hope) appear to be having a phone conversation to anyone I walk past as I do a lap of the Science Park at lunch times.
  • Speaking without notes from the start forced me to find ways to learn the material.
  • Doing it more helps, so I sought out opportunities to speak.

I attended Toastmasters religiously every two weeks and kept up my goal of speaking at every meeting in some capacity. The possibilities include scheduled talks, ad hoc "table topics" where people volunteer to speak on a subject that's given to them there and then, and various functional roles. Whatever I was doing, I'd look for a way to prepare something for it, or dive into the unexpected with full enthusiasm.

I frequently didn't enjoy either my performance or my assessment of my performance, but I found that I could see incremental improvement over time. I used James as a sounding board, reporting back to him every now and again about problems I'd had or victories that I felt I'd won, or about the positive things that attending Toastmasters was giving me:
  • The practice: to get up and speak on a regular basis in front of a bunch of people for whom, ultimately, it made no difference whether I was good, bad or indifferent.
  • The formality:  I found that the ceremony and rigidity removed uncertainty, allowing me to focus more on the speaking.
  • The common aim: the people there all want to improve as speakers, and want others to improve as speakers too, and that gives a strong sense of solidarity and security. 
  • The feedback: in addition to slips of paper filled in by each member for each speaker there is feedback on every speech from another Toastmaster, delivered by them as a speech in itself.

Talking of feedback, a summary of the advice I was given in the eight or nine months I was there might be: speak clearly, don't be afraid to pause, include variety in my voice, use my hands to emphasise and illustrate points, use some basic structural and rhetorical devices, stop rocking backwards and forwards and shuffling my feet, stop touching my nose.

Other than the last couple, which are habits I had no idea I had, this is standard advice for beginner speakers. What's useful, I found, is to get it applied to you regularly about some speaking you've just been doing, rather than reading it in a blog post when you haven't been anywhere near presenting for months.

But enough of that, because suddenly it was the start of November and I was in a taxi, in a plane, in a taxi, on a train, in a taxi, in front of a stage at a conference centre in Maastricht waiting to deliver my talk.

And then I was on the stage. And I had a headset mic on - which I had never done before.  And I was coughing, and the sound tech was coughing. And we shared my cough sweets. And I was being introduced. And I was stepping forward from the side and ... and ... and ... amazingly I found that I was smiling.

And I was interacting with the audience. And I was making a joke. And they were laughing. And I wasn't shaking. And my voice wasn't catching. And I was delivering my talk in what felt like a natural way, with pauses, at a natural pace ... and although I can't be sure what I was doing with my feet, I can say that my head was very, very big.


Wow.

A few weeks later, I got an email from the organisers:
Thank you for contributing to the success of EuroSTAR Conference 2015, we hope you enjoyed the experience of speaking in Maastricht. We have amalgamated all the information from attending delegates and your feedback scores and comments on your session are included below: Individual speakers were evaluated by delegates using a 1-10 basis (10 being excellent - 1 being poor). We categorize sessions by the following standards:
  • 9.00 – 10.00 Outstanding
  • 8.00 – 8.99 Excellent
  • 7.00 – 7.99 Good
  • 6.00 – 6.99 Low Scoring
  • Under 6.00 – Below minimum standard acceptable
Your score was 5.90 out of 67 respondents which, according to the above table, came in the Below Minimum bracket. The track session presentation overall average score (40 track sessions) was 7.51. Comments on Forms below:
  • Well, fun but what am I going to do with this?? (+ some jokes don’t work on non-British people).
  • Accent!
  • as hard to understand if you're not a native in English language
  • The core ideas turned out more interesting than I expected, but needs post processing by me.
  • Good presentation but very specific to native speakers.  Really good work done on linking patterns but I think will not reach wide audience 
Wow.

And I'd got similar comments directly too. I'd known from my practice runs that including jokes themselves (in a talk called Your Testing is a Joke, about the links between testing and joking) was a risk to non-native speaker comprehension, and I'd changed the talk to reduce it. It's also indisputable that I have an accent (I'm from the Black Country and it shows) and I think that having a heavy cold probably contributed to any lack of clarity.

So it wasn't great getting this kind of feedback - duh! - but knowing what I wanted prevented me from being discouraged: on that stage on that day, however it came across to anyone else, I was myself.

Thankfully, usefully, I did also get some positive feedback from attendees at the conference and the content of my talk was validated by winning the Best Paper prize. But even without those things I think I'd have been able to take significant positives in spite of the audience reviews.

Back at work, I quickly had an opportunity to exorcise a demon by doing another leaving presentation. I treated it as I would a Toastmasters talk and wrote a draft in full, which I then repeated until I'd smoothed it out sufficiently. And then in the kitchen I wasn't rubber-legging and I wasn't heart-pounding and I wasn't knee-knocking, and I tapped the glass and I spoke without notes and I got a laugh and I ad-libbed. And, sure, I stumbled a bit, but I was still there and doing it and doing it well. Or, at least, well enough.

I've been thinking about public speaking and me.

I wouldn't want to claim anything too grand. I haven't cracked the art of presenting. I still get nerves. I am not suggesting that you must do the same things as I did. I am not claiming that I haven't had some setbacks, and I don't have a magic wand to wave. But if I tried to summarise what I've done, I guess I'd say something like this:
  • I decided I wanted to change.
  • I found out what I wanted to change to.
  • I was open to ways to help me get there.
  • I looked for, or made, openings.
  • I reflected on what I was doing.
  • I stuck at it.

And I made my change happen.

Wow.
Images: Black Country T-Shirts, Cambridge City Communicators

Mocking C++ with Trompeloeil

Testing TV - Wed, 01/18/2017 - 10:09
Trompeloeil is an open source mocking framework for C++, aimed at ease of use without sacrificing expressive power. In art, trompe l'oeil is intended to mock your mind, making you believe you see something that isn’t what it appears to be. In unit tests, we use mocks to fool the unit under test, so that […]
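As a rough sketch of what that looks like in practice (the warehouse interface, the order class and the test body here are illustrative examples rather than anything from the talk), a Trompeloeil mock is an ordinary class whose member functions are generated by the MAKE_MOCKn macros, and expectations on it are stated with REQUIRE_CALL:

#include <trompeloeil.hpp>

// A dependency of the unit under test.
struct warehouse
{
  virtual ~warehouse() = default;
  virtual bool reserve(int quantity) = 0;
};

// The mock: member functions are generated by Trompeloeil's MAKE_MOCKn macros.
struct warehouse_mock : warehouse
{
  MAKE_MOCK1(reserve, bool(int), override);
};

// Inside a test (for example with Catch2, which Trompeloeil integrates with):
//   warehouse_mock wh;
//   REQUIRE_CALL(wh, reserve(10))   // expect exactly one call with argument 10
//     .RETURN(true);                // and have it report success
//   order o{10};                    // hypothetical unit under test
//   CHECK(o.fill(wh));              // the mock "fools" the order into believing stock was reserved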

System definition and confidence in the system

Thoughts from The Test Eye - Thu, 01/12/2017 - 19:16
Ideas

As a tester, part of your mission should be to inform your stakeholders about issues that might threaten the value of the system/solution. But what if you as a tester do not know the boundary of the system? What if you base your confidence in the result of your testing on a fraction of what you should be testing? What if you do not know how or when the system/solution is changed? If you lack this kind of control, how can you say that you have confidence in the result of your testing?

These questions are related to testability. If the platform we base our knowledge on is in flux, then how can we know that anything we have learnt is correct?

An example. In a project I worked on, the end-to-end solution was extremely big, consisting of many sub systems. The solution was updated by many different actors, some doing it manually and some doing it with continuous deployment. The bigger solution was changed often and in some cases without the awareness of the other organisations. The end-to-end testers sometimes performed a test that took a fair amount of time. Quite often, they started one test and during that time the solution was updated or changed with new components or sub systems. It was difficult to get any kind of determinism in the result of testing. When writing the result of a test, you probably want to state which version of the solution you were using at the time. But how do you refer to the solution and its version in a situation like this?

When you test a system and document the result of your tests you need to be able to refer to that system in one way or another. If the system is changed continuously, you somehow need to know when it is changed, and what and where the change is. If you do not know what and where the changes are, it will be harder for you to plan the scope of your testing. If you do not know when, it is difficult to trust the result of your tests.

One way of identifying your system is to first identify what the system consists of, considering the boundary of the system and what is included. Should you include configuration of the environment as part of the system? I would. Still, there are no perfect oracles. You will only be able to define the system to a certain extent.

Sub systems           System version 1.0   System version 1.1   System version 1.2
component 1 version   1.0                  1.1                  1.1
component 2 version   1.0                  1.0                  1.0
component 3 version   1.0                  1.0                  1.1

As you define the parts or components of the system, you can also determine when each is changed. The sum of those components is the system and its version. I am sure there are many ways to do this. Whatever method you choose, you need to be able to refer to what it is.
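As one illustration of that idea (a minimal sketch, not a prescribed method; the component names and versions simply mirror the table above), you could keep a small manifest of component versions and derive a single label from it to quote in every test report:

#include <iostream>
#include <map>
#include <string>

// The "system" is defined by the versions of the components it contains.
using system_manifest = std::map<std::string, std::string>;

// Derive one identifier for the whole solution from its parts, so that a
// test result can state exactly what was under test at the time.
std::string system_version(const system_manifest& components)
{
    std::string id;
    for (const auto& [name, version] : components)
    {
        if (!id.empty()) id += ", ";
        id += name + " " + version;
    }
    return id;
}

int main()
{
    const system_manifest solution = {
        {"component 1", "1.1"},
        {"component 2", "1.0"},
        {"component 3", "1.0"},
    };
    // Record this string alongside each test result.
    std::cout << "System under test: " << system_version(solution) << "\n";
}

If any component version changes, the label changes with it, which is exactly the property you need when deciding whether an earlier result can still be trusted.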

I think it is extremely important that you do anything you can to explore what the system is and what possible boundaries it could have. You need many and different views of the system, creating many models and abstractions. In the book “Explore It!”, Elisabeth Hendrickson writes about identifying the ecosystem and performing recon missions to charter the terrain, which is an excellent way of describing it. When talking about test coverage you need to be able to connect that to a model or a map of the system. By doing that you also show what you know are coverable areas. Another way of finding out what the system is is to use the Heuristic Test Strategy Model by James Bach, specifically exploring Product Elements. Something that I have experienced is that when you post and visualize the models of the system for everyone to see, you will immediately start to gain feedback about them from your co-workers. Very often, there are parts missing or dependencies not shown.

If one of your missions as a tester is to inform stakeholders so that they can make sound decisions, then consider whether you know enough of the system to be able to recommend a release to the customer or not. Consider what you are referring to when you talk about test coverage and whether your view of the system is enough.

References

  1. Explore It! by Elisabeth Hendrickson – https://pragprog.com/book/ehxta/explore-it
  2. Heuristic Test Strategy Model by James Bach – http://www.satisfice.com/tools/htsm.pdf
  3. The Oracle Problem – http://kaner.com/?p=190
  4. A Taxonomy for Test Oracles by Douglas Hoffman – http://www.softwarequalitymethods.com/Papers/OracleTax.pdf


Without Which ...

Hiccupps - James Thomas - Wed, 01/11/2017 - 08:40

This week's Cambridge Tester meetup was a show-and-tell with a theme:
Is there a thing that you can't do without when testing? A tool, a practice, a habit, a method that just works for you and you wouldn't want to miss it?

Here's a list, with a little commentary, of some of the things that were suggested:
  • Testability: mostly, in this discussion, it was tools for probing and assessing a product.
  • Interaction with developers: but there's usually a workaround if they're not available ..
  • Workarounds
  • The internet: because we use it all the time for quick answers to quick questions (but wonder about the impact this is having on us).
  • Caffeine: some people can't do anything without it.
  • Adaptability: although this is like making your first wish be infinite wishes.
  • People: Two of us suggested this. I wrote my notes up in Testing Show
  • Emacs
  • Money: for paying for staff, tools, services etc.
  • Visual modelling: as presented, this was mostly about system architecture, but could include e.g. mind maps.
  • Notebook and pen: writing gives clarity
  • Phone: for playing games as a break from work.
  • Explainability: "it's my job to eradicate inexplicability."
  • Freedom/free will: within the scope of the mission
  • Problems: because we'll be out of a job without them.

Image: https://flic.kr/p/5Wqpov

Testing Show

Hiccupps - James Thomas - Wed, 01/11/2017 - 08:22

This week's Cambridge Tester meetup was a show-and-tell with a theme:
Is there a thing that you can't do without when testing? A tool, a practice, a habit, a method that just works for you and you wouldn't want to miss it?

Thinking about what I might present I remembered that Jerry Weinberg, in Perfect Software, says "The number one testing tool is not the computer, but the human brain — the brain in conjunction with eyes, ears, and other sense organs. No amount of computing power can compensate for brainless testing..."

And he's got a point. I mean, I'd find it hard to argue that any other tool would be useful without a brain to guide its operation, to understand the results it generates, and to interpret them in context.

In show-and-tell terms, the brain scores highly on "tell" and not so much on "show", at least without a trepanning drill. But, in any case, I was prepared to take it as a prerequisite for testing so I thought on, assuming I could count on my brain being there, and came up with this:
The thing I can't do without when testing is people. Why? Well, first and foremost, software is commissioned by people, and built by people, and functions to service the needs of people. Without those people there wouldn't be software for me to test. As a software tester I need software and software needs people. And so, by a transitive relationship, I need people.

Which is a nice line, but a bit trite. So I thought some more.

What do people give me when I'm testing? Depending on their position with respect to the software under test they might provide
  • background data such as requirements, scope, expectations, desires, motivations, cost-benefit analyses, ...
  • test ideas and feedback on my own test ideas
  • insight, inspiration, and innovation
  • reasons to test or not to test some aspects of the system
  • another perspective, or perspectives 
  • knowledge of the mistakes they've made in the past, so perhaps I need not make them   
  • coaching
  • the chance to improve my coaching
  • satisfaction of a basic human need for company and interaction
  • ...

There are methodologies and practices that recognise the value of people to other people. For example, XP, swarming, mobbing, pairing, 3 Amigos, code reviews, peer reviews, brainstorming, ... and then there are those approaches that provide proxies for other people such as personas, thinking hats, role playing, ...

Interactions with others needn't be direct: requirements, user stories, books, blogs, tweets, podcasts, videos, magazines, online forums, and newsletters, say, are all interactions. And they can be more or less formal, and facilitated,  like Slack channels, conferences, workshops, and even meetups. They're generally organised by people, and the content created by people for other people, and the currency they deal in is information. And it's information which is grist to the testing mill.

And that's an interesting point because, although I do pair test sometimes, for the majority of my hands-on testing I have tended to work alone. Despite this, the involvement of other people in that testing is significant, through the information they contribute.

Famously, to Weinberg and Bolton, people are crucial in both a definition of quality and indeed a significant proportion of everything else too.
  •  Quality is value to some person.
  •   X is X to some person at some time.

Fair enough, you might ask with a twinkle in your eye, but didn't Sartre say "Hell is other people"?

Yes he did, I might reply, and I've worked with enough other people now to know that there's more than a grain of truth in that. (Twinkling back atcha!) But in our world, for our needs, I think it's better to think of it this way: software is other people.
Image: https://flic.kr/p/gp2CDC

Edit: I've listed some of the other things that were suggested at the meetup in Without Which.

mmdrv.exe command line options

My Load Test - Sat, 01/07/2017 - 01:25
If you are running LoadRunner scripts from the command line using mmdrv.exe, then you will know that mmdrv.exe has a large number of command line options. Unfortunately, the options are not really documented apart from what is displayed in the user interface. Read on for some hints… To see the command line options, run the […]

Drop the Crutches

DevelopSense Blog - Fri, 01/06/2017 - 00:18
This post is adapted from a recent blast of tweets. You may find answers to some of your questions in the links; as usual, questions and comments are welcome. Update, 2017-01-07: In response to a couple of people asking, here’s how I’m thinking of “test case” for the purposes of this post: Test cases are […]

Automating with Protractor & WebDriver

Testing TV - Wed, 01/04/2017 - 18:15
This presentation shares the good and the bad experience in the journey of building a Test Automation framework for an AngularJS based application. You will learn, by a case study, what thought process we applied on the given context (product, team, skills, capabilities, long term vision) to come up with an appropriate Test Automation Strategy. […]

State of Play

Hiccupps - James Thomas - Wed, 01/04/2017 - 08:23
The State of Testing Survey for 2017 is now open. This will be the fourth iteration of the survey and last year's report says that there were over 1000 respondents worldwide, the most so far.

I think that the organisers should be applauded for the efforts they're putting into the survey. And, as I've said before, I think the value from it is likely to be in the trends rather than the particular data points, so they're playing a long game with dedication.

To this end, the 2016 report shows direct comparisons to 2015 in places and has statements like this in others:
We are starting to see a trend where testing teams are getting smaller year after year in comparison with the results from the previous surveys.

I'd like to see this kind of analysis presented alongside the time-series data from previous years and perhaps comparisons to other relevant industries where data is available. Is this a trend in testing or a trend in software development, for instance?

I'd also like to see some thought going into how comparable the year-to-year data really is. For example: is the set of participants sufficiently similar (in statistically important respects) that direct comparisons are possible? Or do some adjustments need to be made to account for, say, a larger number of respondents from some part of the world or from some particular sector than in previous years? Essentially: are changes in the data really reflecting a trend in our industry, or perhaps a change in the set of respondents, or both, or something else?

While I'm wearing my wishing hat I'd be interested in questions which ask about the value of the changes that are being observed. For example, are smaller teams resulting in better outcomes? What kind of outcomes? For who? I wonder whether customers or consumers of testing could be polled too, to give another perspective, with a different set of biases.
Image: https://flic.kr/p/9cTwhS

Wrapping up Agile Testing Days 2016

Agile Testing with Lisa Crispin - Sun, 01/01/2017 - 04:07
Mike Sutton on the importance of community

After being our delightful host and emcee all week, Mike Sutton finally got to do what he originally came to Potsdam to do – give his keynote on the power of Communities of Practice in testing. I’d never thought of what those words mean – a community, yes, but also practice.

Where can we learn?

We have to practice our skills to get good at them. We share a craft and a profession, and deliberately come together. Mike urged us to take time to create meaning from our work, and noted that we don’t get much better just by doing our work. And how can we learn at work, where we’re expected to be *doing* the work?

A theme for me at this conference has been the power of stories. Mike told about his “Nigerian gap year” where he learned a seemingly mundane skill from a coworker that transformed how he did his job. It’s hard to learn at work if managers don’t promote learning. So, we can improve our professional status by being in the community, where we can learn by leading. Mike suggested ways that companies can incorporate community stuff into performance reviews. I would like to just get rid of performance reviews, but if you have to do them, looking at how a person helped her coworkers learn is important.

There’s no community of one

As Mike explained, “There’s no community of one – that’s just a crazy person!” Don’t do this alone. A problem shared is a problem halved. He advised us to beware the “onion of engagement” and not end up doing all the community organizing yourself. Make checklists, make it easy for others to help, book the next event at the end of this event. Bring food, change venues, have fun, and retrospect to improve.

Mike and José

I loved Mike’s closing message, “You don’t know it yet ’cause you’re drunk and tired, but you’re in flames”. Indeed, we were all on fire with new ideas and new connections!

José!

The heart missing all week from ATD, due to a bad case of flu, José Diaz surprised Mike onstage at the end of his keynote with a big hug. The crowd went wild! José thanked Mike for his terrific job filling in as the heart of the conference. We look forward to many more ATD conferences from José and his team!

Un-Conference
Alex Schwarz pitches a topic. Open Space Session Outcomes.

It’s hard to pick a favorite part of the conference, but I got a huge amount of value from the Open Space sessions during the regular conference and the un-conference on the last day. Olaf Lewitz (and maybe other people, now it’s been a few weeks and I didn’t take good notes) kicked it off, explaining how open space works. Lots of great topics were proposed and the ATD staff had provided several comfortable places to gather. We had flip charts, sticky notes, markers and each other – everything we needed for generating ideas and learning!

Internet of Things

Last but not least in the week’s keynotes were Bart Knaack and James Lyndsay, showing us live! and in real time! how to connect to the IoT. I won’t try to explain the technical bits, but they made the point that there are connections everywhere, and they can fail in so many ways.

The IoT is clearly lots of fun, as Bart and James demonstrated. At one point we @mentioned Bart on Twitter to make a light flash on. But Bart and James pointed out that so far, we haven’t found the compelling uses for the IoT. We could already turn on the heat before we got home or order toner for our printer without involving the Internet.

Bart and James use IoT for unicorn power

What’s coming that might be game-changing? Very low power devices. How do we test the IoT? James and Bart say TDD isn’t easy, though you can do Arduino unit tests. You can fire up API events and look at logs. To do end-to-end testing you need sniffers and manipulators. These are RESTful, and don’t save information. They’re asynchronous events that fire and forget. Nevertheless, we can have lots of fun, as James and Bart showed by using the IoT to re-inflate a tired unicorn!

Get outside too!
Gitte at the Christmas market

Though I was enjoying the un-conference, I took the opportunity to walk and talk with the awesome agile coach and my treasured friend Gitte Klitgaard. We took off to explore the Potsdam Christmas Market and do a bit of shopping. I met Gitte at Booster Conference in Bergen, Norway a couple of years ago (though we just learned that we may have met at JAOO back in 2004!). Meeting people like her is the main reason I value conferences, especially ones like ATD with such a safe and nurturing community. Her insights and wisdom are helping me find my own courage and grow confidence in my own coaching and leadership skills. Plus we had so much fun!

Sad to go, but much to anticipate
Un-Conference closing circle

We made it back in time for the un-conference wrap up. I always enjoy hearing everyone’s aha moments. This was my 8th year at ATD, and the last day is so hard on me. All the poignant goodbyes! Whereas all week, everyplace I looked I saw my friends, now there were strangers in the hotel!

Maybe all good things must come to an end, but it’s a beginning, too. For one thing, the next ATD is in November, so it’s less than a year off! And in the meantime, we have ATD USA in Boston in June! Some of the ATD Potsdam family will be there, and I’m confident that the magic will be there too.

Don't drink and test!

After a dinner with dear friends, I found some solace in my hotel room: The beer that St. Nicolas had left on our doors on Tuesday. What’s more special than participants and organizers collaborating to brew special craft brews for the conference?

Since the conference, I’ve been trying out some of the many ideas and techniques I learned. I’d sure like to be successful at building some kind of testing community where I live, so I’ll take Mike’s advice to heart. I hope to see you at a future ATD!



Psychological Safety leads to High-Performing Agile Teams

There are two types of safety that factor into a healthy and productive enterprise environment and high-performing teams.  The first is physical safety. This is where employees have an environment where they are free from physical hazards and can focus on the work at hand. This type of safety should be part of the standard workplace promoted by company and government regulations.
The second is psychological safety, which is core to enterprise effectiveness. According to Google research, high-performing teams always display psychological safety. This phenomenon has two aspects. The first is a shared belief that the team is safe to take interpersonal risks and be vulnerable in front of each other. The second is how this type of safety, along with increased accountability, leads to increased employee productivity and ergo high-performing teams. Psychological safety helps establish Agile in that it promotes a safe space for employees to share their ideas, discuss options, take methodical risks, and become productive. An Agile mindset promotes self-organizing teams around the work, taking ownership and accountability, and creating an environment for learning what customer value is through a discovery mindset, divergent thinking, and feedback loops. Agile with psychological safety can be a powerful pairing toward high-performing teams.

However, accountability without psychological safety leads to great anxiety. This is why there is a need to move away from a negative mindset when results aren’t positive or new ideas are seen as different. If this occurs, employees are less willing to share ideas and take risks. Instead, consider ways to build psychological safety paired with team ownership and accountability of the work. This can lead to high-performing teams.

Everyone has a role to play in establishing a psychologically safe environment.  Agile Coaches and ScrumMasters can help you evolve to an enterprise where psychological safety and accountability are paired. Leadership has a strong role to play to provide awareness of the importance of a safe environment, provide education on this topic, and build positive patterns in the way they respond to results of risk taking by teams.  Team members must adopt an open, divergent, and positive mindset that is focused on accepting differences and coaching each other for better business outcomes.  Employees at all levels must be aware of the attitudes and mindset they bring.   

What We Found Not Looking for Bugs

Hiccupps - James Thomas - Sat, 12/31/2016 - 22:05
This post is a conversation and a collaboration between Anders Dinsen and me. Aside from a little commentary at the top and edits to remove repetition and side topics, to add links, and to clarify, the content is as it came out in the moment, over the course of a couple of days.

A question I asked about not looking for bugs at Lean Coffee in Cambridge last month initiated a fun discussion. The discussion suggested it’d be worth posing the question again in a tweet. The tweet in turn prompted a dialogue.

Some of the dialogue happened on public Twitter, some via DM, and on Skype, and yet more in a Google doc, at first with staggered participation and then in a tight synchronous loop where we were simultaneously editing different parts of the same document, asking questions and answering them in a continuous flow. It was at once exhilarating, educational and energising.

The dialogue exposes some different perspectives on testing and we decided to put it together in a way that shows how it could have taken place between two respectful, but different, testers.

--00--
James: Testing can’t find all the bugs, so which ones shouldn’t we look for? How?

Anders: My brain just blew up. If we know which bugs not to look for, why test?

James: Do you think the question implies bugs are known? Could they be expected? Suspected?

Anders: No, but you appear to know some bugs not to find.

James: I don't think I'm making any claims about what I know, am I?

Anders: Au contraire, "which bugs" seems quite specific, doesn't it?

James: By asking "which" I don't believe I am claiming any knowledge of possible answers.

Anders: I think this is a valid point.

Testing takes place in time, and there is a before and an after. Before, things are fundamentally uncertain, so if we know bugs specifically to look for, uncertainty is an illusion.

That testing takes place in time is obvious, but still easily forgotten, like most other things that relate to time.

In our minds, time does not seem as real as it is. Remember that we can just as vividly imagine the future and remember the past as we can experience the present. In our thoughts, we jump back and forth between imagination, the present and memory of the past, often without even realizing that we are in fact jumping.

When I test, I hope an outcome of testing will be test results which will give me certainty so that I can communicate clearly to decision makers and help them achieve certainty about things they need to be certain about to take decisions. This happens in time.

So before testing, there is uncertainty. After testing, some kind of certainty exists in someone (e.g. me, the tester) about the thing I am testing.

Considering that, testing is simple, but it follows that expecting and even suspecting bugs implies some certainty, which will mislead our testing away from the uncertain.

James: I find it problematic to agree that testing is simple here - and I’ve had that conversation with many people now. Perhaps part of it is that "testing" is ambiguous in at least two interesting senses, or at least at two different resolutions:
  • the specific actions of the tester
  • a black box into which stakeholders put requirements and from which they receive reports

These are micro and macro views. In The Shape of Actions, Harry Collins talks about how tasks are ripe for automation when the actors have become indifferent to the details of them. I wrote on this in Auto Did Act, noting that the perspective of the actor is significant.

I would want to ask this: from whose perspective is testing simple? Maybe the stakeholder can view testing as simple, because they are indifferent to the details: it could be off-shore workers, monkeys, robots, or whatever doing the work so long as it is "tested".

I am also a little uncomfortable with the idea of certainty as you expressed it. Are we talking about certainty in the behaviour of the product under test, or some factor(s) of the testing that has been done, or something else?

I think I would be prepared to go this far:
  • Some testing, t, has been performed
  • Before t there was an information state i
  • After t there is an information state j
  • It is never the case that i is equal to j (or, perhaps, if i is equal to j then t was not testing)
  • It is not the case that only t can provide a change from i to j. For example, other simultaneous work on the system under test may contribute to a shared information state.
  • The aim of testing is that j is a better state than i for the relevant people to use as the basis for decision making

Anders: But certainty is important, as it links to someone, a stakeholder, a human. Certainty connotes a state of knowledge in something that has a soul, not just a mathematical or mechanical entity.

This leads me to say that we cannot have human testing without judgement.
Aside: It’s funny that the word checking, which we usually associate with automatic testing, might actually better describe at least part of human testing, as the roots of ‘check’ are the same as those of chess: the Persian word for king. The check is therefore the king’s judgement, a verdict of truth, gamified in chess, but in the real world always something that requires judgement. But that was a stray thought ... What’s important here is that one way or another testing is not only about information.

I accept that as testers, we produce information, even streams of tacit and explicit knowledge in testing, and some of that can be mechanistically or algorithmically produced, but if we are to use it as humans and not only leave it to the machines to process, we must not only accept what we observe in our testing, we must judge it. At the end of the day (or the test) at least we must judge whether to keep what we have observed to ourselves, or if we should report it.

James: I did not define what I mean by an information state. If you pushed me to say something formal, I might propose it’s something like a set of assertions about the state of the world that is relevant to the system under test, with associated confidence scores. I might argue that much of it is tacitly understood by the participants in testing and the consumption of test results. I might argue that there is the potential for different participants to have different views of it - it is a model, after all. I might argue that it is part of the dialogue between the participants to get a mutual understanding of the parts of j that are important to any decisions.

This last sentence is critical. While there will (hopefully) be some shared understanding between the actors involved, there will also be areas that are not shared. Those producing the information for the decision-maker may not share everything that they could. But even if they were operating in such a way as to attempt to share everything that was relevant to the decision, their judgement is involved and so they could miss something that later turns out to be important.
Aside: I wonder whether it is also interesting to consider that they could over-share and so cloud the decision with irrelevant data. It is a real practical problem but I don’t know whether it helps here. If it does, then the way in which information is presented is also likely to be a factor. Similarly, the decision-maker may have access to information from other sources. These may be contemporary or historical, from within the problem domain or not, ...

So, really, I think that having two information states - pre and post t - is an oversimplification. In reality, each actor will have information states taking input from a variety of sources, changing asynchronously. The states i and j should be considered (in a hand-wavy way) the shared states. But we must remember that useful information can reside elsewhere.

Anders: I feel this is too much PRINCE2, where people on the shop floor attach tuples of likelihood and consequence scores to enumerated risks, thereby essentially hiding important information needed to make good, open-eyed decisions about risks.

James: Perhaps. I have been coy about exactly what this would look like because I don't have a very well-formed or well-informed story. In Your Testing is a Joke, I reference Daniel Dennett who proposes that our mental models are somewhat like the information state I've described. But I don't think it's possible or desirable to attempt to do this in practice for all pieces of information, if it were even possible to enumerate all pieces of information.

Anders: I have witnessed such systems in operation and had to live with the consequences of them. I have probably developed a very sceptical attitude due to that.

But we should not forget that testing is a human activity in a context and it is my human capacity to judge what I observe in testing and convey messages about it to stakeholders.

James: I’m still not comfortable with the term "certainty".

I might speculate that certainty as you are trying to use it could be a function of the person and the information states I’m proposing. Maybe humans have some shared feeling about what this function is, but it can differ by person. So perhaps a dimension of the humanity in this kind of story is in the way we "code" the function that produces certainty from any given information state.

The data in the information state can be produced by any actor, including a machine, but the interpretation of that information to provide confidence (a term I'm more comfortable with, but see e.g. this discussion) is of course a human activity. (But recent advances in AI suggest that perhaps it won’t necessarily always be so, for at least some classes of problem.)

Anders: Can I please ask you to join "team human", i.e. that all relevant actors (except the tools we use and the item under test) are humans with human capabilities, i.e. real thoughts and perhaps most importantly gut feelings?

Can you accept that fundamentally, a test result produced by a human is not produced mechanistically, but by human interpretation of what the human senses (e.g. sees), by experience, imagination, and ultimately judgement?

James: Think of statistics. There are numerous tools that take information and turn it into summaries of the information. Some of them are named to suggest that they give confidence. (Confidence intervals, for example, or significance.) Those tools are things that humans can drive without thought (so essentially machines.)

Anders: I fundamentally cannot follow you there. Nassim Taleb is probably the most notable critic of statistics interpreted as something that can give confidence. His point (and mine) is that confidence as a mathematical term should not be confused with real confidence, that which a person has.

James: I think we are agreeing. Although the terms are named in that way, and may be viewed in that way by some - particularly those with a distant perspective - the results of running those statistical methods on data must inherently be interpreted by a human in context to be meaningful, valuable.

Anders: Ethically, decisions should be taken on the basis of an understanding of information. Defining "understanding" is difficult though, but there must be some sort of judgement involved, and then I’m back at square one: I use all my knowledge, experience and connect to my values, but by the end of the day, what I do is in the hands of my gut feeling.

James: Perhaps another angle is that data can get into (my notion of) an information state from any source. This can include gut, experiment, hearsay, lies. I want each of the items of data to have some level of confidence attached to them (in some hand-wavy way, again).

The humanistic aspect that you desire can be modelled here. It’s just not the only or even necessarily the most important factor, until the last step where judgement is called for.

Anders: This leads me to think about kairos: That there is a moment in which testing takes place, the point in time where the future turns to become the past. Imagine your computer clock shows 10.24 am and you know you have found a bug. When is the right time to tell it to the devs? They are in a meeting now about future features. Let’s tell them after lunch.

Kairos for communicating the bug seems to be "after lunch".

But it is not just about communication, there could even be a supreme moment for performing a test. It could be one that I have just had the idea for, one I have sketched out yesterday in a mind map, noted on a post-it, or prepared in a script months ago.

Kairos in testing could be the moment when our minds are open to the knowledge stream of testing so we can let it help us reach certainty about the test performed.

James: I am interested in the extent to which you can prepare the ground for kairos. What can someone do to make kairos more likely? As a tester, I want to find the important issues. Kairos would be a point at which I could execute the right action to identify an important issue. How to get to that moment, with the capacity to perform that action?

Anders: There is, to me, no doubt that kairos is a "thing" in testing in the human-to-human relating parts of what we do: communication, particularly; but also in leadership. A sense of kairos involves having an intuition of what is opportune to communicate in a given moment, and when is an opportune moment to communicate what seems important to you, but of course it could also be about having a sense of some testing to carry out at a particular moment to cause a good effect on the project.

Whether kairos is a thing in what is happening only between the tester and the object being tested (and possibly other machines), I would doubt; or if it was, we would certainly reach far beyond the original meanings of kairos.

James: I think this is tied to your desire for a dialogue to be only between two souls, as we discussed on Skype. We agreed then that it is possible for one person to have an internal dialogue, and so two souls need not be necessary in at least that circumstance. I’d argue it's also not necessary in general. (Or we have to agree some different definition of dialogue.)

Anders: I do appreciate that some testers have a "technical 6th sense", e.g. when people experience themselves as "bug magnets". I think, however, that that comes from creative talents, imagination, technical understanding, and understanding of the types of mistakes programmers make, more than about human relations or "relations" to machines. I think it would then be better to talk about "opportune conditions", which, I think, would then probably be the same as "good heuristics".

James: From Wikipedia: In rhetoric, kairos is "a passing instant when an opening appears which must be driven through with force if success is to be achieved."

Whether at a time or under given conditions (and I'm not sure the distinction helps), it seems that kairos requires the speaker and listener (to give the roles anthropomorphic names for a moment) to both be in particular states:
  • the speaker must be in a position to capitalise on whatever opportunity is there, but also to recognise that it is there to be acted upon.
  • the listener must (appear to the speaker to) be in a state that is compatible with whatever the speaker wants to do.

Whether or not the opportunity is acted upon, I think these are true. Notice that they include both time and conditions. Time can exist (forgetting metaphysical concerns) without conditions being true, but the conditions must necessarily exist in a time. So I argue that if you want to tie to conditions you are necessarily tying to time also. If I follow your reasoning, then I think this means you might be open to kairos existing in human-machine interactions?

A difference that is apparent at several points in our dialogue here, I think, is that I want to make (software) testing be about more than interaction of a human with a product. I want it to include human-human interactions around the product. (See e.g. Testing All the Way Down and The Anatomy of a Definition of Testing.)

It’s my intuition that many useful techniques of testing cross over between interactions with humans and those with machines. And so I am interested in seeing what happens when you try to capture them in the same model of testing. And in the course of our discussion I’ve realised that I’ve been thinking along these lines for a while - see Going Postel or even Special Offers, for example.

I think that you want to separate these two worlds more distinctly than I do, and reserve more concepts, approaches and so on for humans only. But I think we have a shared desire to recognise the humanity at the heart of testing and to expect that human judgement is important to contextualise the outcomes of testing.

Anders: Yes you are right, I want to separate the two worlds, and I realise now that the reason is that I hope testers will more actively recognise humanity and especially what it means being human. Too often, testers try to model humanity using terminology and understandings which are fundamentally tied to the technical domain.

This leads to a lot of (hopefully only unconscious) reductionism in the testing world. It’s probably caused by leading thinkers in testing having very technical, not humanistic backgrounds.

So I am passionate that we do not confuse the technical moment in time in which I hit the key on my keyboard to start an automatic test suite, thereby altering the states of the system under test and the testing tools used but not yet influencing any humans, with the kairos of testing, which is tied only to the human relations we have, including those we have with ourselves, and not to any machines.

Kairos happens when we let it happen.

Kairos is when we look down on the computer screen, sense what is on the screen, allow it to enter our minds, and start figuring out what happened and what that might mean.

...

Agile Testing Days Conference Day 3 highlights

Agile Testing with Lisa Crispin - Fri, 12/23/2016 - 06:43

Conference day 3 got off to a rousing start with Jessica DaVita: “I like to go fast and break things”. I think she was referring both to her passion for Moto GP and for software delivery. I liked her observation about a “wall of confusion” between Dev – “Change!” and Ops – “Stability!”

Safety matters, but Jessica observed, “No one ever says ‘We love our security team'”. As the recent Google research supported the concept that psychological safety is the best predictor of successful teams, we have to work on speaking a common language. Jessica explained that finding common ground doesn’t mean we all know the same stuff. Everyone has a unique skill set, but we all have pertinent mutual knowledge, beliefs, and assumptions. We lose that common ground in handoffs. She suggested a “Joint action ladder” to support common ground: Attend, perceive, understand and act.

Safety is conveyed through actions – code and conversation. Jessica says CODE is the only place to find TRUTH.

Another new voice!

Though it is always hard to pick a session for any given timeslot, I was keen to learn from Ash Coleman and her presentation, “Expectations, Adaptations and the Battle for Quality“. I felt this tied in a bit with Jessica’s keynote. Ash explained how we need to set expectations: What to test, how to keep track, how to communicate, how and when to interrupt.

The value of telling stories hit me again and again at this conference, and I loved Ash’s story about her mom, who said the greatest joke is making a plan. But Ash says we should make it anyway, and make alterations as needed. We have to ask questions. What’s working? What isn’t? What does success look like?

We have to cope with multiple communications, and contradictory bugs being reported. Ash suggests we give it time, celebrate small successes, plan for today. Recognize the lack of direction. Don’t sacrifice quality when conditions are changing. One aha learning for me here was that holding tight under pressure is a mistake. We need to learn what the forest looks like, make room for more trees, and decide where to focus. Ash urged us to frame quality, set clear expectations, think about our MVP, and – a theme that came up throughout the conference – let it go.

Designed to learn
Melissa Perri on MVP

Melissa Perri is one of my idols, I’ve learned so much about design from her. She talked about one of my favorite topics, experiments. Many lead nowhere, but we need them. Melissa noted that “MVP” (minimum viable product) does NOT equal your first release. It’s the minimum effort to let us learn.

I’m a fan of small experiments, an idea I learned from Linda Rising. Melissa recommends experiments to discover the reasons for users’ behavior. While only 1 in 20 experiments about the problem she described actually worked, they still learned from each experiment. Product strategy shouldn’t be a plan – plans fail (as Ash’s mom knew!). Instead, experiment to find out what customers want. Experiments sound scary – “We can’t experiment, it will affect our brand.” Melissa quoted Bill Beard, “Your brand is how people feel about your product or service.” Get it out fast and iterate. Solving big problems for customers creates big value for the business.

Melissa noted that if you go fast but you don’t do what you need to learn, you’re flapping, not flying.

Insights in Open Space

Gojko Adzic’s TOO BIG heuristic

There is just sooo much to do at Agile Testing Days. I like to sample everything, which means giving up something else. I’d heard about a number of insightful sessions in the Open Space, which was facilitated by two of my favorite people, Alex Schladebeck and Olaf Lewitz. So this afternoon I finally got there. I’m so happy I did, because I got one of my biggest takeaways here.

Gojko Adzic (another one of my favorite people; Agile Testing Days is teeming with my favorite people) hosted a session about his TOO BIG heuristic for slicing stories. The acronym stands for Testable, Output, Outcome, Behavior change, Impact and Goal. The Outcome is the value to the customer; the Behavior change and Impact are the value to customers. The Goal is the value to the business, for example market share, conversion rate or churn. This is a way to split stories. Gojko pointed out that it is easier to split the problem than the solution. He recommends impact mapping before and after slicing the story.

Gojko said we need to measure value immediately after delivery. The delivery teams I’ve worked on tend to just put features out of mind after delivery and work on the next feature. It’s so important to learn what impact a new feature had on customers, users and the business. Gojko said the change in behavior brings the value you want, not the behavior itself. Lesson learned: If you want to slice a story, make sure it’s TOO BIG first.

Coaching Dojo

I spent the first afternoon session in the Open Space, but Gitte Klitgaard was kind enough to let me join the second half of her coaching dojo. Now, this is one thing I love about Agile Testing Days. Building quality into our software products requires a broad spectrum of skills. I’ve recently learned the value of shedding roles from Selena Delesie, in her Love2Lead course. Even if I’m not working as a “coach”, I have opportunities to lead and coach.

I got so much out of this hour of practice, and number one was learning to ask powerful questions. Here’s one I like: “What would you do if you knew you couldn’t fail?” We should ask open ended, probing questions. “How did it feel, how did it affect you?” We can be a mirror for our colleagues.

In the dojo, we had three roles, the listener, the seeker and the observer. Gitte asked observers to look out for defensive personal barriers and posture. I had the chance to be an observer and then a listener. I’m grateful to Mike Talks (did I mention my favorite people were here?) who played the seeker role. I found it hard to think of powerful questions, but Mike’s story was rather parallel to my own and I gained so many insights into my own issues as I listened to his experiences. This dojo delivered some of my most valuable takeaways.

Women in Agile Summit

The Women in Agile Summit reflected the conference organizers’ commitment to diversity. Maaret Pyhäjärvi (who had just been voted Most Influential Agile Testing Professional Person by her peers) facilitated the evening session. She set our context with an overview of the topic, then asked each table group to conduct a Lean Coffee discussion. Afterwards, each group shared what came up at their table. Interestingly, more people wandered in as we finished, and intense conversations continued. I had to chase people out of the room when it became clear that the hotel staff really needed to clear it up.

One of my aha moments in the Lean Coffee came from two men at our table. We were discussing what some of us saw as a tendency of men to “brag” more or be more assertive about their abilities. Two of our group were men from England whose faces registered horror at the idea of bragging about themselves. They clearly were not raised to “toot their own horns”, they were raised to be polite and unassuming. I realized that culture is important, regardless of gender.

Prior to this evening session, a group of us went out to dinner. When you’ve been enclosed in such an intense conference, sometimes you just have to get outside of it! Mike Talks asked us such an interesting question: What professional achievement are you most proud of? I enjoyed sharing these stories. This is a great example of how the experiences outside of the conference proper can be the most valuable.

 

The post Agile Testing Days Conference Day 3 highlights appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

How to Test Time In Java

Testing TV - Tue, 12/20/2016 - 09:29
We all know we should have unit tests for most of our Java code. But how do you test your functionalities that are depending on time? You don’t because it is too difficult? You test it manually, changing the actual time of your computer? Or did you check StackOverflow and read that you need to […]
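The video itself isn't transcribed here, but one common approach, and not necessarily the one the talk demonstrates, is to stop calling LocalDate.now() or Instant.now() directly and instead inject a java.time.Clock that a test can pin to a fixed instant. A minimal sketch; the SubscriptionChecker class and its test below are hypothetical, for illustration only:

import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

// Production code depends on a Clock instead of reading the system time directly.
class SubscriptionChecker {
    private final Clock clock;

    SubscriptionChecker(Clock clock) {
        this.clock = clock;
    }

    boolean isExpired(LocalDate expiryDate) {
        // "Today" comes from the injected clock, so tests can control it.
        return LocalDate.now(clock).isAfter(expiryDate);
    }
}

class SubscriptionCheckerTest {
    public static void main(String[] args) {
        // Pin time to a known instant; in production you would pass Clock.systemDefaultZone().
        Clock fixed = Clock.fixed(Instant.parse("2016-12-20T09:00:00Z"), ZoneId.of("UTC"));
        SubscriptionChecker checker = new SubscriptionChecker(fixed);

        System.out.println(checker.isExpired(LocalDate.of(2016, 12, 19))); // true: already expired
        System.out.println(checker.isExpired(LocalDate.of(2016, 12, 21))); // false: still valid
    }
}

Swapping in a fixed Clock keeps the test deterministic without changing the machine's actual time.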
Categories: Blogs

On Being Capable

Hiccupps - James Thomas - Tue, 12/20/2016 - 07:58

When Karo asked whether it'd be OK if she nominated me along with Chris George and Neil Younger as meetup heroes for a UKSTAR competition I said I was sure we'd all be flattered.
Know any Software Meetup Heroes? I nominated @chrisg0911 @norry_twitting @qahiccupps - with a heartfelt thanks! https://t.co/6q68WU4fW1 — karo. stoltzenburg (@karostol) December 14, 2016
I guess I didn't really expect it to go anywhere and I certainly didn't expect that I'd feel somewhat embarrassed if it did.

But it has.


And so there you go, I learned something about myself. Again.

I've read the short-listed nominations and Emma, Oana, Alexandru, Leigh, Tony, and Hugh all look like great candidates doing great work for their local testing communities. I'd love you to go and read about them and vote for a hero right now.

Except that as I write this, it looks like, with delightful irony, that might not be possible ...
@qahiccupps there a bug in voting format? Ppl i know couldn't vote even once — Hugh McCamphill (@hughleo01) December 19, 2016

Which I suppose means that you have a little spare time in which I can urge you to get involved if you aren't already!

If you're reading this and you're from Cambridge, join the local tester meetup group run by Karo and Chris and come along to the morning Lean Coffee or the evening sessions. You can read my notes from recent meetings to get a flavour of what they're about.

If you attend just one of the Cambridge meetups in the 6 months before we announce a CEWT - Cambridge Exploratory Workshop on Testing - you'll automatically get an invite to that too.

If you're not from these parts I encourage you to find your own local meetup, with its own heroes, and get in there. You could do a lot worse than starting at the Ministry of Testing meetup group.

And if there's nothing happening locally for you why not set something up? Here's some peer workshop wisdom that proved enormously helpful to me when I was getting CEWT off the ground.
Image: https://flic.kr/p/c4aHWC
Categories: Blogs

One Way to Test

Hiccupps - James Thomas - Tue, 12/20/2016 - 07:06

I came across this quote in Managing the Unmanageable, attributed to Doug Linder:
A good programmer is someone who looks both ways before crossing a one-way street.

It made me chuckle - churlishly, childishly - as I imagined a developer crossing testing off their list because each time they'd happened to cross the street they'd implemented, they'd checked it was working. Well, perhaps checked that some aspect of it wasn't not working, at that time, for that person, etc etc.

Reflecting as I write this, I wonder if I'd been having a bad day...

Anyway, I offered the quote to the Test team at Linguamatics yesterday, along with mince pies, and posed a different question as part of our annual festive Testing Can be Fun session (see also The So in Absolute, Last Orders, Further Reading, Testing is Like Making Love):
What might a “good” tester say or do, when encountering a one-way street?

Ten minutes allowed, and as many mince pies as you can eat. Stick your answers in the comments if you like.
Image: https://flic.kr/p/brbxns
Categories: Blogs

Very Short Blog Posts (31): Ambiguous or Incomplete Requirements

DevelopSense Blog - Mon, 12/19/2016 - 07:24
This question came up the other day in a public forum, as it does every now and again: “Should you test against ambiguous/incomplete requirements?” My answer is Yes, you should. In fact, you must, because all requirements documents are to some degree ambiguous or incomplete. And in fact, all requirements are to some degree ambiguous […]
Categories: Blogs

You may not need a Tech Lead, but others do

thekua.com@work - Sun, 12/18/2016 - 13:56

Vinicius sent me a tweet about an article he published called We don’t need a Tech Lead in response to an older article of mine, “Do we need a Tech Lead?”

I wanted to respond earlier, but tweets were too restrictive. Here’s my response.

The argument against Tech Leads

The article rebuts the necessity for a Tech Lead with the following points (emphasis author’s, not mine):

  1. Well functioning teams in which people share responsibilities are not rare.
  2. When a team is not functioning well, assigning a tech lead can potentially make it worse.

There are many great points in the article. Some of the points I support, such as sharing responsibilities (also known as effective delegation); distributing responsibilities can be one way effective teams work. Other points lack essential context, such as the title (it depends), while others lack concrete answers, such as how to turn a dysfunctional team into a high-performing one.

Are well-functioning teams rare?

I’ve worked with at least 30 organisations over my career as a consultant, and countless teams, both as a team member (sometimes Tech Lead) and as an observer. I have seen the whole spectrum – from teams who function like a single person/unit to teams with people who simply tolerate sitting next to each other, and where one can’t miss the passive-aggressive behaviours or snide remarks.

The article claims:

the “tech lead is a workaround – not a root cause solution”

and

“Tech leads could alleviate the consequences only”

Unfortunately the article doesn’t explain how or why the tech lead is a workaround, nor how tech leads alleviate just the consequences.

The article gathered some discussion on Hacker News, and I found some comments particularly interesting.

Let’s take a sample:

  • (gohrt) Trusting that a pair of engineers will always come to an agreement to authoritatively decide the best way forward seems naive to me. Where are these magical people?
  • (vidhar) …we live in reality where lots of teams are not well-functioning some or all of the time, and we still need to get things done even when we don’t have the time, resources or influence to fix the team composition then and there.
  • (ep103) If I had an entire team of my great engineers, my job would be easy. I’d simply delegate my duties to everyone else, and we’d all be nearly equal. I’m jealous of people who work in a shop where the teams are so well constructed, that they think you can get rid of the tech lead role.
  • (shandor) My experience with other developers is that there is a surprisingly large dev population who would absolutely abhor it if they had to touch any of those things (EDIT: i.e. tech lead responsibilities)
  • (doctor_fact) I have worked on teams of highly competent developers where there was no tech lead. They failed badly…
  • (mattsmith321) It’s been a while since I have worked with a lot of talented, like-minded people that were all capable of making good technical decisions.
  • (jt2190) I’ve been on more than one team where no leadership emerged, and in fact, leadership type behavior was passively resisted… These teams (if they can be called that) produced software that had little to no overall design.

Do these sound like well-functioning teams to you? They don’t to me.

Office Fight

Image from David Trawin’s Flickr stream under the Creative Commons licence

Well-functioning teams do exist. However, it is clear that not all teams are well-functioning. In my experience, I would even say that really well-functioning teams are less common than dysfunctional, or merely functioning, teams. For me, the comments are proof enough that well-functioning teams are not everywhere.

It is actually irrelevant if well-performing teams are rare – there are teams that definitely need help! Which leads to the question…

Does assigning a tech lead to a poorly functioning team make it worse?

In my talk, What I wish I knew as a first time Tech Lead, I explain how acts of leadership are amplifiers (can be good or bad). Therefore assigning a bad tech lead to a poorly functioning team will probably make it worse. However I don’t think organisations set out to give teams bad tech leads.

If a team is poorly functioning, what do organisations do? Simply leave the team to stew in its own juices until things are resolved? That’s one option. Doing nothing is a gamble – you depend on someone in the team to take an act of leadership, but the question is: will they? I’ve seen many teams never resolve the very issues that make them poorly functioning without some form of external intervention or assistance.

Most organisations try to solve this by introducing a role with some authority. It doesn’t necessarily need to be a Tech Lead, but when the core issues are technical in nature, a good Tech Lead can help. A good leader will seek out the core issues that prevent good teamwork, and use their role to find ways to move the team towards being well-functioning. Sometimes this may mean calling meetings, even if the team does not want them, to reach an agreement about how the team handles certain situations, tasks or responsibilities. A good outcome might be an agreed Team Charter, or some clarity about who in the team is responsible for what. A team may end up with a model that looks like it does not need a Tech Lead, but it takes an act of leadership to make that happen.

The wrong analysis?

The article suggests that a full-time Tech Lead introduces risks such as a lack of collective code ownership, decision-making bottlenecks, a single-point bus factor, and reduced motivation. I have seen teams both with and without Tech Leads suffering from these issues. In my experience, teams without a Tech Lead tend to have more knowledge silos, less of a cohesive view and less collective code ownership, because there is little motivation to optimise for the group and individuals end up optimising for themselves.

The issue is not caused by whether or not teams have a Tech Lead. Rather, these issues are caused by a lack of technical leadership (the behaviour). The Tech Lead role is not a prerequisite for technical leadership. I have seen teams where strong, passionate individuals speak up, bring the team together and address these issues – which are acts of leadership. I have also seen dysfunctional teams sit on their hands because individual (job) safety is an issue, and the issues go unaddressed.

My conclusion

The article misses the subtle but important point of good technical leadership. A good leader and Tech Lead is not trying to own all of the responsibilities – they are there to make sure they happen. There is nothing worse than expecting that everyone is responsible for a task, only to find that no one is.

“The greatest leader is not necessarily the one who does the greatest things. (They) are the one that gets the people to do the greatest things.” – Ronald Reagan

The extent to which individuals in a team can own these responsibilities is a function of their interests, skills and experience. It depends!

Asking whether or not teams need a Tech Lead is the wrong question. Better questions to ask include: what is the best way to make sure all of the Tech Lead responsibilities are fulfilled, and what style of leadership does this team need right now?

Categories: Blogs

Fixing ssh on Mac Sierra 10.12.1

thekua.com@work - Thu, 12/15/2016 - 20:54

I recently upgraded my Mac to the latest OS, only to find that my ssh command wasn’t working.

>ssh servername

resulted in:

> .ssh/config: line 18: Bad configuration option: useroaming
> .ssh/config: terminating, 1 bad configuration options

which seems to be because I had added the following entry to my

.ssh/config

file in response to a previous SSH vulnerability:

UseRoaming no

This vulnerability looks like it’s been fixed: https://www.solved.tips/sshconfig-line-7-bad-configuration-option-useroaming-macos-10-12-sierra/
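It seems the roaming feature was removed from newer OpenSSH clients when the vulnerability was patched, so the option is simply no longer recognised. A minimal fix, assuming nothing else relies on that entry, is to delete or comment out the line in .ssh/config:

# UseRoaming no

after which ssh should connect as before.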

Categories: Blogs

Cambridge Lean Coffee

Hiccupps - James Thomas - Thu, 12/15/2016 - 08:02

This month's Lean Coffee was hosted by Cambridge Consultants. Here are some brief, aggregated comments and questions on the topics covered by the group I was in.

How to get work in testing having been a developer for 25 years?
  • The questioner is an experienced developer/consultant who consistently sees "poor quality" development.
  • You don't need a formal background; it's possible to learn testing on the job.
  • The job market seems to be about 'technical testers' these days, so a developer could be suited to it.
  • Are you applying for roles and being rejected? (Not yet; this is a recent idea.)
  • What do you mean by testing? ("Separation of concerns, loose coupling, SOLID, good requirements. Unit testing is just there for the taking ... you just do it.")
  • They sound like full life-cycle or architectural ideas that might enable testing or reduce the need for it? ("Yes.")
  • Think about what motivates the person you're pitching to. What do they care about? Ask what they're worried about, the risks they perceive.
  • Testing is a stigma for some people.
  • Perhaps don't try to sell testing, so much as the value that testing can bring.
  • Testing for its own sake is tedious.
  • What is the context that you're trying to sell testing into?
  • In some cases, testing might be the wrong thing to suggest. For example a startup might need to move fast to get to market.
  • Remember that it doesn't matter how valuable testing is to you, the key is how valuable it is to them.

Test Managers must have been testers.
  • Are we talking about technical management or line management? (The questioner was more interested in line management.)
  • Other things being equal, I'd rather have a good people manager than a tester as a manager.
  • Testers will benefit from access to someone with technical knowledge, if not their manager.
  • A good manager can give the value proposition from the company perspective. Someone focused on testing might not do that so well.
  • A good line manager understands your needs and helps you through challenges in all areas (not just your discipline).
  • A non-testing manager can offer a useful alternative perspective, force you to speak in plain language.
  • A non-testing manager might not understand the value that you've given on projects (and does salary review, appraisal etc) but a good manager will ask relevant people for that feedback.
  • What's the best thing a manager has done/does for you?
  • ... (non-tester) pushed me to develop myself; in particular he saw that I could benefit from public speaking experience.
  • ... (non-tester) trusts me to get on with stuff - but asks me hard questions
  • ... (tester) supported me; gave me time to learn
  • ... (tester) defended me from company crap and allowed me to do good work that needed doing
  • Can we differentiate people who see value in testing and in testers?
  • Line management is about people not activities.

How detailed should exploratory testing be?
  • The questioner has been accused of going "too deep" when testing, after finding bugs outside the mission scope.
  • ET is about learning the product; about iterating, debriefing and focusing.
  • Look at Explore It!
  • Sometimes the mission is "I just want you to check the feature".
  • Sometimes people don't want to hear about bugs because they might e.g. stop the product shipping.
  • Sometimes people assume that "I found a bug" means "you must fix the bug I found".
  • Are there other things that you could have done that would have been more valuable?
  • What did your accusers expect from you?
Edit: Katrina Clokie followed up on this question in The Testing Pendulum: Finding balance in exploration.

We can't find all the bugs, so which ones shouldn't we look for? How?
  • Think about the cost to the organisation if an issue comes to light. What do the stakeholders care about?
  • Quality is in the eye of the stakeholder.
  • Don't look for the bugs that the customer is likely to find.
  • You shouldn't look for the cases that aren't important.
  • Is that very practical advice? How do you know?
  • Yes, it is practical advice, it can force you to think about or find out which are the important cases.
  • ... for example, performance is not important, so we won't look for bugs there.
  • ... which isn't to say we won't find them in passing, of course.
  • But testing is a way to uncover the things that are important.
  • ... ideally it will be a continual dialogue with stakeholders which focuses the investigation.
  • If you're not going to do anything with the information, then don't look for it. There's no value in reporting if no action will result.
  • But sometimes the aggregation of bugs in an area is itself significant, e.g. one typo on a page vs 300 typos on every page.
  • That's an interesting negative ("shouldn't") because normally we focus on the things we are doing or should do.
  • Isn't the premise here questionable? Do testers really generally go out looking for specific bugs?
  • Perhaps testers will be focusing more on the areas of potential risk and ways in which those risks might be exposed?
  • But you might know of, say, a repeated anti-pattern within the development team that you would look for explicitly.
Edit: Anders Dinsen and I followed up this question in What We Found Not Looking for Bugs.
Image: https://flic.kr/p/51zjaK
Categories: Blogs