A Glass Half Fool

Hiccupps - James Thomas - Wed, 07/27/2016 - 07:48
While there's much to dislike about Twitter, one of the things I do enjoy is the cheap and frequent opportunities it provides for happy happenstance.
@noahsussman Only computers?

It's easy to put people in incongruous situations. The art is in not doing it accidentally.— James Thomas (@qahiccupps) July 27, 2016

Without seeing Noah Sussman's tweet, I wouldn't have had my own thought, a useful thought for me, a handy reminder to myself of what I'm trying to do in my interactions with others, captured in a way I had never considered it before.
Categories: Blogs

CC to Everyone

James Bach's Blog - Tue, 07/26/2016 - 19:23
I sent this to someone who’s angry with me due to some professional matter we debated. A colleague thought it would be worth showing you, too. So, for whatever it’s worth:

I will say this. I don’t want anyone to feel bad about me, or about my behavior, or about themselves. I can live with that, but I don’t want it.

So, if there is something simple I can do to help people feel better, and it does not require me to tell a lie, then I am willing to do so.

I want people to excel at their craft and be happy. That’s actually what is motivating me, underneath all my arguing.

Categories: Blogs

Stepping out of my comfort zone

Agile Testing with Lisa Crispin - Mon, 07/25/2016 - 00:27

The 30 days of testing challenges are energizing me! Day 23’s challenge is to help someone test better. I’m going to combine that one with stepping out of my comfort zone on day 14 by sharing what I learned here. It may help you test better!

Recently, my awesome teammate Chad Wagner and I were trying to reproduce a problem found by another teammate. Chad and I are testers on the Pivotal Tracker team. One of the developers on our team reported that he was hitting the backspace key while editing text in a story, another project member made an update at the same time, and his backspace key started acting as a browser back button. He lost his text changes, which was annoying. In trying to reproduce this, we found that whenever focus goes outside the text field, the backspace key indeed acts as a browser back button. But was that what happened in this case? It was hard to be sure which element had focus.

Chad wanted a way to see which element is in focus at any given time to help reproduce this issue. He found the :focus pseudo-class in CSS, which seemed helpful. He also found a bookmarklet from Paul Irish to inject new CSS rules into a page. With help from a developer teammate and our Product Owner, Chad made the following bookmarklet:

javascript:(function()%7Bvar newcss%3D":focus { outline:5px dashed red !important} .honeypot:focus { opacity:1 !important; width: 10px !important; height: 10px !important; outline:5px dashed red !important}"%3Bif("%5Cv"%3D%3D"v")%7Bdocument.createStyleSheet().cssText%3Dnewcss%7Delse%7Bvar tag%3Ddocument.createElement("style")%3Btag.type%3D"text/css"%3Bdocument.getElementsByTagName("head")%5B0%5D.appendChild(tag)%3Btag%5B(typeof"string")%3F"innerText":"innerHTML"%5D%3Dnewcss%7D%7D)()%3B

Red highlighting shows focus is currently in the Description field


This bookmarklet puts red highlighting around whatever field, button or link on which your browser session has focus, as shown in the example.
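Decoded from its URL-encoded form, the bookmarklet's logic is roughly the following. This is a readable sketch rather than the exact code Chad used, and the injectCss helper name is mine:

```javascript
// CSS from the bookmarklet: outline whichever element has focus, and make
// the normally invisible "honeypot" focus target visible when it has focus.
const focusCss =
  ':focus { outline: 5px dashed red !important } ' +
  '.honeypot:focus { opacity: 1 !important; width: 10px !important; ' +
  'height: 10px !important; outline: 5px dashed red !important }';

// Append a <style> tag containing the rules to the page's <head>.
// Pass the page's `document` when running this in a browser.
function injectCss(doc, css) {
  const tag = doc.createElement('style');
  tag.type = 'text/css';
  tag.appendChild(doc.createTextNode(css));
  doc.getElementsByTagName('head')[0].appendChild(tag);
  return tag;
}
```

Pasting injectCss(document, focusCss) into the browser's devtools console should have the same effect as clicking the bookmarklet.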

What does this have to do with my comfort zone?

Chad is always trying new things and dragging me out of my comfort zone. He told me about the bookmarklet. I didn't even know what a bookmarklet was, so I had to start searching around. Chad sent me the code, and I tried, unsuccessfully, to use it. I was working from home that day, so we got on Zoom and Chad showed me how to use it. I read the blog posts (listed above) that he had found.

These fancy tools tend to scare me, because I’m afraid I won’t understand them. And indeed, I do not understand this very well. So we need to find time so that Chad can pair with me and explain more about this bookmarklet. My understanding is that this could work on any web page, but I haven’t been able to get it to work with another one.  So this will be getting me out of my comfort zone again soon.
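For a quick spot check, the browser also exposes the focused element directly as document.activeElement; a tiny helper (a hypothetical sketch, not part of Chad's bookmarklet) makes the answer readable:

```javascript
// Returns a short CSS-style description of an element, e.g. "input#story".
// In a browser console, call it as: describeFocused(document.activeElement)
function describeFocused(el) {
  if (!el) return 'nothing has focus';
  const name = el.tagName.toLowerCase();
  return el.id ? name + '#' + el.id : name;
}
```

Unlike the bookmarklet, though, this only answers the question when you ask it; the injected CSS rule shows focus continuously, which is what makes it useful for watching focus move while you test.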

Can you try it?

If being able to see what has focus on your web page would help you test better, maybe you can try this out, and if you can get it to work, maybe you can help me. Day 24’s challenge is to connect with someone new, so let’s connect! And when I learn more, which I’ll try to do tomorrow, I’ll update this post.

Team effort FTW

Story for built-in tool

Our PO who helped Chad get this bookmarklet working thinks it’s such a good idea that he added and prioritized a story in our backlog to allow users to enable a mode to show what has focus in Tracker. The team thinks this is a cool idea, and it will be done soon. So I won’t have to worry about the bookmarklet for that, but I still want to learn more about how I can use CSS rules and bookmarklets to help with testing.

The post Stepping out of my comfort zone appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Go, Ape

Hiccupps - James Thomas - Thu, 07/21/2016 - 22:44

A couple of years ago I read The One Minute Manager by Ken Blanchard on the recommendation of a tester on my team. As The One Line Reviewer I might write that it's an encouragement to do some generally reasonable things (set clear goals, monitor progress towards them, and provide precise and timely feedback) wrapped up in a parable full of clumsy prose and sprinkled liberally with business aphorisms.

Last week I was lent a copy of The One Minute Manager Meets the Monkey, one of what is clearly a not insubstantial franchise that's grown out of the original book. Unsurprisingly perhaps, given that it is part of a successful series, this book is similar to the first: another shop floor fable, more maxims, some sensible suggestions.

On this occasion, the advice is to do with delegation and, specifically, about managers who pull work to themselves rather than sharing it out. I might summarise the premise as:
  • Managers, while thinking they are servicing their team, may be blocking them.
  • The managerial role is to maximise the ratio of managerial effort to team output.
  • Which means leveraging the team as fully as possible.
  • Which in turn means giving people responsibility for pieces of work.

And I might summarise the advice as:
  • Specify the work to be done as far as is sensible.
  • Make it clear who is doing what, and give work to the team as far as is sensible.
  • Assess risks and find strategies to mitigate them.
  • Review on a schedule commensurate with the risks identified.

And I might describe the underlying conceit as: tasks and problems are monkeys to be passed from one person's back to another. (See Management Time: Who’s Got the Monkey?)  And also as: unnecessary.

So, as before, I liked the book's core message - the advice, to me, is a decent default - but not so much the way it is delivered. And, yes, of course, I should really have had someone read it for me.
Image: Amazon
Categories: Blogs

Integrating AutoMapper with ASP.NET Core DI

Jimmy Bogard - Wed, 07/20/2016 - 18:30

Part of the release of ASP.NET Core is a new DI framework that’s completely integrated with the ASP.NET pipeline. Previous ASP.NET frameworks either had no DI or used service location in various forms to resolve dependencies. One of the nice things about a completely integrated container (not just a means to resolve dependencies, but to register them as well) is that it’s much easier to develop plugins for the framework that bridge your OSS project and the ASP.NET Core app. I already did this with MediatR and HtmlTags, but wanted to walk through how I did it with AutoMapper.

Before I got started, I wanted to understand the pain points of integrating AutoMapper with an application. The biggest one seems to be the Initialize call. Most systems I work with use AutoMapper Profiles to define configuration (instead of one ginormous Initialize block). If you have a lot of these, you don’t want a bunch of AddProfile calls in your Initialize method; you want them to be discovered. So first off: solving the Profile discovery problem.

Next is deciding between the static versus instance way of using AutoMapper. It turns out that most everyone really wants to use the static way of AutoMapper, but this can pose a problem in certain scenarios. If you’re building a resolver, you’re often building one with dependencies on things like a DbContext or ISession, an ORM/data access thingy:

public class LatestMemberResolver : IValueResolver<object, object, User> {
  private readonly AppContext _dbContext;
  public LatestMemberResolver(AppContext dbContext) {
    _dbContext = dbContext;
  }
  public User Resolve(object source, object destination, User destMember, ResolutionContext context) {
    return _dbContext.Users.OrderByDescending(u => u.SignUpDate).FirstOrDefault();
  }
}

With the new DI framework, the DbContext would be a scoped dependency, meaning you’d get one of those per request. But how would AutoMapper know how to resolve the value resolver correctly?

The easiest way is to also scope an IMapper to a request, as its constructor takes a function to build value resolvers, type converters, and member value resolvers:

IMapper mapper 
  = new Mapper(Mapper.Configuration, t => ServiceLocator.Resolve(t));

The caveat is you have to use an IMapper instance, not the static Mapper class. There’s a way to pass the constructor function to a Mapper.Map call, but you have to pass it in *every single time*, which makes it not so useful:

Mapper.Map<User, UserModel>(user, 
  opt => opt.ConstructServicesUsing(t => ServiceLocator.Resolve(t)));

Finally, if you’re using AutoMapper projections, you’d like to stick with the static initialization. Since the projection piece is an extension method, there’s no way to resolve dependencies other than passing them in, or service location. With static initialization, I know exactly where to go to look for AutoMapper configuration. Instance-based, you have to pass in your configuration to every single ProjectTo call.

In short, I want static initialization for configuration, but instance-based usage of mapping. Call Mapper.Initialize, but create mapper instances from the static configuration.

Initializing the container and AutoMapper

Before I worry about configuring the container (the IServiceCollection object), I need to initialize AutoMapper. I’ll assume that you’re using Profiles, and I’ll simply scan through a list of assemblies for anything that is a Profile:

private static void AddAutoMapperClasses(IServiceCollection services, IEnumerable<Assembly> assembliesToScan)
{
    assembliesToScan = assembliesToScan as Assembly[] ?? assembliesToScan.ToArray();

    var allTypes = assembliesToScan.SelectMany(a => a.ExportedTypes).ToArray();

    var profiles = allTypes
        .Where(t => typeof(Profile).GetTypeInfo().IsAssignableFrom(t.GetTypeInfo()))
        .Where(t => !t.GetTypeInfo().IsAbstract);

    Mapper.Initialize(cfg =>
    {
        foreach (var profile in profiles)
        {
            cfg.AddProfile(profile);
        }
    });

The assembly list can come from a list of assemblies or types passed in to mark assemblies, or I can just look at what assemblies are loaded in the current DependencyContext (the thing ASP.NET Core populates with discovered assemblies):

public static void AddAutoMapper(this IServiceCollection services)
{
    services.AddAutoMapper(DependencyContext.Default);
}

public static void AddAutoMapper(this IServiceCollection services, DependencyContext dependencyContext)
{
    AddAutoMapperClasses(services,
        dependencyContext.RuntimeLibraries
            .SelectMany(lib => lib.GetDefaultAssemblyNames(dependencyContext).Select(Assembly.Load)));
}

Next, I need to add all value resolvers, type converters, and member value resolvers to the container. Not every value resolver etc. might need to be initialized by the container, and if you don’t pass in a constructor function it won’t use a container, but this is a safeguard in case something needs to resolve these AutoMapper service classes:

var openTypes = new[]
{
    typeof(IValueResolver<,,>),
    typeof(IMemberValueResolver<,,,>),
    typeof(ITypeConverter<,>)
};

foreach (var openType in openTypes)
{
    foreach (var type in allTypes
        .Where(t => t.GetTypeInfo().IsClass)
        .Where(t => !t.GetTypeInfo().IsAbstract)
        .Where(t => t.ImplementsGenericInterface(openType)))
    {
        services.AddTransient(type);
    }
}

I loop through every class, see if it implements the open generic interfaces I’m interested in, and if so, register it as transient in the container. The “ImplementsGenericInterface” helper doesn’t exist in the BCL, but it probably should :) .

Finally, I register the mapper configuration and mapper instances in the container:

services.AddSingleton(Mapper.Configuration);
services.AddScoped<IMapper>(sp => 
  new Mapper(sp.GetRequiredService<IConfigurationProvider>(), sp.GetService));

While the configuration is static, every IMapper instance is scoped to a request, passing in the constructor function from the service provider. This means that AutoMapper will get the correct scoped instances to build its value resolvers, type converters etc.

With that in place, it’s now trivial to add AutoMapper to an ASP.NET Core application. After I create my Profiles that contain my AutoMapper configuration, I instruct the container to add AutoMapper (now released as a NuGet package from the AutoMapper.Extensions.Microsoft.DependencyInjection package):

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();

    services.AddAutoMapper();
}


And as long as I make sure to add this after the MVC services are registered, it correctly loads up all the discovered assemblies and initializes AutoMapper. If not, I can always point the initialization at specific types/assemblies to find Profiles. I can then use AutoMapper statically or instance-based in a controller:

public class UserController {
  private readonly IMapper _mapper;
  private readonly AppContext _dbContext;
  public UserController(IMapper mapper, AppContext dbContext) {
    _mapper = mapper;
    _dbContext = dbContext;
  }
  public IActionResult Index() {
    // ProjectTo comes from AutoMapper.QueryableExtensions
    var users = _dbContext.Users.ProjectTo<UserIndexModel>().ToList();
    return View(users);
  }
  public IActionResult Show(int id) {
    var user = _dbContext.Users.Where(u => u.Id == id).Single();
    var model = _mapper.Map<User, UserIndexModel>(user);
    return View(model);
  }
}
The projections use the static configuration, while the instance-based uses any potential injected services. Just about as simple as it can get!

Other containers

While the new AutoMapper extensions package is specific to ASP.NET Core DI, it’s also how I would initialize and register AutoMapper with any container. Previously, I would lean on DI containers for assembly scanning purposes, finding all Profile classes, but this had the unfortunate side effect that Profiles could themselves have dependencies – a very bad idea! With the pattern above, it should be easy to extend to any other DI container.

Categories: Blogs

MediatR Extensions for Microsoft Dependency Injection Released

Jimmy Bogard - Tue, 07/19/2016 - 21:07

To help those building applications using the new Microsoft DI libraries (used in Orleans, ASP.NET Core, etc.), I pushed out a helper package to register all of your MediatR handlers into the container.


To use, just add the AddMediatR method to wherever you have your service configuration at startup:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddMediatR(typeof(Startup));
}

You can either pass in the assemblies where your handlers are, or you can pass in Type objects from assemblies where those handlers reside. The extension will add the IMediator interface to your services, all handlers, and the correct delegate factories to load up handlers. Then in your controller, you can just use an IMediator dependency:

public class HomeController : Controller
{
  private readonly IMediator _mediator;

  public HomeController(IMediator mediator)
  {
    _mediator = mediator;
  }
  public IActionResult Index()
  {
    var pong = _mediator.Send(new Ping {Value = "Ping"});
    return View(pong);
  }
}

And you’re good to go. Enjoy!

Categories: Blogs

Iterate to Accumulate

Hiccupps - James Thomas - Tue, 07/19/2016 - 06:08
I'm very interested in continual improvement and I experiment to achieve it. This applies to most aspects of my work and life and to the Cambridge Exploratory Workshop on Testing (CEWT) that I founded and now run with Chris George.

After CEWT #1 I solicited opinions, comments and suggestions from the participants and acted on many of them for CEWT #2.

In CEWT #2, in order to provide more opportunity for feedback, we deliberately scheduled some time for reflection on the content, the format and any other aspect of the workshop in the workshop itself. We used a rough-and-ready Stop, Start, Continue format and here's the results, aggregated and slightly edited for consistency:
  • Speaker to present "seed" questions
  • Closing session (Identify common threads, topics; Share our findings more widely)
  • More opposing views (Perhaps set up opposition by inviting talks? Use thinking hats?)
  • Focused practical workshop (small huddles)
  • 10 talks too many?
  • Whole day event (About an hour or two too long; Make it half a day)
  • Running CEWT on a Sunday
  • Earlyish start
  • Voting for talks (perhaps group familiar ones?)
  • Don’t make [everyone] present
  • Prep for different length talks
  • CEWT :)
  • Loved it!
  • Great Venue
  • Good location
  • Lunch, logistics
  • One whole day worked well
  • Varied talks
  • Keep to 10 min talks
  • Short talks & long discussions are good
  • This amount of people
  • Informal, local, open
  • Topic discussions
  • Everyone got a chance to speak
  • Cards for facilitation
  • Flexible agenda
  • Ideas being the priority
  • Energy seemed to drop during the day
Chris and I have now started planning CEWT #3, so we reviewed the retrospective comments and discussed changes we might make, balanced against our own desires (which, we find, differ in places) and the remit of CEWT itself, which is:
  • Cambridge: the local tester community; participants have been to recent meetups.
  • Exploratory: beyond the topic there's no agenda; bring on the ideas.
  • Workshop: not lectures but discussion; not leaders but peers; not handouts but arms open.
  • Testing: and anything relevant to it.
We first discussed the reported decrease in energy levels towards the end of the day during CEWT #2. We'd felt it too. We considered several options, including reducing the length. But we decided for now to keep to a whole day.

We like that length for several reasons, including: it allows conversation to go deep and broad; it allows time for reflection; it allows time for all to participate; it contributes to the distinction between CEWT and local meetups.

So if we're keeping the same length, what else could we try changing to keep energy levels up? The CEWT #2 feedback suggested a couple of directions:
  • stop: 10 talks too many; Don’t make [everyone] present
  • start: More opposing views; Focused practical workshops
We are personally interested in switching to some kind of group activity inside the workshop, maybe even ad hoc lightning talks, so we're going to do something in that direction. We also - after much deliberation - decided to reduce the number of talks and to inject more potential views by increasing the number of participants to 12.

CEWT #1 had eight participants, CEWT #2 had 10. We felt that the social dynamic at those events was good. We are wary of growing to a point where someone might not get a chance to speak on a topic they feel strongly about or have something interesting to contribute to. We will retain cards to facilitate discussion but we know from our own experience, and research amongst other peer workshop groups, that we need to be careful here.

At the two CEWTs to date all participants have presented. Personally I like that aspect of it; it feels inclusive, participatory, about peers sharing. But we are aware that asking people to stand up and talk is a deterrent to some and part of what we're about is encouraging local testers. Participation in a discussion might be an easier next step from meetups than speaking, even in a safe environment like CEWT. So we're going to try having only some participants present talks.

But we also don't want to stop providing an opportunity for people who have something to say and would like to practice presenting in front of an interested, motivated, friendly audience. One of the CEWT #2 participants, Claire, blogged on just that point:
I was asked if I wanted to attend CEWT2. I knew this would involve doing a presentation which I wasn't particularly thrilled about, but the topic really had me chomping at the bit to participate. It was an opportunity for me to finally lay some ghosts to rest about a particularly challenging situation I foolishly allowed to affect me to the extent I thought I was a rubbish tester. I deleted my previous blog and twitter as I no longer had any enthusiasm about testing and wasn't even sure it was a path I wanted to continue down. So, despite being nervous at the thought of presenting I was excited to be in a position to elicit the thoughts from other testers about my experience.
The reactions, questions and suggestions have healed that last little hole in my testing soul. It was a great experience to be in a positive environment amongst other testers, all with different skills and experiences, who I don't really know, all coming together to talk about testing.

Chris and I talked a lot about how to implement the desire to have fewer talks. Some possibilities we covered:
  • invite only a select set of participants to speak
  • ask for pre-submission of papers and choose a set of them 
  • ask everyone to prepare and use voting on the day to decide who speaks
  • ask people when they sign up whether they want to speak
I have some strong opinions here, opinions that I will take a good deal of persuading to change:
  • I don't want CEWT to turn into a conference.
  • I don't want CEWT to turn into a bureaucracy.
  • I don't want anyone to prepare and not get the opportunity to present.
In CEWT #2 we used dot voting to order the talks and suggested that people be prepared to talk for longer (if their talk was voted early) or shorter (if late). As it happened, we decided on the day to change the schedule to let everyone speak for the same length of time but the two-length talk idea wasn't popular, as the stop "Prep for different length talks" feedback notes.

So this time we're going to try asking people whether they want to present or not, expecting that some will not and we'll have a transparent strategy for limiting the number in any case. (Perhaps simply an ordered list of presenters and reserve presenters, as we do for participants.) We'll have a quota of presenters in mind but we haven't finalised that quite yet; not until we've thought some more about the format of the day.

With some presenters and some non-presenters, we're concerned that we don't encourage or create a kind of two-level event with some people (perceived as) active and some passive. You'll notice I haven't referred to attendees in this post; we are about peers, about participation, and we want participants. Part of the CEWT #3 experiment will be to see how that goes on the day.

Clearly the changes we've chosen to make are not the only possible way to accommodate the feedback we received. But we have, consciously, chosen to make some changes. Our commitment here is to continually look to improve the experience and outcomes from the CEWTs (for the participants, the wider community and ourselves) and we believe that openness, experimentation, feedback and evaluation is a healthy way to do that.

Let's see what happens!
Categories: Blogs

HtmlTags 4.1 Released for ASP.NET 4 and ASP.NET Core

Jimmy Bogard - Mon, 07/18/2016 - 20:20

One of the libraries that I use on most projects (but probably don’t talk about it much) is now updated for the latest ASP.NET Core MVC. In order to do so, I broke out the classic ASP.NET and ASP.NET Core pieces into separate NuGet packages:

Since ASP.NET Core supports DI from the start, it’s quite a bit easier to integrate HtmlTags into your ASP.NET Core application. To enable HtmlTags, you can call AddHtmlTags in the method used to configure services in your startup (typically where you’d have the AddMvc method):

services.AddHtmlTags(reg =>
{
    // The convention selector on the next line is illustrative; the original
    // snippet was truncated before the ModifyWith call.
    reg.Labels.IfPropertyIs<bool>()
       .ModifyWith(er => er.CurrentTag.Text(er.CurrentTag.Text() + "?"));
});

The AddHtmlTags method takes a configuration callback, a params array of HtmlConventionRegistry objects, or an entire HtmlConventionLibrary. The one with the configuration callback includes some sensible defaults, meaning you can pretty much immediately use it in your views.

The HtmlTags.AspNetCore package includes extensions directly for IHtmlHelper, so you can use it in your Razor views quite easily:

@Html.Label(m => m.FirstName)
@Html.Input(m => m.FirstName)
@Html.Tag(m => m.FirstName, "Validator")

@Html.Display(m => m.Title)

Since it’s hooked into the DI pipeline, you can make tag builders that pull in a DbContext and populate a list of radio buttons or drop-down items from a table (for example). And since it’s all object-based, your tag conventions are easily testable, unlike tag helpers, which are solely string-based.


Categories: Blogs

Pokémon Go to Agile Go

Pokémon Go is an augmented-reality game that recently launched in Australia, New Zealand, the United States, Germany and a number of other countries. In a nutshell, you search for Pokémon in the Poké world at your actual, geographical location, which you can explore by physically walking around.
Allow me to introduce you to my newly invented game called Agile Go.  Like Pokémon Go, Agile Go is a reality-based game in which you search for people within your “Agile World” (aka, your company) who are exhibiting Agile behaviors aligned with the Agile values and principles.  Take a picture of them and tag it with “Agile Go”.  In Pokémon Go, you capture Pokémon.  In Agile Go, you capture the moment when people are exhibiting Agile behavior.  What Agile behaviors should you look for?  Here are some scenarios to look for:
  • Business and development collaborating together
  • Product Owner or Team welcoming change to requirements
  • Teams self-organizing around the work
  • Team or Product Owner demonstrating an iteration of work
  • Product Owner getting feedback from actual customers
  • Team member applying secondary skills to help others 
  • Anyone applying face-to-face communication 
  • Team identifying work not needed during grooming 
  • Anyone completing an action for improvement 
  • Manager or anyone removing an impediment to progress
If you see someone exhibiting any of these Agile behaviors or others you deem as aligning with the Agile values and principles, take a picture of them, write the Agile behavior they are exhibiting, and tag it with the “Agile Go” logo (see below).  Then share the photograph with them, letting them know that you appreciate them exhibiting positive Agile behaviors! Even consider tweeting their picture with the #agilego hashtag.  Go Agile!

© Agile Go. All rights reserved. Anyone may use the Agile Go logo on a non-profit, non-revenue basis.
Categories: Blogs

Open letter to "CDT Test Automation" reviewers

Chris McMahon's Blog - Thu, 07/14/2016 - 16:23


Tim Western
Alan Page
Keith Klain
Ben Simo
Paul Holland
Alan Richardson
Christin Wiedemann
Albert Gareev
Noah Sussman
Joseph Quaratella

Apropos of my criticism of "Context Driven Approach to Automation in Testing" (I reviewed version 1.04), I ask you to join me in condemning publicly both the tone and the substance of that paper.

If you do support the paper, I ask you to do so publicly.

And regardless of your view, I request that you ask the authors of the paper bearing your names to remove that paper from public view as well as to remove the copy that Keith Klain hosts here.  For the reasons I pointed out, this paper is an impediment to reasonable discussion and it has no place in the modern discourse about test automation.
Categories: Blogs

Getting the Worm

Hiccupps - James Thomas - Thu, 07/14/2016 - 06:57
Will Self wrote about his writing in The Guardian recently:
When I’m working on a novel I type the initial draft first thing in the morning. Really: first thing ... I believe the dreaming and imagining faculties are closely related, such that wreathed in night-time visions I find it possible to suspend disbelief in the very act of making stuff up, which, in the cold light of day would seem utterly preposterous. I’ve always been a morning writer, and frankly I believe 99% of the difficulties novices experience are as a result of their unwillingness to do the same.

I am known (and teased) at work for being up and doing stuff at the crack of dawn and, although I don't aim to wake up early, when it happens I do aim to take advantage. I really do like working (or blogging, or reading) at this time. I feel fresher, more creative, less distracted.

I wouldn't be as aggressive as Self is about others who don't graft along with the sunrise (but he's not alone; even at bedtime I don't have to look hard to find articles like Why Productive People Get Up Insanely Early) because, for me, there are any number of reasons why novice writers, or testers or managers, or others experience difficulties. And I doubt more conscientious attention to an alarm clock would help in most of those cases.

Also, it's known that people differ in chronotype. I came to terms with my larkness a long time ago and now rarely try to go against it by, say, working in the evenings.

How about you?
Categories: Blogs

The Inquiry Method for Test Planning

Google Testing Blog - Wed, 07/13/2016 - 17:20
by Anthony Vallone; updated: July 2016

Creating a test plan is often a complex undertaking. An ideal test plan is accomplished by applying basic principles of cost-benefit analysis and risk analysis, optimally balancing these software development factors:
  • Implementation cost: The time and complexity of implementing testable features and automated tests for specific scenarios will vary, and this affects short-term development cost.
  • Maintenance cost: Some tests or test plans may vary from easy to difficult to maintain, and this affects long-term development cost. When manual testing is chosen, this also adds to long-term cost.
  • Monetary cost: Some test approaches may require billed resources.
  • Benefit: Tests are capable of preventing issues and aiding productivity by varying degrees. Also, the earlier they can catch problems in the development life-cycle, the greater the benefit.
  • Risk: The probability of failure scenarios may vary from rare to likely, and their consequences may vary from minor nuisance to catastrophic.
Effectively balancing these factors in a plan depends heavily on project criticality, implementation details, resources available, and team opinions. Many projects can achieve outstanding coverage with high-benefit, low-cost unit tests, but they may need to weigh options for larger tests and complex corner cases. Mission critical projects must minimize risk as much as possible, so they will accept higher costs and invest heavily in rigorous testing at all levels.
This guide puts the onus on the reader to find the right balance for their project. Also, it does not provide a test plan template, because templates are often too generic or too specific and quickly become outdated. Instead, it focuses on selecting the best content when writing a test plan.

Test plan vs. strategy
Before proceeding, two common methods for defining test plans need to be clarified:
  • Single test plan: Some projects have a single "test plan" that describes all implemented and planned testing for the project.
  • Single test strategy and many plans: Some projects have a "test strategy" document as well as many smaller "test plan" documents. Strategies typically cover the overall test approach and goals, while plans cover specific features or project updates.
Either of these may be embedded in and integrated with project design documents. Both of these methods work well, so choose whichever makes sense for your project. Generally speaking, stable projects benefit from a single plan, whereas rapidly changing projects are best served by infrequently changed strategies and frequently added plans.
For the purposes of this guide, I will refer to both test document types simply as "test plans". If you have multiple documents, apply the advice below across your set of documents.

Content selection
A good approach to creating content for your test plan is to start by listing all questions that need answers. The lists below provide a comprehensive collection of important questions that may or may not apply to your project. Go through the lists and select all that apply. By answering these questions, you will form the contents for your test plan, and you should structure your plan around the chosen content in any format your team prefers. Be sure to balance the factors as mentioned above when making decisions.

  • Do you need a test plan? If there is no project design document or a clear vision for the product, it may be too early to write a test plan.
  • Has testability been considered in the project design? Before a project gets too far into implementation, all scenarios must be designed as testable, preferably via automation. Both project design documents and test plans should comment on testability as needed.
  • Will you keep the plan up-to-date? If so, be careful about adding too much detail, otherwise it may be difficult to maintain the plan.
  • Does this quality effort overlap with other teams? If so, how have you deduplicated the work?

  • Are there any significant project risks, and how will you mitigate them? Consider:
    • Injury to people or animals
    • Security and integrity of user data
    • User privacy
    • Security of company systems
    • Hardware or property damage
    • Legal and compliance issues
    • Exposure of confidential or sensitive data
    • Data loss or corruption
    • Revenue loss
    • Unrecoverable scenarios
    • SLAs
    • Performance requirements
    • Misinforming users
    • Impact to other projects
    • Impact from other projects
    • Impact to company’s public image
    • Loss of productivity
  • What are the project’s technical vulnerabilities? Consider:
    • Features or components known to be hacky, fragile, or in great need of refactoring
    • Dependencies or platforms that frequently cause issues
    • Possibility for users to cause harm to the system
    • Trends seen in past issues

  • What does the test surface look like? Is it a simple library with one method, or a multi-platform client-server stateful system with a combinatorial explosion of use cases? Describe the design and architecture of the system in a way that highlights possible points of failure.
  • What platforms are supported? Consider listing supported operating systems, hardware, devices, etc. Also describe how testing will be performed and reported for each platform.
  • What are the features? Consider making a summary list of all features and describe how certain categories of features will be tested.
  • What will not be tested? No test suite covers every possibility. It’s best to be up-front about this and provide rationale for not testing certain cases. Examples: low-risk areas, low-priority complex cases, areas covered by other teams, features not ready for testing, etc.
  • What is covered by unit (small), integration (medium), and system (large) tests? Always test as much as possible in smaller tests, leaving fewer cases for larger tests. Describe how certain categories of test cases are best tested by each test size and provide rationale.
  • What will be tested manually vs. automated? When feasible and cost-effective, automation is usually best. Many projects can automate all testing. However, there may be good reasons to choose manual testing. Describe the types of cases that will be tested manually and provide rationale.
  • How are you covering each test category? Consider:
    • Will you use static and/or dynamic analysis tools? Both static analysis tools and dynamic analysis tools can find problems that are hard to catch in reviews and testing, so consider using them.
    • How will system components and dependencies be stubbed, mocked, faked, staged, or used normally during testing? There are good reasons to do each of these, and they each have a unique impact on coverage.
    • What builds are your tests running against? Are tests running against a build from HEAD (aka tip), a staged build, and/or a release candidate? If only from HEAD, how will you test release build cherry picks (selection of individual changelists for a release) and system configuration changes not normally seen by builds from HEAD?
  • What kind of testing will be done outside of your team? Examples:
    • Dogfooding
    • External crowdsource testing
    • Public alpha/beta versions (how will they be tested before releasing?)
    • External trusted testers
  • How are data migrations tested? You may need special testing to compare before and after migration results.
  • Do you need to be concerned with backward compatibility? You may own previously distributed clients or there may be other systems that depend on your system’s protocol, configuration, features, and behavior.
  • Do you need to test upgrade scenarios for server/client/device software or dependencies/platforms/APIs that the software utilizes?
  • Do you have line coverage goals?
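The stub/mock/fake distinction raised in the checklist above can be sketched in a few lines of Python. The payment client and `checkout` function here are hypothetical, invented purely for illustration; a real project would substitute its own dependency:

```python
from unittest import mock

def checkout(client, user_id, cents):
    """System under test (hypothetical): charge the user, report success."""
    return client.charge(user_id, cents) == "ok"

# A mock records how it was called and returns canned values.
mock_client = mock.Mock()
mock_client.charge.return_value = "ok"
assert checkout(mock_client, "u1", 500)
mock_client.charge.assert_called_once_with("u1", 500)

# A fake is a working in-memory stand-in with simplified behavior.
class FakePaymentClient:
    def __init__(self):
        self.charges = []

    def charge(self, user_id, cents):
        self.charges.append((user_id, cents))
        return "ok"

fake_client = FakePaymentClient()
assert checkout(fake_client, "u1", 500)
assert fake_client.charges == [("u1", 500)]
```

Each choice shifts coverage: the mock verifies the interaction itself, while the fake exercises the system against realistic (if simplified) behavior.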

Tooling and Infrastructure
  • Do you need new test frameworks? If so, describe these or add design links in the plan.
  • Do you need a new test lab setup? If so, describe these or add design links in the plan.
  • If your project offers a service to other projects, are you providing test tools to those users? Consider providing mocks, fakes, and/or reliable staged servers for users trying to test their integration with your system.
  • For end-to-end testing, how will test infrastructure, systems under test, and other dependencies be managed? How will they be deployed? How will persistence be set up and torn down? How will you handle required migrations from one datacenter to another?
  • Do you need tools to help debug system or test failures? You may be able to use existing tools, or you may need to develop new ones.

  • Are there test schedule requirements? What time commitments have been made, which tests will be in place (or test feedback provided) by what dates? Are some tests important to deliver before others?
  • How are builds and tests run continuously? Most small tests will be run by continuous integration tools, but large tests may need a different approach. Alternatively, you may opt for running large tests as needed.
  • How will build and test results be reported and monitored?
    • Do you have a team rotation to monitor continuous integration?
    • Large tests might require monitoring by someone with expertise.
    • Do you need a dashboard for test results and other project health indicators?
    • Who will get email alerts and how?
    • Will the person monitoring tests simply communicate results verbally to the team?
  • How are tests used when releasing?
    • Are they run explicitly against the release candidate, or does the release process depend only on continuous test results? 
    • If system components and dependencies are released independently, are tests run for each type of release? 
    • Will a "release blocker" bug stop the release manager(s) from actually releasing? Is there an agreement on what are the release blocking criteria?
    • When performing canary releases (aka % rollouts), how will progress be monitored and tested?
  • How will external users report bugs? Consider feedback links or other similar tools to collect and cluster reports.
  • How does bug triage work? Consider labels or categories for bugs so that they land in a triage bucket. Also make sure the teams responsible for filing bugs and/or creating the bug report template are aware of this. Are you using one bug tracker, or do you need to set up an automatic or manual import routine?
  • Do you have a policy of submitting a new test before closing any bug that a test could have caught?
  • How are tests used for unsubmitted changes? If anyone can run all tests against any experimental build (a good thing), consider providing a howto.
  • How can team members create and/or debug tests? Consider providing a howto.

  • Who are the test plan readers? Some test plans are only read by a few people, while others are read by many. At a minimum, you should consider getting a review from all stakeholders (project managers, tech leads, feature owners). When writing the plan, be sure to understand the expected readers, provide them with enough background to understand the plan, and answer all questions you think they will have - even if your answer is that you don’t have an answer yet. Also consider adding contacts for the test plan, so any reader can get more information.
  • How can readers review the actual test cases? Manual cases might be in a test case management tool, in a separate document, or included in the test plan. Consider providing links to directories containing automated test cases.
  • Do you need traceability between requirements, features, and tests?
  • Do you have any general product health or quality goals and how will you measure success? Consider:
    • Release cadence
    • Number of bugs caught by users in production
    • Number of bugs caught in release testing
    • Number of open bugs over time
    • Code coverage
    • Cost of manual testing
    • Difficulty of creating new tests

Categories: Blogs

Fast Screenshot Testing for Android

Testing TV - Tue, 07/12/2016 - 14:55
This presentation explains how Facebook generates fast, deterministic screenshots for Android views, with particular focus on how it does this for the hundreds of different configurations of Android feed stories. The talk discusses how Facebook uses this approach both for iterating fast on user interfaces and for catching regressions.
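The core idea, deterministic rendering plus comparison against recorded reference images, can be sketched in any language. The Python sketch below is not Facebook's actual tooling (which is Java-based for Android); it just records a baseline on the first run and flags any later difference:

```python
from pathlib import Path

def record_or_verify(name, screenshot_bytes, reference_dir="screenshots"):
    """Record the reference image on the first run; on later runs, report a
    regression if the freshly rendered screenshot no longer matches it.
    Deterministic rendering is what makes an exact comparison meaningful."""
    ref = Path(reference_dir) / (name + ".png")
    ref.parent.mkdir(parents=True, exist_ok=True)
    if not ref.exists():
        ref.write_bytes(screenshot_bytes)  # first run: record the baseline
        return "recorded"
    if ref.read_bytes() == screenshot_bytes:
        return "match"
    return "regression"  # pixels changed: keep both images for human review
```

In practice a "regression" result would attach both images to the test report so a human can decide whether the change is intentional.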
Categories: Blogs

Put a Ring on It

Hiccupps - James Thomas - Sat, 07/09/2016 - 11:28

Back in May I responded to the question "Which advice would you give your younger (#Tester) self?" like this:
"Learn to deal with, rather than shy away from, uncertainty. #testing" — James Thomas (@qahiccupps) May 25, 2016

Last week I was reminded of the question as I found myself sketching the same diagram three times for three different people on three different whiteboards.

The diagram represents my mind's-eye view of a problem space, a zone of uncertainty, a set of unresolved questions, a big cloud of don't know with a rather fuzzy border:

What I'll often want to do with this kind of thing is find some way to remove enough uncertainty that I can make my next move. 
For example, perhaps I am being pressed to make a decision about a project where there are many unknowns. I might try to find some aspect of the project to which I can anchor the rest and then give an answer relative to that. Something like this: "Yes, I agreed an approach in principle with Team X and until their prototype confirms the approach our detailed planning can't start."
I've still got a lot of uncertainty about exactly what I will do. But I found enough firm ground - in this case a statement in principle - that I can move the project forward.
In my head, I think of this as putting a band around the cloud and squeezing it:

And I'm left with a cleaner picture, the band effectively containing the uncertainty. Until the conditions that the band represents are confirmed or rejected I don't have to consider the untidy insides. (Which doesn't mean that I can't if I want to, of course.)

A useful heuristic for me is that if I find myself thinking about the insides too much - if something I expect to be in is leaking out - then probably I didn't tighten the band enough and I need to revisit.

When I'm exploring, the band can represent an assumption that I'm testing rather than some action that I've taken: "If this were true, then the remaining uncertainty would look that way and so I should be able to ..."

I like this way of picturing things even though the model itself doesn't help me with specific next moves. What it does do, which I find valuable, is remind me that when I have uncertainty I don't have to work it out in one shot.

P.S. While writing this, I realised that I've effectively written it before, although in a much more technical way: On Being a Test Charter.
Categories: Blogs

AutoMapper 5.0 Released

Jimmy Bogard - Thu, 07/07/2016 - 17:42

Release notes:

Today I pushed out AutoMapper 5.0.1, the culmination of about 9 months of work from myself and many others to build a better, faster AutoMapper. Technically I pushed out a 5.0.0 package last week, but it turns out that almost nobody really pulls down beta packages to submit bugs so this package fixes the bugs reported from the 5.0.0 drop :)

The last 4.x release introduced an instance-based configuration model for AutoMapper, and with 5.0, we’re able to take advantage of that model to focus on speed. So how much faster? In our benchmarks, 20-50x faster. Compared to hand-rolled mappings, we’re still around 8-10x slower, mostly because we’re taking care of null references, providing diagnostics, good exception messages and more.

To get there, we’ve converted the runtime mappings to a single compiled expression, making it as blazing fast as we can. There’s still some micro-optimizations possible, which we’ll look at for the next dot release, but the gains so far have been substantial. Since compiled expressions give you zero stack trace if there’s a problem, we made sure to preserve all of the great diagnostic/error information to figure out how things went awry.
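AutoMapper itself is C#, but the design choice described here, building the mapping plan once and then executing a precompiled function per object, can be illustrated with a rough Python analogy. This is not AutoMapper's actual internals, just the shape of the optimization:

```python
# Rough analogy: build the field-copying plan once per type pair, then
# reuse the compiled function for every object, instead of consulting
# configuration or reflection on each call.

def compile_mapper(fields):
    """Return a function that maps a source object to a dict per the plan."""
    def mapper(src):
        # Per-call work is a plain loop over a precomputed plan.
        return {name: getattr(src, name, None) for name in fields}
    return mapper

class Source:
    def __init__(self, first, last):
        self.first_name = first
        self.last_name = last

# The plan is built once...
map_person = compile_mapper(["first_name", "last_name"])
# ...and applied cheaply to every instance.
assert map_person(Source("Ada", "Lovelace")) == {
    "first_name": "Ada", "last_name": "Lovelace"}
```

The trade-off mentioned above applies here too: a compiled plan is fast, but the error messages it produces need extra care, since the generic machinery is no longer on the stack when something goes wrong.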

We’ve also expanded many of the configuration options, and tightened the focus. Originally, AutoMapper would do things like keep track of every single mapped object during mapping, which made mapping insanely slow. Instead, we’re putting the controls back into the developer’s hands of exactly when to use what feature, and our expression builder builds the exact mapping plan based on how you’ve configured your mappings.

This did mean some breaking changes to the API, so to help ease the transition, I’ve included a 5.0 upgrade guide in the wiki.


Categories: Blogs

Lessons Learned in Test Automation Through Sudoku

For many years, I have recommended Sudoku as a mind training game for testers. I think Sudoku requires some of the same thinking skills that testers need, such as the ability to eliminate invalid possibilities, deduce correct answers and so forth.

Much like an application that may lack documentation, Sudoku only gives you a partial solution and you have to fill in the rest. Unlike software, however, guessing actually prevents you from solving the puzzle, mainly because you don't see the impact of an incorrect guess until it is too late to change it.

My friend Frank Rowland developed an Excel spreadsheet some time ago that he used to help solve Sudoku puzzles by adding macros to identify cells that had only one possible correct value. At the time, I thought that was pretty cool. Of course, you still had to enter one value at a time manually. But I thought it was a good example of using a degree of automation to solve a problem.

Fast forward to last week. I was having lunch with Frank and he whips out his notebook PC and shows me the newest version of the spreadsheet. After listening to a math lecture from The Great Courses, he learned some new approaches for solving Sudoku.

Armed with this new information, Frank was successful in practically automating the solution of a Sudoku puzzle. I say "practically" because at times, some human intervention is required.

Now, I think the spreadsheet is very cool and I think that the approach used to solve the puzzle can also be applied to test automation. The twist is that the automation is not pre-determined as far as the numeric values are concerned. The numbers are derived totally dynamically.

Contrast this with traditional test automation. In the traditional approach to test automation (even keyword-driven), you would be able to place numbers in the cells, but only in a repeatable way - not a dynamic way.

In Frank's approach, the actions are determined based on the previous actions and outcomes. For example, when a block of nine cells is filled, that constrains the possible values in related cells. The macros in this case know how to deduce the other possibilities and can also eliminate the invalid ones. In this case of "Man vs. Machine", the machine wins big time.
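The elimination step described above can be sketched in code. This is a generic "naked singles" pass, not Frank's actual spreadsheet logic: for each empty cell, remove the values already used in its row, column, and 3x3 box, fill any cell left with exactly one candidate, and repeat until no more progress is made.

```python
def candidates(grid, r, c):
    """Values still possible for cell (r, c); 0 marks an empty cell."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)  # top-left of the 3x3 box
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return {v for v in range(1, 10) if v not in used}

def solve_singles(grid):
    """Fill cells that have exactly one candidate until none remain."""
    progress = True
    while progress:
        progress = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    cand = candidates(grid, r, c)
                    if len(cand) == 1:
                        grid[r][c] = cand.pop()
                        progress = True
    return grid
```

Easy puzzles fall to this pass alone; harder ones need further deduction rules layered on top, which is exactly where the occasional human intervention comes in.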

I don't have all the answers and I know that work has been done by others to create this type of dynamic test automation. I just want to present the example and would love to hear your experiences in similar efforts.

I think the traditional view of test automation is limited and fragile. It has helped in establishing repeatable tests for some applications, but the problem is that most applications are too dynamic. This causes the automation to fail. At that point, people often get frustrated and give up.

I've been working with test automation since 1989, so I have seen a lot of ideas come down the pike. I really like the possibilities of test automation with AI.

I hope to get a video posted soon to show more about how this works. Once again, I would also love to hear your feedback.  

Categories: Blogs

Open letter to the Association for Software Testing

Chris McMahon's Blog - Wed, 07/06/2016 - 16:33
To the Association for Software Testing:

Considering the discussion in the software testing community with regard to my blog post "Test is a Ghetto", I ask the Board of the AST to release a statement regarding the relationship of the AST with Keith Klain and Per Scholas, particularly in regard to the lawsuit for fraud filed by Doran Jones (PDF download link).

The AST has a Code of Ethics, and I also ask the AST Board to release a public statement on whether the AST would consider creating an Ethics Committee similar to, or as a part of, the recently created Committee on Standards and Professional Practices.

The yearly election for the Board of the AST happens in just a few weeks, and I hope that the candidates for the Board and the voting members of the Association for Software Testing will consider these requests with the gravity they deserve.
Categories: Blogs

Agile Requirements Exploration: How Testers Add Value – ExpoQA 2016

Agile Testing with Lisa Crispin - Mon, 07/04/2016 - 23:22

Raji Bhamidipati and I co-facilitated a day-long workshop at ExpoQA Madrid in June. We shared techniques testers can use with their teams to build shared understanding at the product, release, feature, iteration and story levels. Participants worked in teams to try out the various techniques and contribute their own experiences with ways to enable shared understanding among the delivery team and customer team.

This was a new workshop for me and Raji. We based it on a tutorial that my co-author Janet Gregory did at a recent conference. It includes some valuable agile business analysis techniques and ideas from Ellen Gottesdiener and Mary Gorman in their book Discover to Deliver. We are grateful for all this terrific material we were allowed to share. Our slide deck is available, but this workshop was about the participants doing and practicing more than about me and Raji talking. I got a lot of new ideas to try myself!

Adding value as testers

We encouraged a mindset shift from bug detection to bug prevention. We shared some essentials for stories and requirements. The INVEST criteria from Bill Wake apply to feature sets and slices of features as well as to individual stories. The 7 Product Dimensions from Ellen Gottesdiener and Mary Gorman have been a huge help to me when discussing proposed features with business experts. We can consider these dimensions together with the Agile Testing Quadrants to help explore different aspects of a feature at the appropriate time.

Table groups practicing techniques to explore requirements

A case study from an imagined tour bus company provided participants with features and requirements to explore. We used our tester’s mindset, the 7 Product Dimensions, INVEST and agile testing quadrants to come up with questions about functional requirements and “non-functional” quality attributes. Each group worked on a different dimension, and shared these on a wall chart of all 7 Product Dimensions.

Techniques to explore requirements

Example story map: one table group’s story map exercise

As we moved down the levels of precision from release down to stories, participants tried out several different ways to explore requirements with customers: user story mapping and personas (see Jeff Patton’s excellent book), context diagrams, process map / flow diagrams, state diagrams, scenarios, business policies, example mapping (see Matt Wynne’s post), and various approaches to guiding development with business facing tests – acceptance test-driven development, behavior driven development, specification by example. Participants practiced writing these acceptance tests together. Combining a story, examples, rules, acceptance tests, and most importantly, conversations, helps us all get on the same page about each requirement.
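As a tiny illustration of turning an example-mapping rule into an executable acceptance test, consider a sketch like the one below. The rule, fare, and examples are all invented here (the workshop's tour bus case study did not specify them); each concrete example from the mapping session becomes one check:

```python
# Hypothetical rule from an imagined tour-bus case study:
# "children under 5 ride free; everyone else pays full fare."

FULL_FARE = 20  # hypothetical fare in euros

def ticket_price(age):
    """Production code that the acceptance examples drive out."""
    return 0 if age < 5 else FULL_FARE

# Examples from the example-mapping session, now executable:
assert ticket_price(3) == 0            # toddler rides free
assert ticket_price(4) == 0            # boundary: still under 5
assert ticket_price(5) == FULL_FARE    # boundary: pays full fare
assert ticket_price(30) == FULL_FARE   # ordinary adult
```

The conversation that produced the rule and examples matters more than the code; the executable form just keeps everyone honest afterwards.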

Obstacles and experiments to overcome them

Throughout the day, participants used the “speed car – abyss” retrospective / futurespective activity to identify what’s holding their team back, what’s helping them move forward, what dragons may lurk in their future, and how to overcome those. A lot of common themes emerged, as well as unique issues and new ideas.

A retro chart from one of the table groups


It wasn’t surprising to see that missing, incorrect, misunderstood and changing requirements drag down many teams. Teams struggle because of poor communication with customers and end users, unavailable product owners (PO), poorly-defined priorities, and dependencies on other teams or pods, either within the company or external. Many teams lack the time, resources and skills to do useful test automation, and are slowed down by a reliance on manual regression testing. Programmers and testers on newer agile teams often aren’t used to working together, and have communication issues. For example, programmers may dismiss bugs reported by testers. Even basics are missing, like CI and being allowed to self-organize their own workflow. Slow feedback loops, lack of testers, too much work, and inflexible deadlines were also common themes.

I was more surprised that some teams are held back by confusion over who should test what, and oversimplifying features and stories. I also hadn’t expected that testers and teams often don’t know how to communicate problems to managers. Teams need to be creative for these types of challenges.


Participants had good ideas early in the workshop to put some gas in their teams’ engines. Many want to try the 7 Product Dimensions. Visualization techniques such as story mapping and process flow diagrams looked like good options to many. We had discussed Three (or four or more) Amigos or Power of Three meetings along with example mapping, which some participants are already using, and others are keen to start. Models such as the test automation pyramid and agile testing quadrants help power some teams’ engines. Pairing is also seen as a good way to overcome drag. Quite a few participants are already doing exploratory testing, and more want to try, including techniques such as investigating competing products.

Some ideas I was reminded of by participants included using the MoSCoW Must/Could/Should/Won’t prioritization method. Several participants cited smaller, self-organizing teams, pods or squads as a key to being able to effectively explore requirements and build the right thing. Team building activities were mentioned as a way to help with that. Someone mentioned a firefighter role, someone to come in and help in sticky situations, which intrigued me. Another great idea is to add people to help coordinate activities among teams and help manage dependencies for delivering software that meets requirements.


Many participants fear the same pitfalls as they’re already experiencing, such as changing requirements, unclear requirements and priorities and adding stories during the iteration. They’re worried that they’ll build the wrong thing, or fail to deliver on time. Broken or non-existent test environments lurk in the abyss. So do miscommunications with the PO and customers, misunderstanding of business rules, and a lack of documentation.

One insight is that teams get stuck into old habits and can’t get out of their comfort zone. They don’t experiment with new ways to discuss examples and business rules with customers, or spend time learning the business domain so they can better understand their needs. Some teams may get tripped up by working on more than one big feature at the same time. Others get stalled during planning, or don’t get test data in time, and then can’t deliver on time. Often they’re faced with unachievable deadlines or at the mercy of bad business decisions and micro-management.

Technical debt is a common pitfall. This also limits a team’s ability to deliver on time. Lack of testing skills and insufficient knowledge transfer can take a team down and result in incorrect implementation of features. Some participants worried that their team would lose sight of the business goals and priorities as they get bogged down with problems and technical debt. Some just don’t have enough people, especially testers, or enough time to get their work done. This also means they’re not being allowed to self-organize. Interestingly, some people feared spending too much time on testing.

Automation was again much discussed. Some teams have no test automation, others focus so much on automation they fall short on other testing activities. They worry about missing edge cases or backward compatibility issues.

Assuming that the business experts will be available for conversations and questions can lead to trouble. Another interesting insight was the possibility of a team confusing agile with ad-hoc, and going off in the weeds.

Another group’s retro chart

My favorite suggestion from a participant to help build a bridge over the abyss is “learn mind reading”. I help with customer support on my team, and mind reading would really help there too! Here are many other great ideas to successfully explore requirements and build shared understanding with business experts.

Many participants plan to apply the INVEST criteria to features at planning and grooming meetings. Using visual techniques such as flow diagrams for complicated user stories is a good way to avoid misunderstandings and make sure edge cases are covered. Simplifying stories and making them independent is also part of our participants’ bridge. Structured, visual discussion frameworks such as story mapping and example mapping help clarify details before testing and coding start. Get examples of desired and undesired feature behavior and business rules. Use personas to help elicit requirements. Slice big stories smaller.

Participants feel it’s important to work in small increments, take “baby steps”. Some participants plan to experiment with BDD and SBE. They thought starting on a small scale or doing a spike would help get buy-in to give it a try. One group suggested combining these approaches with quality attributes to help make sure they think of everything. Others plan to experiment with Kanban and other processes. Finding ways to measure progress is important to see whether various techniques are helping the customer and delivery teams understand what to build and avoid unnecessary rework. Simply agreeing on a definition of done helps.

Another important area for the bridge is educating management. This helps teams get the time and resources they need, along with the ability to self-organize and manage their own workloads. It also may help ensure that stakeholders, designers and other key players are available for conversations and collaborating, and allow them to experiment with new practices such as pairing. They can promote more collaboration. More support from management also helps with finding ways to solve dependency issues among teams and improve cross-team communication.

Some participants want more test automation, including at the unit level, to help build their bridge. This is one way to help keep bugs out of production. With more time available, they can learn the necessary skills.

What will you try?

I agree with the group that suggested passion is an important way to bridge safely over the abyss! We need to be dedicated, disciplined and excited about collaborating across the delivery team and with the customer team. Conversations, examples, tests and requirements at all levels of precision combine to help us delight our customers and end users with great features. As testers, we can contribute so much to help our team deliver value frequently, predictably and sustainably.

What will you try to help your customers and your delivery team collaborate to specify and deliver valuable features?

The post Agile Requirements Exploration: How Testers Add Value – ExpoQA 2016 appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Context Driven Clichés

Today I read a blogpost by Olaf Agterbosch on the ViQiT site with the intriguing title “Context Driven Clichés“. Since it is written in Dutch, I will first translate his post.

Do you recognize the following? In recent years I have often heard the words “Context Driven” when it comes to developments in ICT. Apparently, those using them consider the context in their activities: in other words, the overall environment in which their activities take place. As if nobody kept that in mind before, these people talk about the importance of the environment and the way their products, services and processes have to connect to the context.

I sometimes wonder why people choose to kick in an open door and yell about how well it’s working. But what’s even worse: people parroting it. Everyone jumps on the bandwagon and before you know it the latest hype is born. A hype that is fully wrung out by, for example, consultants, who will not fail to emphasize the importance of this development. You really cannot live without it!

Old wine in new bottles. Old wine in Agile bags. Old wine in faded bags…

Apparently there is marketing potential.

Context Driven testing is exemplary of an over-hyped branch on the Agile tree. The funny thing is that there is nothing wrong with it: being aware of the environment in which you perform your job. What I regret is that many people get sucked into this development, running along with the latest hype.

Does it pay? Possibly. As a prediction of the trends for the coming years, we will be occupied with:

Context Driven Requirements engineering;
Context Driven Application Management;
Context Driven Directed Lining;
Context Driven Project Management;
Context Driven CRM;
Context Driven Innovation;
Context Driven Migration;
Context Driven Whatever etc.

What did you say? We were doing that for long, weren’t we? We obviously haven’t been paying attention.

Now let us just go back to work. And yes, work, like the good ones among us have always done it: in a Context-Driven way. Or Risk-Based. Or Business Case Driven. But please stop those cheap semantic tricks. Try to be less weighty when performing the same trick again; the trick won’t get better for it. Try simply to create real added value. Perhaps less familiar to you and others, but effective!

Not sure what his problem with context-driven is…

Fortunately, I hear about people becoming more and more context-driven. To me, being context-driven is not just keeping the context or environment in mind; it is way more than that. As I wrote in a post on the DEWT blog: "Context-driven testing made my testing more personal. Not doing stuff everybody does, but it encouraged me to develop my own style. It is a mindset, a paradigm and a culture. It is not only about what I do, it is more about who I am!"

A hype? Maybe, because it gets more and more attention; although I think it isn't. Far from it! According to the Free Dictionary, a hype is:

  1. Excessive publicity and the ensuing commotion
  2. Exaggerated or extravagant claims made especially in advertising or promotional material
  3. An advertising or promotional ploy
  4. Something deliberately misleading; a deception

I can’t see why context-driven testing would be a hype. Exaggerated? Misleading? A promotional ploy? Excessive publicity?

Old wine in new bags? I don't think so. The saying means "things are presented differently, but not fundamentally changed." I think context-driven testing is fundamentally different. Of course testers have been taking context into consideration for years. But how well do they do that? I still hear things like "that's how we do things here" or "you have to play by the standard/procedure" quite often. I almost never hear testers speak about serious context analysis and adapting their approaches accordingly. And there is a lot more to context-driven testing than taking context into consideration. To name a few things:

  • developing and using skills to effectively support the software development project
  • teaching testers to be less dependent on documentation
  • modeling various aspects of the software product
  • diversifying tactics, approaches and techniques
  • thinking skills such as logical reasoning, using heuristics and critical thinking
  • dealing with complexity and ambiguity, coping with half answers and changes

In the last paragraph Olaf states that the good testers among us have always been using a context-driven approach. Really? How does he define good testers? And if they use a context-driven approach, why is he complaining? Unfortunately, I see too many testers not using context-driven approaches and creating a lot of waste!

Then he continues: "try to be less weighty when performing the same trick again. The trick won't get better doing this. Try to simply create better real added value". If he looks at testing as performing a trick, I can see why Olaf sees context-driven testing as a cheap semantic trick! The opposite is true: adding real value is exactly what context-driven testing is trying to do. Context-driven testers do this by focusing on their skills, using heuristics, considering cost versus value in everything they do, continuous learning by deliberate practice, etc. When I look at the most commonly used test approaches in the Netherlands (TMap and ISTQB), I wonder how they add value by focusing on standards, document-heavy procedures, use of many templates, best practices and using the same approach every time.

That leaves me with one question: what exactly is Olaf's problem with context-driven testing?

Categories: Blogs

Data-driven tests in Junit5.0.0-SNAPSHOT

Markus Gaertner - Sat, 07/02/2016 - 19:58

It’s been a while since I last wrote code. Back in late April, however, I found myself in a Coding Dojo at the Düsseldorf Softwerkskammer meet-up working on the Mars Rover kata. I have a story to share from that meeting. However, since I tried to reproduce the code we ended up with that night, and decided to give JUnit5 (and Java 8) a go for it, I ended up with a struggle.

Back in the JUnit4 days I used the Parameterized runner quite often for data-driven tests, though I never remembered the signature of the @Parameters method. The Mars Rover kata also includes some behavior that I wanted to run through a data-driven test, but how do you do that in JUnit5? I couldn’t find good answers on the Internet, so I decided to put my solution up here – probably for lots of critique.

Please note that I used JUnit 5.0.0-SNAPSHOT, which is a later version than the alpha but probably not the final one.

Besides Java 8 support, JUnit5 offers some interesting new things. JUnit5 now comes with extension capabilities that let you influence the test lifecycle, and with ways to resolve parameters for your test methods and test class constructors. And then there are TestFactories for DynamicTests. Whoa, quite a lot of new stuff.

First I tried parameter resolvers. But then I would have needed to keep track of the parameters, and the parameter resolver would have been called more than once. So, combining it with an extension might work? No, I couldn’t make that work either. Dynamic tests, then, are the way to go.

So, here is an example of what I ended up with. We have a Direction class with a method called turnLeft(). The idea is that if the Rover is headed NORTH and turns left (by 90 degrees), it will be facing WEST.
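The listing below is an approximation of that test class, not the exact code from the dojo: the enum ordering, helper names and test data are illustrative. To keep the sketch runnable without the JUnit jar on the classpath, a minimal stand-in replaces org.junit.jupiter.api.DynamicTest; in a real test class you would import JUnit5's DynamicTest.dynamicTest and annotate the factory method with @TestFactory.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class DirectionTurnLeftTest {

    // Compass headings ordered so that turnLeft is simply "next value":
    // NORTH -> WEST -> SOUTH -> EAST -> NORTH (90 degrees counter-clockwise).
    enum Direction {
        NORTH, WEST, SOUTH, EAST;

        Direction turnLeft() {
            return values()[(ordinal() + 1) % values().length];
        }
    }

    // Minimal stand-in for org.junit.jupiter.api.DynamicTest so this sketch
    // compiles without the JUnit jar; in a real test class you would call
    // DynamicTest.dynamicTest(displayName, executable) instead.
    static class DynamicTest {
        final String displayName;
        final Runnable executable;

        DynamicTest(String displayName, Runnable executable) {
            this.displayName = displayName;
            this.executable = executable;
        }
    }

    // Test data kept in a field: pairs of (starting heading, expected
    // heading after turnLeft), replacing JUnit4's @Parameters method.
    private final List<Direction[]> testData = Arrays.asList(
            new Direction[] { Direction.NORTH, Direction.WEST },
            new Direction[] { Direction.WEST, Direction.SOUTH },
            new Direction[] { Direction.SOUTH, Direction.EAST },
            new Direction[] { Direction.EAST, Direction.NORTH });

    // In JUnit5 this method would carry @TestFactory; the engine would then
    // execute each element of the returned Stream as a separate test.
    Stream<DynamicTest> turningLeft() {
        return testData.stream().map(pair -> new DynamicTest(
                displayName(pair[0], pair[1]),
                () -> assertTurnLeft(pair[0], pair[1])));
    }

    private String displayName(Direction from, Direction to) {
        return "turning left from " + from + " should face " + to;
    }

    // The assertion is wrapped in a helper so the dynamicTest call stays short.
    private void assertTurnLeft(Direction from, Direction expected) {
        if (from.turnLeft() != expected) {
            throw new AssertionError(
                    "expected " + expected + " but was " + from.turnLeft());
        }
    }

    public static void main(String[] args) {
        // Emulate what the JUnit engine would do with the factory's stream.
        new DirectionTurnLeftTest().turningLeft().forEach(test -> {
            test.executable.run();
            System.out.println(test.displayName + " ... ok");
        });
    }
}
```

Running main walks the same stream the JUnit engine would consume and executes all four generated tests.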

Some notes:

  • I kept a collection of test data in a field of the test class. This is somewhat similar to the old way of annotating a method with @Parameters in JUnit4, though you can now get rid of the Object[] and use a private test data class per test class. That at least is the solution I preferred.
  • For the @TestFactory method you have several possibilities for the return type; I decided to use Stream here. As I haven’t programmed too much in Java 8, I am not sure whether my usage is appropriate. The conversion of the testData from the Collection is quite straight-forward, I found.
  • I wrapped the assertion for each operation in a helper method to avoid making the call to dynamicTest more convoluted than necessary, and I generate a descriptive string for each test in a method of its own. I think you can come up with better ways to generate the test descriptions. Wrapping the assertion seemed unavoidable, though: the lambda expression together with the description already makes the call to dynamicTest less readable than I would like. I think there is more improvement possible.
  • Note that you can have several @TestFactory methods on your test class. So when writing a test for turning right, you can provide another TestFactory and reuse the test data. I’ll leave that as an exercise for the inspired reader of my blog.

So, this is what I ended up with. I think there is still room for improvement, especially when you compare the result with what you might write in tools like Spock.

P.S.: I ran an earlier version of this past Marc Philipp – one of the JUnit5 originators – and he told me that they will be working on a more elegant solution for data-driven tests, probably for one of the next releases of JUnit5.


Categories: Blogs