
Contoso University updated to ASP.NET Core

Jimmy Bogard - Fri, 10/21/2016 - 17:33

I pushed out a new repository, Contoso University Core, that updates my “how we do MVC” sample app to ASP.NET Core. It’s still on the full .NET Framework, but I plan to push out a .NET Core version as well.

It uses all of the latest packages I’ve built for the OSS I use, developed for ASP.NET Core applications. Here’s the Startup, for example:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc(opt =>
        {
            opt.Conventions.Add(new FeatureConvention());
            opt.ModelBinderProviders.Insert(0, new EntityModelBinderProvider());
        })
        .AddRazorOptions(options =>
        {
            // {0} - Action Name
            // {1} - Controller Name
            // {2} - Area Name
            // {3} - Feature Name
            // Replace normal view location entirely
            options.ViewLocationExpanders.Add(new FeatureViewLocationExpander());
        })
        .AddFluentValidation(cfg => { cfg.RegisterValidatorsFromAssemblyContaining<Startup>(); });

    services.AddScoped(_ => new SchoolContext(Configuration["Data:DefaultConnection:ConnectionString"]));
    services.AddHtmlTags(new TagConventions());
}

Still missing are unit/integration tests; that’s next. Enjoy!

Categories: Blogs

He Said Captain

Hiccupps - James Thomas - Fri, 10/21/2016 - 10:41
A few months ago, as I was walking my two daughters to school, one of their classmates gave me the thumbs up and shouted "heeeyyy, Captain!"

Young as the lad was, I congratulated myself that someone had clearly recognised my innate leadership capabilities and felt compelled to verbalise his respect for them, and me. Chest puffed out I strutted across the playground, until one of my daughters pointed out that the t-shirt I was wearing had a Captain America star on the front of it. Doh!

Today, as I was getting dressed, my eldest daughter asked to choose a t-shirt for me to wear, and picked the Captain America one. "Do you remember the time ..." she said, and burst out laughing at my recalled vain stupidity.

Young as my daughter is, her laughter is well-founded and a useful lesson for me. I wear a virtual t-shirt at work, one with Manager written on it. People no doubt afford me respect, or at least deference, because of it. I hope they also afford me respect because of my actions. But from my side it can be hard to tell the difference. So I'll do well to keep any strutting in check.
Categories: Blogs

Advanced Android Espresso Testing

Testing TV - Fri, 10/21/2016 - 07:49
Do you test your Android apps? It’s okay if you don’t – historically the tools had not been stellar. But they have gotten much better, and I am going to show you my favorite, instrumentation testing with Espresso. The Espresso testing framework provides APIs for writing UI tests to simulate user interactions within a single […]
Categories: Blogs

Developer on Fire! podcast: Just try stuff!

Agile Testing with Lisa Crispin - Mon, 10/17/2016 - 21:22

I was honored to be interviewed by Dave Rael for his Developer on Fire! podcast. Please give it a listen and let me know if you share any of my experiences with small experiments, collaborating with customer and delivery team members, imposter syndrome, or whatever!

Developer on Fire podcast

I’ve listened to many great interviews on Dave’s podcast, and I highly recommend it. Dave is trying to have a more diverse lineup of guests, so if you would like to be on the podcast, or know someone (especially a woman or other minority in the software world) whom you’d love to hear interviewed, let me know and I’ll pass it along.

The post Developer on Fire! podcast: Just try stuff! appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

And Now Repeat

Hiccupps - James Thomas - Sat, 10/15/2016 - 06:57

As we were triaging that day's bug reports, the Dev Manager and I, we reached one that I'd filed. After skimming it to remind himself of the contents, the Dev Manager commented "ah yes, here's one of your favourite M.O.s ..."

In this case I'd created a particular flavour of an object by a specific action and then found that I could reapply the action to cause the object to become corrupted. Fortunately for our product, this kind of object is created only rarely and there's little occasion - although there are valid reasons - to do what I did with one.

The Dev Manager carried on "... if you can find a way to connect something that links out back to itself, or to make something that takes input read its own output, or to make something and then try to remake it, or stuff it back into itself ... you will."

Fascinating. It should come as no surprise to find that those with a different perspective to us see different things in us. And, in fact, I was not surprised to find that I use this kind of approach. But once I was aware that others see it as a thing and observe value in it, I could feed that back into our testing consciously.

Connecting my output to my input to my output ...
Categories: Blogs

MediatR Pipeline Examples

Jimmy Bogard - Thu, 10/13/2016 - 21:02

A while ago, I blogged about using MediatR to build a processing pipeline for requests in the form of commands and queries in your application. MediatR is a library I built (well, extracted from client projects) to help organize my architecture into a CQRS architecture with distinct messages and handlers for every request in your system.

So when processing requests gets more complicated, we often rely on a mediator pipeline to provide a means for these extra behaviors. It doesn’t always show up – I’ll start without one before deciding to add it. I’ve also not built it directly into MediatR – because, frankly, it’s hard and there are existing tools to do so with modern DI containers. First, let’s look at the simplest pipeline that could possibly work:

public class MediatorPipeline<TRequest, TResponse>
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner)
    {
        _inner = inner;
    }

    public TResponse Handle(TRequest message)
    {
        return _inner.Handle(message);
    }
}

Nothing exciting here; it just calls the inner handler, the real handler. But we have a baseline onto which we can layer additional behaviors.

Let’s get something more interesting going!

Contextual Logging and Metrics

Serilog has an interesting feature where it lets you define contexts for logging blocks. With a pipeline, this becomes trivial to add to our application:

public class MediatorPipeline<TRequest, TResponse>
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner)
    {
        _inner = inner;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        {
            return _inner.Handle(message);
        }
    }
}

In our logs, we’ll now see a logging block right before we enter our handler, and right after we exit. We can do a bit more, what about metrics? Also trivial to add:

using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
using (Metrics.Time(Timers.MediatRRequest))
{
    return _inner.Handle(message);
}

That Time class is just a simple wrapper around the .NET Timer classes, with some configuration checking etc. Those are the easy ones; what about something more interesting?

Validation and Authorization

Oftentimes, we have to share handlers between different applications, so it’s important to have a framework-agnostic means of handling cross-cutting concerns. Rather than bury our concerns in framework or application-specific extensions (like, say, an action filter), we can instead embed this behavior in our pipeline. First, with validation, we can use a tool like Fluent Validation with validator handlers for a specific type:

public interface IMessageValidator<in T>
{
    IEnumerable<ValidationFailure> Validate(T message);
}

What’s interesting here is that our message validator is contravariant, meaning I can have a validator of a base type work for messages of a derived type. That means we can declare common validators for base types or interfaces that your message inherits/implements. In practice this lets me share common validation amongst multiple messages simply by implementing an interface.
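As a sketch of how that plays out (the `IHaveDateRange` interface, `CreateCourse` message, and `DateRangeValidator` below are my own hypothetical names, not from the post):

```csharp
// A hypothetical shared interface implemented by several messages.
public interface IHaveDateRange
{
    DateTime Start { get; }
    DateTime End { get; }
}

// A hypothetical message implementing the shared interface.
public class CreateCourse : IHaveDateRange
{
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
}

// One validator covers every message that implements IHaveDateRange.
public class DateRangeValidator : IMessageValidator<IHaveDateRange>
{
    public IEnumerable<ValidationFailure> Validate(IHaveDateRange message)
    {
        if (message.End < message.Start)
            yield return new ValidationFailure("End", "End date must not precede the start date.");
    }
}

// Contravariance (the 'in T' on IMessageValidator) makes this assignment legal,
// so the container can hand this validator to the CreateCourse pipeline:
IMessageValidator<CreateCourse> validator = new DateRangeValidator();
```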

Inside my pipeline, I can execute my validation by taking a dependency on the validators for my message:

public class MediatorPipeline<TRequest, TResponse>
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IEnumerable<IMessageValidator<TRequest>> _validators;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner,
        IEnumerable<IMessageValidator<TRequest>> validators)
    {
        _inner = inner;
        _validators = validators;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        using (Metrics.Time(Timers.MediatRRequest))
        {
            var failures = _validators
                .SelectMany(v => v.Validate(message))
                .Where(f => f != null)
                .ToList();

            if (failures.Any())
                throw new ValidationException(failures);

            return _inner.Handle(message);
        }
    }
}

All of my errors are bundled up into the exception thrown. The downside of this approach is that I’m using exceptions for control flow, so if that’s a problem, I can instead wrap my responses in some sort of Result object that carries any validation failures. In practice, exceptions seem fine for the applications we build.
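A minimal sketch of that Result-object alternative (the `Result<T>` type here is my own illustration, not part of MediatR or the post):

```csharp
// Hypothetical wrapper: carry validation failures in the response
// instead of throwing an exception for control flow.
public class Result<T>
{
    public T Value { get; private set; }
    public IReadOnlyList<ValidationFailure> Failures { get; private set; }
    public bool IsSuccess { get { return Failures.Count == 0; } }

    public static Result<T> Success(T value)
    {
        return new Result<T> { Value = value, Failures = new ValidationFailure[0] };
    }

    public static Result<T> Failure(IReadOnlyList<ValidationFailure> failures)
    {
        return new Result<T> { Failures = failures };
    }
}
```

The pipeline would then return `Result<TResponse>` rather than `TResponse`, and callers inspect `IsSuccess` instead of catching a `ValidationException`.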

Again, my calling code INTO my handler (the Mediator) has no knowledge of these new behaviors, nor does my handler. I go to one spot to augment and extend behaviors across my entire system. Keep in mind, however, that I still place my validators beside my message, handler, view, etc. using feature folders.

Authorization is similar, where I define an authorizer of a message:

public interface IMessageAuthorizer
{
    void Evaluate<TRequest>(TRequest request) where TRequest : class;
}

Then in my pipeline, check authorization:

public class MediatorPipeline<TRequest, TResponse>
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IEnumerable<IMessageValidator<TRequest>> _validators;
    private readonly IMessageAuthorizer _authorizer;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner,
        IEnumerable<IMessageValidator<TRequest>> validators,
        IMessageAuthorizer authorizer)
    {
        _inner = inner;
        _validators = validators;
        _authorizer = authorizer;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        using (Metrics.Time(Timers.MediatRRequest))
        {
            _authorizer.Evaluate(message);

            var failures = _validators
                .SelectMany(v => v.Validate(message))
                .Where(f => f != null)
                .ToList();

            if (failures.Any())
                throw new ValidationException(failures);

            return _inner.Handle(message);
        }
    }
}

The actual implementation of the authorizer will go through a series of security rules, find matching rules, and evaluate them against my request. Some examples of security rules might be:

  • Do any of your roles have permission?
  • Are you part of the ownership team of this resource?
  • Are you assigned to a special group that this resource is associated with?
  • Do you have the correct training to perform this action?
  • Are you in the correct geographic location and/or citizenship?

Things can get pretty complicated, but again, all encapsulated for me inside my pipeline.
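A sketch of what such an authorizer might look like (the `ISecurityRule` abstraction and the injected `IPrincipal` are my own illustration of the description above, not code from the post):

```csharp
// Hypothetical rule abstraction: each rule decides whether it applies
// to a request type and, if so, whether the current user may proceed.
public interface ISecurityRule
{
    bool AppliesTo(Type requestType);
    bool IsAllowed(object request, IPrincipal user);
}

public class MessageAuthorizer : IMessageAuthorizer
{
    private readonly IEnumerable<ISecurityRule> _rules;
    private readonly IPrincipal _currentUser;

    public MessageAuthorizer(IEnumerable<ISecurityRule> rules, IPrincipal currentUser)
    {
        _rules = rules;
        _currentUser = currentUser;
    }

    public void Evaluate<TRequest>(TRequest request) where TRequest : class
    {
        // Find the rules that match this request, then evaluate each one.
        var matching = _rules.Where(r => r.AppliesTo(typeof(TRequest)));

        if (matching.Any(rule => !rule.IsAllowed(request, _currentUser)))
            throw new UnauthorizedAccessException(
                "Request " + typeof(TRequest).Name + " was denied by a security rule.");
    }
}
```

The individual rules (role checks, ownership, group membership, training, geography) each become one small `ISecurityRule` implementation registered with the container.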

Finally, what about potential augmentations or reactions to a request?

Pre/post processing

In addition to some specific processing needs, like logging, metrics, authorization, and validation, there are things I can’t predict that one message or group of messages might need. For those, I can build some generic extension points:

public interface IPreRequestProcessor<in TRequest>
{
    void Handle(TRequest request);
}

public interface IPostRequestProcessor<in TRequest, in TResponse>
{
    void Handle(TRequest request, TResponse response);
}

public interface IResponseProcessor<in TResponse>
{
    void Handle(TResponse response);
}

Next I update my pipeline to include calls to these extensions (if they exist):

public class MediatorPipeline<TRequest, TResponse>
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IEnumerable<IMessageValidator<TRequest>> _validators;
    private readonly IMessageAuthorizer _authorizer;
    private readonly IEnumerable<IPreRequestProcessor<TRequest>> _preProcessors;
    private readonly IEnumerable<IPostRequestProcessor<TRequest, TResponse>> _postProcessors;
    private readonly IEnumerable<IResponseProcessor<TResponse>> _responseProcessors;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner,
        IEnumerable<IMessageValidator<TRequest>> validators,
        IMessageAuthorizer authorizer,
        IEnumerable<IPreRequestProcessor<TRequest>> preProcessors,
        IEnumerable<IPostRequestProcessor<TRequest, TResponse>> postProcessors,
        IEnumerable<IResponseProcessor<TResponse>> responseProcessors)
    {
        _inner = inner;
        _validators = validators;
        _authorizer = authorizer;
        _preProcessors = preProcessors;
        _postProcessors = postProcessors;
        _responseProcessors = responseProcessors;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        using (Metrics.Time(Timers.MediatRRequest))
        {
            _authorizer.Evaluate(message);

            foreach (var preProcessor in _preProcessors)
                preProcessor.Handle(message);

            var failures = _validators
                .SelectMany(v => v.Validate(message))
                .Where(f => f != null)
                .ToList();

            if (failures.Any())
                throw new ValidationException(failures);

            var response = _inner.Handle(message);

            foreach (var postProcessor in _postProcessors)
                postProcessor.Handle(message, response);

            foreach (var responseProcessor in _responseProcessors)
                responseProcessor.Handle(response);

            return response;
        }
    }
}

So what kinds of things might I accomplish here?

  • Supplementing my request with additional information not to be found in the original request (in one case, barcode sequences)
  • Data cleansing or fixing (for example, a scanned barcode needs padded zeroes)
  • Limiting results of paged result models via configuration
  • Notifications based on the response

All sorts of things I could put inside the handlers themselves, but when I want to apply a general policy across many handlers, the pipeline accomplishes it quite easily.

Whether you have specific or generic needs, a mediator pipeline can be a great place to apply domain-centric behaviors to all requests, or only to matching requests based on generic rules, across your entire application.

Categories: Blogs

Faster Websites with WebPageTest

Testing TV - Wed, 10/12/2016 - 14:54
Modern users expect more than ever from web applications. Unfortunately, they are also consuming applications more frequently from low bandwidth and low power devices – which strains developers to not only nail the user experience, but also the application performance. WebPageTest is a free and open source web performance testing tool that equips developers with […]
Categories: Blogs

Hackable Projects - Pillar 1: Code Health

Google Testing Blog - Tue, 10/11/2016 - 18:25
By: Patrik Höglund
Introduction

Software development is difficult. Projects often evolve over several years, under changing requirements and shifting market conditions, impacting developer tools and infrastructure. Technical debt, slow build systems, poor debuggability, and increasing numbers of dependencies can weigh down a project. The developers get weary, and cobwebs accumulate in dusty corners of the code base.

Fighting these issues can be taxing and feel like a quixotic undertaking, but don’t worry — the Google Testing Blog is riding to the rescue! This is the first article of a series on “hackability” that identifies some of the issues that hinder software projects and outlines what Google SETIs usually do about them.

According to Wiktionary, hackable is defined as:
hackable ‎(comparative more hackable, superlative most hackable)
  1. (computing) That can be hacked or broken into; insecure, vulnerable. 
  2. That lends itself to hacking (technical tinkering and modification); moddable.

Obviously, we’re not going to talk about making your product more vulnerable (by, say, rolling your own crypto or something equally unwise); instead, we will focus on the second definition, which essentially means “something that is easy to work on.” This has become the main focus for SETIs at Google as the role has evolved over the years.

In Practice

In a hackable project, it’s easy to try things and hard to break things. Hackability means fast feedback cycles that offer useful information to the developer.

This is hackability:
  • Developing is easy
  • Fast build
  • Good, fast tests
  • Clean code
  • Easy running + debugging
  • One-click rollbacks
In contrast, what is not hackability?
  • Broken HEAD (tip-of-tree)
  • Slow presubmit (i.e. checks running before submit)
  • Builds take hours
  • Incremental build/link > 30s
  • Flaky tests
  • Can’t attach debugger
  • Logs full of uninteresting information
The Three Pillars of Hackability

There are a number of tools and practices that foster hackability. When everything is in place, it feels great to work on the product. Basically no time is spent on figuring out why things are broken, and all time is spent on what matters, which is understanding and working with the code. I believe there are three main pillars that support hackability. If one of them is absent, hackability will suffer. They are:

Pillar 1: Code Health

“I found Rome a city of bricks, and left it a city of marble.”
   -- Augustus
Keeping the code in good shape is critical for hackability. It’s a lot harder to tinker and modify something if you don’t understand what it does (or if it’s full of hidden traps, for that matter).
Tests

Unit and small integration tests are probably the best things you can do for hackability. They’re a support you can lean on while making your changes, and they contain lots of good information on what the code does. It isn’t hackability to boot a slow UI and click buttons on every iteration to verify your change worked - it is hackability to run a sub-second set of unit tests! In contrast, end-to-end (E2E) tests generally help hackability much less (and can even be a hindrance if they, or the product, are in sufficiently bad shape).

Figure 1: the Testing Pyramid.
I’ve always been interested in how you actually make unit tests happen in a team. It’s about education. Writing a product such that it has good unit tests is actually a hard problem. It requires knowledge of dependency injection, testing/mocking frameworks, language idioms and refactoring. The difficulty varies by language as well. Writing unit tests in Go or Java is quite easy and natural, whereas in C++ it can be very difficult (and it isn’t exactly ingrained in C++ culture to write unit tests).

It’s important to educate your developers about unit tests. Sometimes, it is appropriate to lead by example and help review unit tests as well. You can have a large impact on a project by establishing a pattern of unit testing early. If tons of code gets written without unit tests, it will be much harder to add unit tests later.

What if you already have tons of poorly tested legacy code? The answer is refactoring and adding tests as you go. It’s hard work, but each line you add a test for is one more line that is easier to hack on.
Readable Code and Code Review

At Google, “readability” is a special committer status that is granted per language (C++, Go, Java and so on). It means that a person not only knows the language and its culture and idioms well, but also can write clean, well tested and well structured code. Readability literally means that you’re a guardian of Google’s code base and should push back on hacky and ugly code. The use of a style guide enforces consistency, and code review (where at least one person with readability must approve) ensures the code upholds high quality. Engineers must take care to not depend too much on “review buddies” here but really make sure to pull in the person that can give the best feedback.

Requiring code reviews naturally results in small changes, as reviewers often get grumpy if you dump huge changelists in their lap (at least if reviewers are somewhat fast to respond, which they should be). This is a good thing, since small changes are less risky and are easy to roll back. Furthermore, code review is good for knowledge sharing. You can also do pair programming if your team prefers that (a pair-programmed change is considered reviewed and can be submitted when both engineers are happy). There are multiple open-source review tools out there, such as Gerrit.

Nice, clean code is great for hackability, since you don’t need to spend time to unwind that nasty pointer hack in your head before making your changes. How do you make all this happen in practice? Put together workshops on, say, the SOLID principles, unit testing, or concurrency to encourage developers to learn. Spread knowledge through code review, pair programming and mentoring (such as with the Readability concept). You can’t just mandate higher code quality; it takes a lot of work, effort and consistency.
Presubmit Testing and Lint

Consistently formatted source code aids hackability. You can scan code faster if its formatting is consistent. Automated tooling also aids hackability. It really doesn’t make sense to waste any time on formatting source code by hand. You should be using tools like gofmt, clang-format, etc. If the patch isn’t formatted properly, you should see something like this (example from Chrome):

$ git cl upload
Error: the media/audio directory requires formatting. Please run
git cl format media/audio.

Source formatting isn’t the only thing to check. In fact, you should check pretty much anything you have as a rule in your project. Should other modules not depend on the internals of your modules? Enforce it with a check. Are there already inappropriate dependencies in your project? Whitelist the existing ones for now, but at least block new bad dependencies from forming. Should your app work on Android 16 phones and newer? Add linting, so you don’t use level 17+ APIs without gating at runtime. Should your project’s VHDL code always place-and-route cleanly on a particular brand of FPGA? Invoke the layout tool in your presubmit and stop the submit if the layout process fails.

Presubmit is the most valuable real estate for aiding hackability. You have limited space in your presubmit, but you can get tremendous value out of it if you put the right things there. You should stop all obvious errors here.

It aids hackability to have all this tooling so you don’t have to waste time going back and breaking things for other developers. Remember you need to maintain the presubmit well; it’s not hackability to have a slow, overbearing or buggy presubmit. Having a good presubmit can make it tremendously more pleasant to work on a project. We’re going to talk more in later articles on how to build infrastructure for submit queues and presubmit.
Single Branch And Reducing Risk

Having a single branch for everything, and putting risky new changes behind feature flags, aids hackability, since branches and forks often amass tremendous risk when it’s time to merge them. Single branches smooth out the risk. Furthermore, running all your tests on many branches is expensive. However, a single branch can have negative effects on hackability if Team A depends on a library from Team B and gets broken by Team B a lot. Having some kind of stabilization on Team B’s software might be a good idea there. This article covers such situations, and how to integrate often with your dependencies to reduce the risk that one of them will break you.
Loose Coupling and Testability

Tightly coupled code is terrible for hackability. To take the most ridiculous example I know: I once heard of a computer game where a developer changed a ballistics algorithm and broke the game’s chat. That’s hilarious, but hardly intuitive for the poor developer that made the change. A hallmark of loosely coupled code is that it’s upfront about its dependencies and behavior and is easy to modify and move around.

Loose coupling, coherence and so on is really about design and architecture and is notoriously hard to measure. It really takes experience. One of the best ways to convey such experience is through code review, which we’ve already mentioned. Education on the SOLID principles, rules of thumb such as tell-don’t-ask, discussions about anti-patterns and code smells are all good here. Again, it’s hard to build tooling for this. You could write a presubmit check that forbids methods longer than 20 lines or cyclomatic complexity over 30, but that’s probably shooting yourself in the foot. Developers would consider that overbearing rather than a helpful assist.
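As a tiny illustration of the tell-don't-ask rule of thumb mentioned above (the `Account` class is my own example, not from the article):

```csharp
public class Account
{
    public decimal Balance { get; private set; }

    // Tell-don't-ask: the object enforces its own invariant, instead of
    // every caller asking for Balance, checking it, and mutating it.
    public void Withdraw(decimal amount)
    {
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient funds.");
        Balance -= amount;
    }
}
```

Callers simply say `account.Withdraw(amount)` and let the object apply its rules, rather than interrogating its state and duplicating the check at every call site.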

SETIs at Google are expected to give input on a product’s testability. A few well-placed test hooks in your product can enable tremendously powerful testing, such as serving mock content for apps (this enables you to meaningfully test app UI without contacting your real servers, for instance). Testability can also have an influence on architecture. For instance, it’s a testability problem if your servers are built like a huge monolith that is slow to build and start, or if it can’t boot on localhost without calling external services. We’ll cover this in the next article.
Aggressively Reduce Technical Debt

It’s quite easy to add a lot of code and dependencies and call it a day when the software works. New projects can do this without many problems, but as the project becomes older it becomes a “legacy” project, weighed down by dependencies and excess code. Don’t end up there. It’s bad for hackability to have a slew of bug fixes stacked on top of unwise and obsolete decisions, and understanding and untangling the software becomes more difficult.

What constitutes technical debt varies by project and is something you need to learn from experience. It simply means the software isn’t in optimal form. Some types of technical debt are easy to classify, such as dead code and barely-used dependencies. Some types are harder to identify, such as when the architecture of the project has grown unfit to the task from changing requirements. We can’t use tooling to help with the latter, but we can with the former.

I already mentioned that dependency enforcement can go a long way toward keeping people honest. It helps make sure people are making the appropriate trade-offs instead of just slapping on a new dependency, and it requires them to explain to a fellow engineer when they want to override a dependency rule. This can prevent unhealthy dependencies like circular dependencies, abstract modules depending on concrete modules, or modules depending on the internals of other modules.

There are various tools available for visualizing dependency graphs as well. You can use these to get a grip on your current situation and start cleaning up dependencies. If you have a huge dependency you only use a small part of, maybe you can replace it with something simpler. If an old part of your app has inappropriate dependencies and other problems, maybe it’s time to rewrite that part.

The next article will be on Pillar 2: Debuggability.
Categories: Blogs

Hackable Projects - Pillar 2: Debuggability

Google Testing Blog - Tue, 10/11/2016 - 18:12
By: Patrik Höglund

This is the second article in our series on Hackability; also see the first article.

“Deep into that darkness peering, long I stood there, wondering, fearing, doubting, dreaming dreams no mortal ever dared to dream before.” -- Edgar Allan Poe

Debuggability can mean being able to use a debugger, but here we’re interested in a broader meaning. Debuggability means being able to easily find what’s wrong with a piece of software, whether it’s through logs, statistics or debugger tools. Debuggability doesn’t happen by accident: you need to design it into your product. The amount of work it takes will vary depending on your product, programming language(s) and development environment.

In this article, I am going to walk through a few examples of how we have aided debuggability for our developers. If you do the same analysis and implementation for your project, perhaps you can help your developers illuminate the dark corners of the codebase and learn what truly goes on there.
Figure 1: computer log entry from the Mark II, with a moth taped to the page.
Running on Localhost

Read more on the Testing Blog: Hermetic Servers by Chaitali Narla and Diego Salas

Suppose you’re developing a service with a mobile app that connects to that service. You’re working on a new feature in the app that requires changes in the backend. Do you develop in production? That’s a really bad idea, as you must push unfinished code to production to work on your change. Don’t do that: it could break your service for your existing users. Instead, you need some kind of script that brings up your server stack on localhost.

You can probably run your servers by hand, but that quickly gets tedious. In Google, we usually use fancy python scripts that invoke the server binaries with flags. Why do we need those flags? Suppose, for instance, that you have a server A that depends on a server B and C. The default behavior when the server boots should be to connect to B and C in production. When booting on localhost, we want to connect to our local B and C though. For instance:

b_serv --port=1234 --db=/tmp/fakedb
c_serv --port=1235
a_serv --b_spec=localhost:1234 --c_spec=localhost:1235

That makes it a whole lot easier to develop and debug your server. Make sure the logs and stdout/stderr end up in some well-defined directory on localhost so you don’t waste time looking for them. You may want to write a basic debug client that sends HTTP requests or RPCs or whatever your server handles. It’s painful to have to boot the real app on a mobile phone just to test something.

A localhost setup is also a prerequisite for making hermetic tests, where the test invokes the above script to bring up the server stack. The test can then run, say, integration tests among the servers or even client-server integration tests. Such integration tests can catch protocol drift bugs between client and server, while being super stable by not talking to external or shared services.
Debugging Mobile Apps

First, mobile is hard. The tooling is generally less mature than for desktop, although things are steadily improving. Again, unit tests are great for hackability here. It’s really painful to always load your app on a phone connected to your workstation to see if a change worked. Robolectric unit tests and Espresso functional tests, for instance, run on your workstation and do not require a real phone. XCTest and EarlGrey give you the same on iOS.

Debuggers ship with Xcode and Android Studio. If your Android app ships JNI code, it’s a bit trickier, but you can attach GDB to running processes on your phone. It’s worth spending the time figuring this out early in the project, so you don’t have to guess what your code is doing. Debugging unit tests is even better and can be done straightforwardly on your workstation.

When Debugging gets Tricky

Some products are harder to debug than others. One example is hard real-time systems, since their behavior is so dependent on timing (and you better not be hooked up to a real industrial controller or rocket engine when you hit a breakpoint!). One possible solution is to run the software on a fake clock instead of a hardware clock, so the clock stops when the program stops.
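The fake-clock idea might look like this as a Python sketch (a hard real-time system would implement the same idea at a much lower level): the program depends on a clock interface instead of reading hardware time directly, so a test or a stopped debugger can control time explicitly.

```python
import time

class RealClock:
    """Production clock: delegates to the real monotonic clock."""
    def now(self):
        return time.monotonic()

class FakeClock:
    """Test clock: time only moves when advance() is called."""
    def __init__(self, start=0.0):
        self._now = start
    def now(self):
        return self._now
    def advance(self, seconds):
        self._now += seconds

def deadline_passed(clock, deadline):
    """Timing logic written against the clock interface, not the hardware."""
    return clock.now() >= deadline
```

Any code written against the interface can now be stepped through at leisure: the clock simply doesn't move until you tell it to.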

Another example is multi-process sandboxed programs such as Chromium. Since the browser spawns one renderer process per tab, how do you even attach a debugger to it? The developers have made it quite a lot easier with debugging flags and instructions. For instance, this wraps gdb around each renderer process as it starts up:

chrome --renderer-cmd-prefix='xterm -title renderer -e gdb --args'

The point is, you need to build these kinds of things into your product; this greatly aids hackability.

Proper Logging

Read more on the Testing Blog: Optimal Logging by Anthony Vallone

Getting the right logs when you need them is hackability. It’s easy to fix a crash if you get a stack trace from the error location. Such a stack trace is far from guaranteed, for instance in C++ programs, but that is something you should not stand for. For instance, Chromium had a problem where renderer process crashes didn’t print in test logs, because the test was running in a separate process. This was later fixed, and that kind of investment is worthwhile: a clean stack trace is worth a lot more than a “renderer crashed” message.

Logs are also useful for development. It’s an art to determine how much logging is appropriate for a given piece of code, but it is a good idea to keep the default level of logging conservative and give developers the option to turn on more logging for the parts they’re working on (example: Chromium). Too much logging isn’t hackability. This article elaborates further on this topic.
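In a Python codebase, for instance, the conservative-default/opt-in pattern can be sketched with the standard logging module (the module names below are hypothetical):

```python
import logging

# Conservative default: only warnings and above are printed.
logging.basicConfig(level=logging.WARNING)

def enable_verbose(module_name):
    """Turn on DEBUG output for one module without flooding the rest."""
    logging.getLogger(module_name).setLevel(logging.DEBUG)

# A developer hacking on the renderer opts in to just that module's logs:
enable_verbose("myapp.renderer")
```

Everything else in the program keeps logging at the quiet default, which is the point: more signal for the part you're working on, no extra noise from the rest.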

Logs should also be properly symbolized for C/C++ projects; a naked list of addresses in a stack trace isn’t very helpful. This is easy if you build for development (e.g. with -g), but if the crash happens in a release build it’s a bit trickier. You then need to build the same binary with the same flags and use addr2line / ndk-stack / etc to symbolize the stack trace. It’s a good idea to build tools and scripts for this so it’s as easy as possible.
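A small helper along those lines might look like this in Python; the binary path and addresses are illustrative, and the addr2line flags used are `-e` (select the binary), `-f` (print function names) and `-C` (demangle C++ symbols):

```python
import subprocess

def addr2line_command(binary, addresses):
    """Build the addr2line invocation for a binary and a list of raw addresses."""
    return ["addr2line", "-e", binary, "-f", "-C"] + list(addresses)

def symbolize(binary, addresses):
    """Run addr2line and return one output line per symbolized frame part."""
    result = subprocess.run(addr2line_command(binary, addresses),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip().splitlines()
```

For release crashes this only works if `binary` was rebuilt with the same flags as the released one, plus debug info, so the addresses resolve to the right symbols.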

Monitoring and Statistics

It aids hackability if developers can quickly understand what effect their changes have in the real world. For this, monitoring tools such as Stackdriver for Google Cloud are excellent. If you’re running a service, such tools can help you keep track of request volumes and error rates. This way you can quickly detect that 30% increase in request errors, and roll back that bad code change, before it does too much damage. It also makes it possible to debug your service in production without disrupting it.

System Under Test (SUT) Size

Tests and debugging go hand in hand: it’s a lot easier to target a piece of code in a test than in the whole application. Small and focused tests aid debuggability, because when a test breaks there isn’t an enormous SUT to look for errors in. These tests will also be less flaky. This article discusses this fact at length.

Figure 2. The smaller the SUT, the more valuable the test.
You should try to keep the above in mind, particularly when writing integration tests. If you’re testing a mobile app with a server, what bugs are you actually trying to catch? If you’re trying to ensure the app can still talk to the server (i.e. catching protocol drift bugs), you should not involve the UI of the app. That’s not what you’re testing here. Instead, break out the signaling part of the app into a library, test that directly against your local server stack, and write separate tests for the UI that only test the UI.

Smaller SUTs also greatly aid test speed, since there’s less to build, less to bring up, and less to keep running. In general, strive to keep the SUT as small as possible through whatever means necessary. It will keep the tests smaller, faster and more focused.

Sources

Figure 1: By Courtesy of the Naval Surface Warfare Center, Dahlgren, VA., 1988. - U.S. Naval Historical Center Online Library Photograph NH 96566-KN, Public Domain,
Categories: Blogs

Rands in Review

Hiccupps - James Thomas - Sat, 10/08/2016 - 11:04
Do you work with people? Are you a person? Can you read?

Yes. Yes. Yes? Read on.

Are you reading a book?

Yes? Go and find that book and put it away now. Go on, and then come back. No? Good news, I am about to help you out.

Ready? OK: you should immediately read Managing Humans by Michael Lopp because it contains something of value to you. I can't tell you what it is, because I don't know you and your interests and your circumstances and your experiences and your co-workers and the other myriad things that make up who you are with your working head on.

But what I can tell you is that there is something - at least one thing, and probably more - in here that will have you nodding along in agreement, or gawping at the perspective that challenges your own, or shaking your head at the unwarranted certainty of a curt categorisation of colleagues and then shortly afterwards finding yourself mentally fitting your company's staff to it, and adding the archetypes that you need that Lopp doesn't describe.

Lopp - or Rands on his blog, Rands in Repose - writes from vast experience across a bunch of companies you have heard of. Slack, for now, but also Pinterest, Apple, Netscape and Borland amongst others. His prose has the patina of a practitioner and, as with Managing the Unmanageable by Mantle and Lichty that I reviewed recently, if you have any experience of working in a tech company you'll find episodes or characters or atmospheres that you can use as touch points to satisfy yourself that Lopp is a reliable witness, a plausible primary source, even if some of his stories and their participants are composites.

For me, interested at the moment specifically in resources that a new manager might appreciate, the message isn't so different from Mantle and Lichty's either. You can read my summary of that but Lopp characterises his take on it succinctly in the first words of the book:
Don't be a prick.

The 50 or so stylish essays that follow (in the third edition; I haven't read earlier ones) cover management and leadership of others, interpersonal relationships at any level, and self-management, all intertwined with the challenges of having to operate in a corporate environment, the structure and logic of which you'll have a better grasp of at some times than others.

Lopp offers short shrift to the oft-discussed distinction between management and leadership and a sharp pin to the balloon that is the management ego in the book's glossary (which is also online):
  • Leader — A better title than "manager."
  • Manager — The person who signs your review.

The same glossary defines some terms that you've probably come across but would have preferred not to:
  • Human Capital — HR term that refers to the people you work with. You should never ever say this.
  • Individual Contributor — HR term that describes a single employee who has no direct reports. Don’t say this either.

Yet page 1 of Part 1 says (my emphasis)
We all have managers, and whether you’re the director of engineering or an individual contributor, one of your jobs is to figure your manager out.

A case of do what I say, not what I do, perhaps? I'm not so sure. More likely just one of those things. This book describes how Lopp goes about making sense of, and dealing with, the world. He explains at length how and where and why he gathers his data, how he analyses it, what conclusions he draws from it, and the actions he takes as a result.

But that can't prepare him for all eventualities, can't account for all the googlies, can't prevent or catch all errors, can't predict all the points at which the rails and the wheels lose contact. All day long, we're in the real world, dealing with real humans in real time. And one of the key messages in the book, and laid out very early in it, is this:
Every single person with whom you work has a vastly different set of needs. They are chaotic beautiful snowflakes.

So when a usually courteous colleague does something outlandishly rude it ... could be malicious. Could be a mishap. Could mean they hate you, and they've always hated you but manage to hide it. Could mean their marriage is breaking up. Could mean their project failed and they feel responsible. Could mean they've got indigestion. Could mean they have a new job and can't find a way to tell you. Could mean they just got a text telling them their mum won the lottery. Could mean nothing at all, it just happened. And now you have to deal with it, and with something else from the next person and a third thing with the one after that.
 ... that means great managers have to work terribly hard to see the subtle differences in each of the people working with them. See. See the people who work with you. They say repetition improves long-term memory, so let’s say it once more. You must see the people who work with you.

As I reflect on the book while I'm writing this - and the writing is key for me to understand my reflections - I begin to see it as a guide for making a manual: a manual for becoming a better manager (big and small "m") of people. Lopp's descriptions are of his way of making his manual. And though his manual changes over time, his guidance on constructing it does not or, at least, not as much.

You might feel that there's a little too much cop out, places where he says that he can't tell you what the best course of action is because your mileage will vary. But, for me, that's a self-evident truth. Any book that purports to tell me the right way to act in a situation devoid of the context of that particular instance of that kind of situation with those actors at that time is going to need some other extremely redeeming feature to get a place on my shelves. What you get from Lopp is the justification, the method or analysis (or both) and his result, with an implicit or explicit invitation to find your own.

For example, when he breaks meeting attendees down into personalities such as The Snake, Curveball Kurt, Sally Synthesizer, Chatty Patty, The Anchor and others he's not telling you that your meetings must have the same cast (although doubtless some recognition will exist). He is saying that if you were to observe with the same diligence that he has and does, you could find your own heuristics for navigating the tedium and politics, and heading off those ridiculous outcomes.

Lopp was a tester earlier in his career and one of my favourite blogs of his is The QA Mindset which ends like this:
It’s not that QA can discover what is wrong, they intimately understand what is right and they unfailingly strive to push the product in that direction. I believe these are humans you want in the building.

This book is the QA mindset applied to interaction with other people in the face of their, and your, idiosyncrasies and the final chapter, titled Chaotic, Beautiful Snowflakes, reminds us of that:
The hard work of great leadership isn’t just managing the expected tasks that we can predict—it’s the art of successfully traversing the unexpected.

Yes. Yes. Yes.
Image: Google Books 

Glory Be

Hiccupps - James Thomas - Thu, 10/06/2016 - 06:13

Aware that I'm looking at resources for new managers at the moment, one of my team came across an article and pinged me a quick IM:
Do you agree with this article?

The article in question was Are You a Leader or a Glorified Individual Contributor? by Joe Contrera.

I looked at the article. It's in two parts. The first is a series of statements - mostly absolutes - about being a leader or an individual contributor, neither of which terms are defined except implicitly in the statements. The second is a sequence of "questions" (actually also statements) which permit only true/false answers, with responses being tallied at the end to determine whether the reader is closer to being an effective leader or an individual contributor.

The article pushes some of my buttons and you don't have to look very hard to see a couple of pieces of evidence for that peeking out of the previous paragraph. Here's some more, for the button I label unjustified absolutes.
After awhile your frustration builds and so either you start trying to micro-manage your peoples' behaviors or you throw your hands up and jump in and do the work yourself because it's easier and faster.

I guess I'd want to say that frustration might build, the responses given are possible responses, ...
After all it was your ability to get results that got you promoted in the first place.

Well, yes, perhaps that was the reason. But perhaps there was no-one else, or perhaps those kinds of results aren't the key thing in this team at this time, or perhaps I stepped into the breach when there was a catastrophe and now management don't feel able to take the position away from me, or ...
What becomes so frustrating is that for the life of you ... you can't understand why folks won't or don't see things the same way you do.

That could be the case. Or there are any number of other things that I might be frustrated about. For instance that no-one in my team is prepared to put an opposing point of view when I need people to bounce ideas off.

It's not much different in the second half, although the game changes to one of answering true or false to statements such as this one:
When you get frustrated with the progress others are making ... you step in by having accountability conversations with your people to get them back on track.

I might do that for that reason. Or I might do that for another reason. Or I might do something else for the same reason. Or I might ask what the problem is directly. Or I might poke around in the bug tickets, or commit record to see if I can understand the issue before I do more. I might speak to their project leads. I might speak to consumers of the output of my people to see whether they're getting what they need, ...

Which isn't to say that there's nothing of relevance about the piece: in my experience, and in the experience of other managers I speak to, there is frequently compromise in your ability to directly contribute to the team's output (in the sense that you have less capacity to do the same kinds of work as someone who reports to you) when you become a leader.

But - and this perspective isn't covered in the article I don't think - that lack can be compensated for by your ability to indirectly contribute. (And this is something you do as an individual.) You naturally have a different perspective from outside of the work. You can be looking wider, deeper, further into the future for icebergs, longer into the past for precedent. You can be looking for patterns across projects, across team members, across teams.

Your contribution can then be in, say, guiding the work in a direction that you hope will avoid problems of the kind you've seen before, or that you think will come from dependency on, or integration with, some other project. You can take action with your team, or in other teams. You can inform your people, or not, ...

I've been critical here, and  Joe is welcome to pick apart one of my pieces and how it fails to align with his personal beliefs, experiences, prejudices, preferences and biases in return if he cares to. But, really, I do it only to give some context into which to place my main interest: the original question. Here it is again:
Do you agree with this article?

When I spoke to the person who asked, I found that it was a throwaway question, tagged onto a link in a quick message, tossed in my direction in passing. But it was still a question. And - until we spoke - it was a question I could take at face value, and for my initial reconnoitre, I did.

And, having skimmed the article, and before forming any deep conclusions, I found I had my own questions about the question. Here's a few:
  • Do I agree with everything stated in the first half of this article? 
  • Do I agree with the apparent precepts of this article? (There's some kind of spectrum between leader and individual contributor.)
  • Do I agree with the concept of this article? (That it is possible to place someone on a leader/contributor spectrum on the basis of accepting/rejecting 10 statements.)
  • Do I agree with this implementation of the concept? (That these 10 questions enable that evaluation.)
  • Do I agree with the evaluation criteria in this article? (That the scoring mechanism implemented tells us what it purports to?)
  • Do I agree with the evaluation I received when I provided true/false answers? (That I'm not much of a leader.)
  • Is a one-word answer sufficient? (Perhaps just "yes" or "no" will satisfy the questioner.)
  • Should I try to summarise my positive and negative responses?
  • Should I give a thorough review of the article?
  • Should I justify my response or is a statement of it sufficient?
  • Is this a rhetorical question? (It just means "Here's something I found that I thought you might like.")
  •  ...

I could go on. The original question leaves me with a lot of scope for answering because it's both very specific ("agree or not") but at the same time very open ("with unspecified factors of this article"). In this case, the article itself has so many things it's possible to agree or disagree with (including a section designed to force me to agree or disagree with things) that finding a particular angle to take in a response is even more problematic. That is, if I care to attempt to answer the question honestly, assuming good intent on behalf of the questioner, and in a way that I think will satisfy the questioner's needs.

And often that's the way it is at work, and particularly in vocations concerned with building novel things, and particularly for testers whose stock-in-trade is exploring the unknown.

To pick just three motivations for such questions: we might simply ask questions carelessly, or perhaps we have insufficient knowledge about the area to ask questions sensibly even if we take care, or we might deliberately ask very open questions so as not to constrain the thought processes of the person we're requesting information or opinion from.

However these questions are asked, they put a burden on the person responding to not only find data that could form the basis of a response, but to understand the range of possible questions being asked, and then to formulate an answer which includes sufficient of those possibilities in a sufficiently consumable fashion that there's a reasonable chance of the answer being useful in some respect.

I feel like I am on both ends of this problem all day every day. My default when requesting is to try to be as specific as I can (or as I think is necessary given the context and the people involved) about what I'm asking for, or open about the fact that I'm unsure what I know or want. As a leader I will sometimes deliberately go against this in order to illustrate some point to the person I'm asking, or perhaps as a training exercise. My default when answering is to be prepared to ask for clarification. When I can't get it, I try to be clear about which aspect of the question I'm covering at any point in my response, particularly if the answers are meta-answers.

And that's just common sense, don't you agree?

More “Strong Style Pairing” Experiences

Agile Testing with Lisa Crispin - Thu, 10/06/2016 - 04:59
equine driving pair

In an equine team, both must be equally strong.

Following on to my previous post about strong style pairing for testing, I must say that this takes practice! And I think strong style pairing may not be the best approach for all pairing situations, but I am learning more benefits of it.

I had a golden opportunity to pair with my awesome developer teammate Glen Ivey to write some exploratory testing charters for a critical project we have underway. We agreed to try strong style pairing. After a couple of hours doing this, we weren’t sure it was the best approach. It kind of felt like one of us (ok, mainly me) dictating to the other what text to type. Later on, when I paired with my tester teammate Chad, we had a similar experience. I had the context for what should go in the charter, and I was just telling him what to type. Hmmm.

And when opportunities presented themselves where strong style pairing would really help (something I knew about and Glen didn’t), I screwed up and took control of the keyboard. I should have let him take control and guided him on what to do so he could discover the features for himself. That would have made a good learning experience. Doing something is a better way to learn than watching someone else do it. Creating a new habit takes a lot of practice and discipline, so I have to keep working at this.

Glen was much better at having me take the keyboard to walk through some of the production and test code, and that activity helped us think of a lot of exploratory testing charters that I’d never have thought of on my own. For me at that point, the strong style pairing delivered a big benefit.

Since we were pairing on charters, we naturally referred to Elisabeth Hendrickson’s Explore It! book. Glen focused on what resources we’d use for each charter. Not only did that get us thinking more deeply about how to scope charters, it prompted us to put in helpful information for whoever happens to pick up any given charter.

Brain in my fingertips

So much of what I know resides only in my fingertips, and that can make strong style pairing a challenge. Today, Chad and I were pairing to explore a GitHub integration with our product. I totally told him the wrong syntax to commit a change. After we got an error, I had to imagine typing it myself to get it right. I think that says something about how brains work. But, thanks to the strong style pairing, he also had the opportunity to try out areas of our app he hadn’t used before, and it was much more useful for him to have control of the keyboard to learn and practice. Overall, we confirmed expected behavior for some features, and found a couple of potential issues in others, and I think we did that faster than if we hadn’t been pairing.

Pairing can feel hard

I’m going to keep practicing strong style pairing. These past few days I’ve been exploring areas of our app that I’ve largely forgotten. It feels easier, and less stressful, to poke around on my own. And I might be right about that in some cases. But I want to take advantage of the opportunities to get a fresh set of eyes on these areas, while at the same time helping a teammate learn about those areas. I believe that pairing for testing helps us improve our ability to build quality in.

It’s so exciting and rewarding to pair test with fellow testers and developers. I’d love to hear your own pair testing stories.



Fixing Performance Problems

Testing TV - Wed, 10/05/2016 - 17:10
A lot of people know the basics of performance testing, but most will have a vague knowledge at best centered around one or two favorite tools which are always available. What happens when these tools are taken away? How do you apply your knowledge to other tools in a useful way? Video producer:

Accountability for What You Say is Dangerous and That’s Okay

James Bach's Blog - Sat, 10/01/2016 - 21:33

[Note: I offered Maaret Pyhäjärvi the right to review this post and suggest edits to it before I published it. She declined.]

A few days ago I was keynoting at the New Testing Conference, in New York City, and I used a slide that has offended some people on Twitter. This blog post is intended to explore that and hopefully improve the chances that if you think I’m a bad guy, you are thinking that for the right reasons and not making a mistake. It’s never fun for me to be a part of something that brings pain to other people. I believe my actions were correct, yet still I am sorry that I caused Maaret hurt, and I will try to think of ways to confer better in the future.

Here’s the theme of this post: Getting up in front of the world to speak your mind is a dangerous process. You will be misunderstood, and that will feel icky. Whether or not you think of yourself as a leader, speaking at a conference IS an act of leadership, and leadership carries certain responsibilities.

I long ago learned to let go of the outcome when I speak in public. I throw the ideas out there, and I do that as an American Aging Overweight Left-Handed Atheist Married Father-And-Father-Figure Rough-Mannered Bearded Male Combative Aggressive Assertive High School Dropout Self-Confident Freedom-Loving Sometimes-Unpleasant-To-People-On-Twitter Intellectual. I know that my ideas will not be considered in a neutral context, but rather in the context of how people feel about all that. I accept that.  But, I have been popular and successful as a speaker in the testing world, so maybe, despite all the difficulties, enough of my message and intent gets through, overall.

What I can’t let go of is my responsibility to my audience and the community at large to speak the truth and to do so in a compassionate and reasonable way. Regardless of what anyone else does with our words, I believe we speakers need to think about how our actions help or harm others. I think a lot about this.

Let me clarify. I’m not saying it’s wrong to upset people or to have disagreement. We have several different culture wars (my reviewers said “do you have to say wars?”) going on in the software development and testing worlds right now, and they must continue or be resolved organically in the marketplace of ideas. What I’m saying is that anyone who speaks out publicly must try to be cognizant of what words do and accept the right of others to react.

Although I’m surprised and certainly annoyed by the dark interpretations some people are making of what I did, the burden of such feelings is what I took on when I first put myself forward as a public scold about testing and software engineering, a quarter century ago. My annoyance about being darkly interpreted is not your problem. Your problem, assuming you are reading this and are interested in the state of the testing craft, is to feel what you feel and think what you think, then react as best fits your conscience. Then I listen and try to debug the situation, including helping you debug yourself while I debug myself. This process drives the evolution of our communities. Jay Philips, Ash Coleman, Mike Talks, Ilari Henrik Aegerter, Keith Klain, Anna Royzman, Anne-Marie Charrett, David Greenlees, Aaron Hodder, Michael Bolton, and my own wife all approached me with reactions that helped me write this post. Some others approached me with reactions that weren’t as helpful, and that’s okay, too.

Leadership and The Right of Responding to Leaders

In my code of conduct, I don’t get to say “I’m not a leader.” I can say no one works for me and no one has elected me, but there is more to leadership than that. People with strong voices and ideas gain a certain amount of influence simply by virtue of being interesting. I made myself interesting, and some people want to hear what I have to say. But that comes with an implied condition that I behave reasonably. The community, over time, negotiates what “reasonable” means. I am both a participant and a subject of those negotiations. I recommend that we hold each other accountable for our public, professional words. I accept accountability for mine. I insist that this is true for everyone else. Please join me in that insistence.

People who speak at conferences are tacitly asserting that they are thought leaders– that they deserve to influence the community. If that influence comes with a rule that “you can’t talk about me without my permission” it would have a chilling effect on progress. You can keep to yourself, of course; but if you exercise your power of speech in a public forum you cannot cry foul when someone responds to you. Please join me in my affirmation that we all have the right of response when a speaker takes the microphone to keynote at a conference.

Some people have pointed out that it’s not okay to talk back to performers in a comedy show or Broadway play. Okay. So is that what a conference is to you? I guess I believe that conferences should not be for show. Conferences are places for conferring. However, I can accept that some parts of a conference might be run like infomercials or circus acts. There could be a place for that.

The Slide

Here is the slide I used the other day:


Before I explain this slide, try to think what it might mean. What might its purposes be? That’s going to be difficult, without more information about the conference and the talks that happened there. Here are some things I imagine may be going through your mind:

  • There is someone whose name is Maaret who James thinks he’s different from.
  • He doesn’t trust nice people. Nice people are false. Is Maaret nice and therefore he doesn’t trust her, or does Maaret trust nice people and therefore James worries that she’s putting herself at risk?
  • Is James saying that niceness is always false? That’s seems wrong. I have been nice to people whom I genuinely adore.
  • Is he saying that it is sometimes false? I have smiled and shook hands with people I don’t respect, so, yes, niceness can be false. But not necessarily. Why didn’t he put qualifying language there?
  • He likes debate and he thinks that Maaret doesn’t? Maybe she just doesn’t like bad debate. Did she actually say she doesn’t like debate?
  • What if I don’t like debate, does that mean I’m not part of this community?
  • He thinks excellence requires attention and energy and she doesn’t?
  • Why is James picking on Maaret?

Look, if all I saw was this slide, I might be upset, too. So, whatever your impression is, I will explain the slide.

Like I said I was speaking at a conference in NYC. Also keynoting was Maaret Pyhäjärvi. We were both speaking about the testing role. I have some strong disagreements with Maaret about the social situation of testers. But as I watched her talk, I was a little surprised at how I agreed with the text and basic concepts of most of Maaret’s actual slides, and a lot of what she said. (I was surprised because Maaret and I have a history. We have clashed in person and on Twitter.) I was a bit worried that some of what I was going to say would seem like a rehash of what she just did, and I didn’t want to seem like I was papering over the serious differences between us. That’s why I decided to add a contrast slide to make sure our differences weren’t lost in the noise. This means a slide that highlights differences, instead of points of connection. There were already too many points of connection.

The slide was designed specifically:

  • for people to see who were in a specific room at a specific time.
  • for people who had just seen a talk by Maaret which established the basis of the contrast I was making.
  • about differences between two people who are both in the spotlight of public discourse.
  • to express views related to technical culture, not general social culture.
  • to highlight the difference between two talks for people who were about to see the second talk that might seem similar to the first talk.
  • for a situation where both I and Maaret were present in the room during the only time that this slide would ever be seen (unless someone tweeted it to people who would certainly not understand the context).
  • as talking points to accompany my live explanation (which is on video and I assume will be public, someday).
  • for a situation where I had invited anyone in the audience, including Maaret, to ask me questions or make challenges.

These people had just seen Maaret’s talk and were about to see mine. In the room, I explained the slide and took questions about it. Maaret herself spoke up about it, for which I publicly thanked her. It wasn’t something I was posting with no explanation or context. Nor was it part of the normal slides of my keynote.

Now I will address some specific issues that came up on Twitter:

1. On Naming Maaret

Maaret has expressed the belief that no one should name another person in their talk without getting their permission first. I vigorously oppose that notion. It’s completely contrary to the workings of a healthy society. If that principle is acceptable, then you must agree that there should be no free press. Instead, I would say if you stand up and speak in the guise of an expert, then you must be personally accountable for what you say. You are fair game to be named and critiqued. And the weird thing is that Maaret herself, regardless of what she claims to believe, behaves according to my principle of freedom to call people out. She, herself, tweeted my slide and talked about me on Twitter without my permission. Of course, I think that is perfectly acceptable behavior, so I’m not complaining. But it does seem to illustrate that community discourse is more complicated than “be nice” or “never cause someone else trouble with your speech” or “don’t talk about people publicly unless they gave you permission.”

2. On Being Nice

Maaret had a slide in her talk about how we can be kind to each other even though we disagree. I remember her saying the word “nice” but she may have said “kind” and I translated that into “nice” because I believed that’s what she meant. I react to that because, as a person who believes in the importance of integrity and debate over getting along for the sake of appearances, I observe that exhortations to “be nice” or even to “be kind” are often used when people want to quash disturbing ideas and quash the people who offer them. “Be nice” is often code for “stop arguing.” If I stop arguing, much of my voice goes away. I’m not okay with that. No one who believes there is trouble in the world should be okay with that. Each of us gets to have a voice.

I make protests about things that matter to me, you make protests about things that matter to you.

I think we need a way of working together that encourages debate while fostering compassion for each other. I use the word compassion because I want to get away from ritualized command phrases like “be nice.” Compassion is a feeling that you cultivate, rather than a behavior that you conform to or simulate. Compassion is an antithesis of “Rules of Order” and other lists of commandments about courtesy. Compassion is real. Throughout my entire body of work you will find that I promote real craftsmanship over just following instructions. My concern about “niceness” is the same kind of thing.

Look at what I wrote: I said “I don’t trust nice people.” That’s a statement about my feelings and it is generally true, all things being equal. I said “I’m not nice.” Yet, I often behave in pleasant ways, so what did I mean? I meant I seek to behave authentically and compassionately, which looks like “nice” or “kind”, rather than to imagine what behavior would trick people into thinking I am “nice” when indeed I don’t like them. I’m saying people over process, folks.

I was actually not claiming that Maaret is untrustworthy because she is nice, and my words don’t say that. Rather, I was complaining about the implications of following Maaret’s dictum. I was offering an alternative: be authentic and compassionate, then “niceness” and acts of kindness will follow organically. Yes, I do have a worry that Maaret might say something nice to me and I’ll have to wonder “what does that mean? is she serious or just pretending?” Since I don’t want people to worry about whether I am being real, I just tell them “I’m not nice.” If I behave nicely it’s either because I feel genuine good will toward you or because I’m falling down on my responsibility to be honest with you. That second thing happens, but it’s a lapse. (I do try to stay out of rooms with people I don’t respect so that I am not forced to give them opinions they aren’t willing or able to process.)

I now see that my sentence “I want to be authentic and compassionate” could be seen as an independent statement connected to “how I differ from Maaret,” implying that I, unlike her, am authentic and compassionate. That was an errant construction and does not express my intent. The orange text on that line indicated my proposed policy, in the hope that I could persuade her to see it my way. It was not an attack on her. I apologize for that confusion.

3. Debate vs. Dialogue

Maaret had earlier said she doesn’t want debate, but rather dialogue. I have heard this from other Agilists and I find it disturbing. I believe this is code for “I want the freedom to push my ideas on other people without the burden of explaining or defending those ideas.” That’s appropriate for a brainstorming session, but at some point, the brainstorming is done and the judging begins. I believe debate is absolutely required for a healthy professional community. I’m guided in this by dialectical philosophy, the history of scientific progress, the history of civil rights (in fact, all of politics), and the modern adversarial justice system. Look around you. The world is full of heartfelt disagreement. Let’s deal with it. I helped create the culture of small invitational peer conferences in our industry which foster debate. We need those more than ever.

But if you don’t want to deal with it, that’s okay. All that means is that you accept that there is a wall between your friends and those other people whom you refuse to debate with. I will accept the walls if necessary but I would rather resolve the walls. That’s why I open myself and my ideas for debate in public forums.

Debate is not a process of sticking figurative needles into other people. Debate is the exchange of views with the goal of resolving our differences while being accountable for our words and actions. Debate is a learning process. I have occasionally heard from people I think are doing harm to the craft that they believe I debate for the purposes of hurting people instead of trying to find resolution. This is deeply insulting to me, and to anyone who takes his vocation seriously. What’s more, considering that these same people express the view that it’s important to be “nice,” it’s not even nice. Thus, they reveal themselves to be unable to follow their own values. I worry that “Dialogue not debate” is a slogan for just another power group trying to suppress its rivals. Beware the Niceness Gang.

I understand that debating with colleagues may not be fun. But I’m not doing it for fun. I’m doing it because it is my responsibility to build a respectable craft. All testing professionals share this responsibility. Debate serves another purpose, too: managing the boundaries between rival value systems. Through debate we may discover that we occupy completely different paradigms: schools of thought. Debate can’t bridge gaps between entirely different world views, and yet I have a right to my world view just as you have a right to yours.

Jay Philips said on Twitter:

@jamesmarcusbach pointless 2debate w/ U because in your mind you’re right. Slide &points shouldn’t have happened @JokinAspiazu @ericproegler

— Jay Philips (@jayphilips) September 30, 2016

I admire Jay. I called her and we had a satisfying conversation. I filled her in on the context and she advised me to write this post.

One thing that came up is something very important about debate: the status of ideas is not the only thing that gets modified when you debate someone; what also happens is an evolution of feelings.

Yes I think “I’m right.” I acted according to principles I think are eternal and essential to intellectual progress in society. I’m happy with those principles. But I also have compassion for the feelings of others, and those feelings may hold sway even though I may be technically right. For instance, Maaret tweeted my slide without my permission. That is copyright violation. She’s objectively “wrong” to have done that. But that is irrelevant.

[Note: Maaret points out that this is legal under the fair use doctrine. Of course, that is correct. I forgot about fair use. Of course, that doesn’t change the fact that though I may feel annoyed by her selective publishing of my work, that is irrelevant, because I support her option to do that. I don’t think it was wise or helpful for her to do that, but I wouldn’t seek to bar her from doing so. I believe in freedom to communicate, and I would like her to believe in that freedom, too.]

I accept that she felt strongly about doing that, so I [would] choose to waive my rights. I feel that people who tweet my slides, in general, are doing a service for the community. So while I appreciate copyright law, I usually feel okay about my stuff getting tweeted.

I hope that Jay got the sense that I care about her feelings. If Maaret were willing to engage with me she would find that I care about her feelings, too. This does not mean she gets whatever she wants, but it’s a factor that influences my behavior. I did offer her the chance to help me edit this post, but again, she refused.

4. Focus and Energy

Maaret said that eliminating the testing role is a good thing. I worry it will lead to the collapse of craftsmanship. She has a slide that says “from tester to team member” which is a sentiment she has expressed on Twitter that led me to say that I no longer consider her a tester. She confirmed to me that I hurt her feelings by saying that, and indeed I felt bad saying it, except that it is an extremely relevant point. What does it mean to be a tester? This is important to debate. Maaret has confirmed publicly (when I asked a question about this during her talk) that she didn’t mean to denigrate testing by dismissing the value of a testing role on projects. But I don’t agree that we can have it both ways. The testing role, I believe, is a necessary prerequisite for maintaining a healthy testing craft. My key concern is the dilution of focus and energy that would otherwise go to improving the testing craft. This is lost when the role is lost.

This is not an attack on Maaret’s morality. I am worried she is promoting too much generalism for the good of the craft, and she is worried I am promoting too much specialism. This is a matter of professional judgment and perspective. It cannot be settled, I think, but it must be aired.

The Slide Should Not Have Been Tweeted But It’s Okay That It Was

I don’t know what Maaret was trying to accomplish by tweeting my slide out of context. Suffice it to say what is right there on my slide: I believe in authenticity and compassion. If she was acting out of authenticity and compassion then more power to her. But the slide cannot be understood in isolation. People who don’t know me, or who have any axe to grind about what I do, are going to cry “what a cruel man!” My friends contacted me to find out more information.

I want you to know that the slide was one part of a bigger picture that depicts my principled objection to several matters involving another thought leader. That bigger picture is: two talks, one room, all people present for it, a lot of oratory by me explaining the slide, as well as back and forth discussion with the audience. Yes, there were people in the room who didn’t like hearing what I had to say, but “don’t offend anyone, ever” is not a rule I can live by, and neither can you. After all, I’m offended by most of the talks I attend.

Although the slide should not have been tweeted, I accept that it was, and that doing so was within the bounds of acceptable behavior. As I announced at the beginning of my talk, I don’t need anyone to make a safe space for me. Just follow your conscience.

What About My Conscience?
  • My conscience is clean. I acted out of true conviction to discuss important matters. I used a style familiar to anyone who has ever seen a public debate, or read an opinion piece in the New York Times. I didn’t set out to hurt Maaret’s feelings and I don’t want her feelings to be hurt. I want her to engage in the debate about the future of the craft and be accountable for her ideas. I don’t agree that I was presuming too much in doing so.
  • Maaret tells me that my slide was “stupid and hurtful.” I believe she and I do not share certain fundamental values about conferring. I will no longer be conferring with her, until and unless those differences are resolved.
  • Compassion is important to me. I will continue to examine whether I am feeling and showing the compassion for my fellow humans that they are due. These conversations and debates I have with colleagues help me do that.
  • I agree that making a safe space for students is important. But industry consultants and pundits should be able to cope with the full spectrum of authentic, principled reactions from their peers. Leaders are held to a higher standard, and must be ready and willing to defend their ideas in public forums.
  • The reaction on Twitter gave me good information about a possible trend toward fragility in the Twitter-facing part of the testing world. There seems to be a significant group of people who prize complete safety over the value that comes from confrontation. In the next conference I help arrange, I will set more explicit ground rules, rather than assuming people share something close to my own sense of what is reasonable to do and expect.
  • I will also start thinking, for each slide in my presentation: “What if this gets tweeted out of context?”

(Oh, and to those who compared me to Donald Trump… Can you even imagine him writing a post like this in response to criticism? BELIEVE ME, he wouldn’t.)

Categories: Blogs

I Can Manage

Hiccupps - James Thomas - Fri, 09/30/2016 - 16:16

For work reasons, I've recently become interested in resources for those new to line management. I put out an appeal for suggestions on Twitter and Managing The Unmanageable was recommended by Thomas Ponnet, with a little cautious reservation:

@qahiccupps Hope you enjoy it. I don't agree with everything but that comes with the job description. Not all translates for my context.

— Thomas Ponnet (@ThomasPonnet) August 15, 2016

This quote from the book's preface sets up the authors' intent nicely:
There is no methodology for the newly anointed development manager charged with managing, leading, guiding, and reviewing the performance of a team of programmers — often, the team he was on just days before. There are no off-the-shelf approaches. Unlike project managers, who devote hours and hours of study toward certification in their chosen career path, development managers often win their management roles primarily from having been stellar coders while displaying a modicum of people skills.

The book is long - over-long for my taste - and, rather than try to rehash the whole thing, I'll take the liberty of making an exceedingly crude precis:
  • people are all different
  • ... but there are broad classes of characteristics that it can be useful to acknowledge and look for
  • people are motivated by a relatively small set of important things
  • ... and, after a certain level is reached, salary is not usually the most important thing
  • hiring well is crucial, and can be extremely difficult
  • ... and a manager should be thinking about it even when they are not actively hiring
  • to do well, a manager needs to be organised
  • ... even more organised than you probably think
  • to command respect from a team, a manager should be able to demonstrate relevant skills
  • ... and need to know when is a good time to do that and when to step back
  • to enjoy the support of a team, a manager needs to show empathy and give protection
  • ... and that sometimes means letting them fail; but shouldn't mean setting them up to fail
  • to function well within a company a manager needs to establish relationships and communicate well
  • ... in all directions: down, up, and across, and in different media
  • a good manager will reflect on their own actions
  • ... and look to improve themselves
  • the source of a team culture is the manager
  • ... and, once established, it requires nurturing

Perhaps these things seem self-evident. Perhaps some of them are self-evident. Broadly speaking I think I'd agree with them, based on my own experience. And, in my own experience I find that I learned many of them only incrementally and some of them the hard way.

Which is where a book like this can help - it's a brain dump of wisdom from the two authors mostly, but also from a bunch of others who offer nugget-sized bites of experience such as
Managers must manage - Andy Grove

with associated commentary:

I’ve used Andy Grove’s phrase innumerable times to coach my managers and directors of programming teams. When confronted with a problem, they can’t just "raise a red flag." I’m always available when needed, but good software managers find ways to solve problems without my involvement or executive management direction.

And here's a handful of others that chimed with me:

Don’t let the day-to-day eat you up - David Dibble

David made this statement to make the point to his management team that managers have "real" work to do; that the seemingly urgent—e-mail, meetings, the routine—could easily fill a day. Only by being intentional about how we use our days can managers overcome letting that happen.

If you’re a people manager, your people are far more important than anything else you're working on - Tim Swihart

Tim notes, "If a team member drops by at an awkward time and wants to chat, set aside what you’re doing and pay attention. They may be building up the courage to tell you something big. I’ve noticed this to be especially true when the sudden chatter isn’t somebody who normally drops by for idle conversation."

Managers who use one-on-one meetings consistently find them one of the most effective and productive uses of their management time - Johanna Rothman and Esther Derby

The statement is a match for our own experience.

We have two ears and one mouth. Use them in this ratio - Kimberly Wiefling

While I love theory and can happily spend time in talking shops, dissecting semantics and splitting hairs, as my recent MEWT experience showed ...

@qahiccupps wields distinctions like a surgeon wields a scalpel #mewt

— Iain McCowatt (@imccowatt) April 9, 2016

... I also recognise the value of activity to explore, inform, test, and back up the theory. I like to think of myself, still, as a practitioner, and Managing the Unmanageable is a book written by practitioners and grounded in their practice, with examples drawn liberally from it.

It's unlikely, as Thomas Ponnet suggested, and I'd agree, to fit exactly with everything that you're doing right now with the team you have in the place you're working - especially as some of it is very specific to managing software developers. Parts of it will probably jar too. For instance, I found the suggested approach to levels of seniority too simplistic.

But what it can do is give you another perspective, or inspiration, or perhaps fire warning shots across your bow from some position not too dissimilar to yours, and rooted in the real world of managing people in technical disciplines.

Chrome OS Test Automation Lab

Testing TV - Wed, 09/28/2016 - 17:24
Chrome OS is currently shipping 60+ different Chromebook/boxes each running their own software. On the field, customers are getting a fresh system every 6 weeks. This would not be possible without a robust Continuous Integration System vetting check-ins from our 200+ developers. This talk describes the overall architecture with specific emphasis on our test automation […]

It's Complicated

Hiccupps - James Thomas - Wed, 09/28/2016 - 06:59
In a recent episode of Rationally Speaking, Samuel Arbesman, a complexity scientist, talks about complexity in technology. Here are a few quotes I particularly enjoyed.

On levels of understanding of systems:
Technology very broadly is becoming more and more complicated ... actually so complex that no one, whether you're an expert or otherwise, fully understands these things ... They have enormous number of parts that are all interacting in highly nonlinear ways that are subject to emerging phenomena. We're going to have bugs and glitches and failures. And if we think we understand these things well and we don’t, there's going to be tons of gap between how we think we understand the system and how it actually does behave.

On modelling reality with a system and then creating a model of that system:
... the world is messy and complex. Therefore, often, in order to capture all that messiness and complexity, you need a system that effectively is often of equal level of messiness and complexity ... whether or not it's explicitly including all the rules and exceptions and kind of the edge cases, or a system that learns these kinds of things in some sort of probabilistic, counterintuitive manner. It might be hard to understand all the logic in [a] machine learning system, but it still captures a lot of that messiness. I think you can see the situation where in machine learning, the learning algorithm might be fairly understandable. But then the end result ... You might be able to say, theoretically, I can step through the mathematical logic in each individual piece of the resulting system, but effectively there's no way to really understand what's going on.

On "physics" and "biological" thinking:
[Physics:] A simple set of equations explains a whole host of phenomena. So you write some equations to explain gravity, and it can explain everything from the orbits, the planets, the nature of the tides ... It has this incredibly explanatory power. It might not explain every detail, but maybe it could explain the vast majority of what's going on within a system. That's the physics. The physics thinking approach, abstracting away details, deals with some very powerful insights. [Biology:] Which is the recognition that oftentimes ... the details not only are fun and enjoyable to focus on, but they're also extremely important. They might even actually make up the majority of the kinds of behavior that the system can exhibit. Therefore, if you sweep away the details and you try to create this abstracted notion of the system, you're actually missing the majority of what is going on. Oftentimes I think people in their haste to understand technology ... because technologies are engineered things ... think of them as perhaps being more the physics thinking side of the spectrum.

On robustness:
There's this idea within complexity science ... "robust yet fragile," and the idea behind this is that a lot of these very complex systems are highly robust. They've been tested thoroughly. They had a lot of edge cases and exceptions built in and baked into the system. They're robust to an enormously large set of things but oftentimes, they're only the set of things that have been anticipated by the engineers. However, they're actually quite fragile to the unanticipated situations.

Side note: I don't think there's an attempt in this discussion to draw a distinction between complex and complicated, which some do.
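That middle point, about a transparent learning algorithm producing an opaque result, can be illustrated with a toy sketch (my own, entirely hypothetical example, not from the podcast): the update rule below is a few lines of plain arithmetic, yet the numbers it produces "work" without explaining themselves.

```python
import random

random.seed(1)

# Hypothetical toy data: five random inputs per example; the label is 1
# when the inputs sum to more than 2.5 (a rule the learner doesn't know).
inputs = [[random.random() for _ in range(5)] for _ in range(200)]
data = [(x, 1 if sum(x) > 2.5 else 0) for x in inputs]

weights = [0.0] * 5
bias = 0.0

# The entire "understandable" learning algorithm: a classic perceptron.
# Predict, compare with the target, nudge the weights towards the answer.
for _ in range(50):
    for x, target in data:
        prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
        error = target - prediction  # -1, 0 or 1
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

# The learned model: five floats and a bias. They classify well, but
# inspecting them tells you almost nothing about *why*.
print(weights, bias)
```

The rule is trivially auditable; the resulting weights are not self-explanatory, and that gap only widens as models grow from six parameters to millions.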

Five Tricky Things With Testing

Thoughts from The Test Eye - Tue, 09/27/2016 - 20:39

I went to SAST Väst Gothenburg today to hold a presentation that can be translated to something like “Five Tricky Things With Testing”. It was a very nice day, and I met old and new friends. Plus an opportunity to write the first blog post in a long time, so here is a very condensed version:

1. People don’t understand testing, but still have opinions. They see it as a cost, without considering the value.
Remedy: Discuss information needs, important stuff testing can help you know.

2. Psychologically hard. The more problems you find, the longer it will take to get finished.
Remedy: Stress the long-term, for yourself and for others.

3. You are never finished. There is always more to test, but you have to stop.
Remedy: Talk more to colleagues, perform richer testing.

4. Tacit knowledge. It is extremely rare that you can write down how to test, and good testing will follow.
Remedy: More contact of the third degree.

5. There are needs, but less money.
Remedy: Talk about testing’s value with the right words, and deliver value with small effort, not only with bugs.

Summary: Make sure you provide value with your testing, also for the sake of the testing community.


There were very good questions, including one very difficult:
How do you make sure the information reaches the ones who should get it?

Answer: For people close to you, it is not so difficult; talk about which information to report and how from the beginning. I don’t like templates, so I usually make a new template for each project, and ask if it has the right information in it.

But I guess you mean people further away, and especially if they are higher in the hierarchy this can be very difficult. It might be people you aren’t “allowed” to talk to, and you are not invited to the meetings.
One trick I have tried is to report in a spread-worthy format, meaning that it is very easy to copy and paste the essence so your words reach participants you don’t talk to.

Better answers are up to you to find for your context.


The Forgotten Agile Role – the Customer

Many Agile implementations tend to focus on the roles inside an organization – the Scrum Master, Product Owner, Business Owner, Agile Team, Development Team, etc.  These are certainly important roles in identifying and creating a valuable product or service.  However, what has happened to the Customer role?  I contend the Customer is the most important role in the Agile world.  Does it seem to be missing from many of the discussions?
While not always obvious, the Customer role should be front-and-center in all Agile methods and when working in an Agile context.  You must embrace them as your business partner with the goal of building strong customer relationships and gathering their valuable feedback.  Within an Agile enterprise, while customers should be invited to Sprint Reviews or demonstrations and provide feedback, they should really be asked to provide feedback all along the product development journey from identification of an idea to delivery of customer value.
Let's remind ourselves of the importance of the customer.  A customer is someone who has a choice on what to buy and where to buy it. By purchasing your product, a customer pays you with money to help your company stay in business.  For these factors, engaging the customer is of utmost importance.  Customers are external to the company and can provide the initial ideas and feedback to validate the ideas into working products.  Or if your customer is internal, are you treating them as part of your team and are you collecting their feedback regularly?
As you look across your Agile context, are customers one of your major Agile roles within your organization?  Are they front and center?  Are customers an integral part of your Agile practice?  Are you collecting their valuable feedback regularly?  If not, it may be time to do so.  

Magic Buttons and Code Coverage

Sustainable Test-Driven Development - Fri, 09/23/2016 - 19:29
This will be a quickie.  But sometimes good things come in small packages. This idea came to us from Amir's good friend Eran Pe'er, when he was visiting Net Objectives from his home in Israel. I'd like you to imagine something, then I'm going to ask you a question.  Once I ask the question you'll see a horizontal line of dashes.  Stop reading at that point and really try to answer the question.