
Blogs

Keep what you’ve got, by giving it all away

Yet another bloody blog - Mark Crowther - Sat, 03/28/2015 - 04:45

I remember working at an electronics manufacturer many years ago. The managing director there was an interesting guy, interesting in that his attitude was still stuck somewhere back in the 1970s.
One conversation I had with him was about training. My argument was that we should implement a simple training course, more a skills check, for the staff on the line. The idea being that we could ensure a consistent approach and spot any training needs. After all, getting staff into the business, a new start-up at that time, and then trained on the products, was a big investment. It was surely wise to ensure everyone was at around the same skill level for the core skills.
His response was straight out of the How Not to Manage handbook. Yep, “If we train them, they could leave.” As the retort goes, “… and if we don’t train them and they stay?”

This attitude has no place in the modern professional environment. It’s a small-minded and fearful attitude, one that reeks of ‘lack’ instead of ‘abundance’. It’s an attitude that, if applied to other matters, will help kill off any team, business and even your personal life. When consulting or working for an employer in the software testing field, this attitude will be your death knell.
Small Minded Consultants

I recently encountered a similar attitude again, in my current testing consultancy role.
The scenario here was a conversation with another consultant. I proposed that we collaborate on some White Papers outlining service offerings our client could deliver: how to structure the services, the tools needed, likely costs, blockers and enablers. All you’d expect in something that was essentially a business proposal.
The response from the consultant, 15+ years on from the encounter at the electronics company, was like hearing an echo. This time the complaint was that if we showed them what they could do in such detail, they wouldn’t need us; they would go off and do it themselves.
Why this is wrong, wrong

This is an attitude of fear, of small-mindedness, and a reflection that this person’s consultative mindset is way off.
First off, I absolutely believe you cannot keep everything you know secret, in an attempt to keep it rarefied, believing it will keep you employable. If it’s something new for your client, hiding it means you fail to ignite interest in what you can do, and fail to place yourself amongst those that are known to know about whatever it is you’re hiding. That means you can’t engage them and have them pay you for the work.
If you have a business, would you hide your products and services, or describe them as fully as possible to engage potential clients? Why is it any different when you’re an IT contractor, especially if you’re working for a testing consultancy? It’s not. You need to help your client understand their options and what you can deliver. Then engage them in that and bill them. By being open and giving away what you know, you have the potential to get more back.
What, not How

There is a caveat here though, a little nuance in the approach that any wise consultant will understand.
When schooling your client in whatever it is you believe they may be interested in and that you could deliver, you need to tell them WHAT it is, WHAT the benefits are, etc. You don’t want to tell them HOW. That know-how, combined with prior experience, is why you’re needed. This is the part of the game that’s understood: every company shares WHAT it has on offer; the exact how-to is why we need them afterwards.
Don’t worry about this approach. When you present something your client is interested in, explaining your prior successes and how you can help guarantee their success, of course the first person they will look to hire to do it is you. Any smart business owner will, at any rate. If they’re not smart, then presenting ideas and opportunities to them was a route to nothing anyway!

Keep an attitude of abundance, not scarcity. And share what you know, tell of the benefits and your experience, give your client options to explore; then wait for it to come back around to you.
Mark
Enjoyed this post? Say thanks by sharing, clicking an advert or checking out the links above!


Image source: cleverchameleonconcepts.com, via a Free to Share and Use image search filter on bing.com.




Clean Tests: Isolation with Fakes

Jimmy Bogard - Tue, 03/24/2015 - 17:58

Other posts in this series:

So far in this series, I’ve walked through different modes of isolation – from internal state using child containers and external state with database resets and Respawn. In my tests, I try to avoid fakes/mocks as much as possible. If I can control the state, isolating it, then I’ll leave the real implementations in my tests.

There are some edge cases in which there are dependencies that I can’t control – web services, message queues and so on. For these difficult-to-isolate dependencies, fakes are acceptable. We’re using AutoFixture to supply our mocks, and child containers to isolate any modifications. It should be fairly straightforward then to forward mocks in our container.

As far as mocking frameworks go, I try to pick the one with the simplest interface and the fewest features. More features means more headaches. For me, that would be FakeItEasy.

First, let’s look at a simple scenario of creating a mock and modifying our container.

Manual injection

We’ve got our libraries added; now we just need a way to create a fake and inject it into our child container. Since we’ve built an explicit fixture object, this is the perfect place to put our code:

public T Fake<T>()
{
    var fake = A.Fake<T>();

    Container.EjectAllInstancesOf<T>();
    Container.Inject(typeof(T), fake);

    return fake;
}

We create the fake using FakeItEasy, then inject the instance into our child container. Because we might have some existing instances configured, I use “EjectAllInstancesOf” to purge any configured instances. Once we’ve injected our fake, we can now both configure the fake and use our container to build out an instance of a root component. The code we’re trying to test is:

public class InvoiceApprover : IInvoiceApprover
{
    private readonly IApprovalService _approvalService;

    public InvoiceApprover(IApprovalService approvalService)
    {
        _approvalService = approvalService;
    }

    public void Approve(Invoice invoice)
    {
        var canBeApproved = _approvalService.CheckApproval(invoice.Id);

        if (canBeApproved)
        {
            invoice.Approve();
        }
    }
}

In our situation, the approval service is some web service that we can’t control and we’d like to stub that out. Our test now becomes:

public class InvoiceApprovalTests
{
    private readonly Invoice _invoice;

    public InvoiceApprovalTests(Invoice invoice,
        SlowTestFixture fixture)
    {
        _invoice = invoice;

        var mockService = fixture.Fake<IApprovalService>();
        A.CallTo(() => mockService.CheckApproval(invoice.Id)).Returns(true);

        var invoiceApprover = fixture.Container.GetInstance<IInvoiceApprover>();

        invoiceApprover.Approve(invoice);
        fixture.Save(invoice);
    }

    public void ShouldMarkInvoiceApproved()
    {
        _invoice.IsApproved.ShouldBe(true);
    }

    public void ShouldMarkInvoiceLocked()
    {
        _invoice.IsLocked.ShouldBe(true);
    }
}

Instead of using FakeItEasy directly, we go through our fixture instead. Once our fixture creates the fake, we can use the fixture’s child container directly to build out our root component. We configured the child container to use our fake instead of the real web service – but this is encapsulated in our test. We just grab a fake and start going.

The manual injection works fine, but we can also instruct AutoFixture to handle this a little more intelligently.

Automatic injection

We’re trying to get out of creating the fake and root component ourselves – that’s what AutoFixture is supposed to take care of, creating our fixtures. We can instead create an attribute that AutoFixture can key into:

[AttributeUsage(AttributeTargets.Parameter)]
public sealed class FakeAttribute : Attribute { }

Instead of building out the fixture items ourselves, we go back to AutoFixture supplying them, but now with our new Fake attribute:

public InvoiceApprovalTests(Invoice invoice, 
    [Fake] IApprovalService mockService,
    IInvoiceApprover invoiceApprover,
    SlowTestFixture fixture)
{
    _invoice = invoice;

    A.CallTo(() => mockService.CheckApproval(invoice.Id)).Returns(true);

    invoiceApprover.Approve(invoice);
    fixture.Save(invoice);
}

In order to build out our fake instances, we need to create a specimen builder for AutoFixture:

public class FakeBuilder : ISpecimenBuilder
{
    private readonly IContainer _container;

    public FakeBuilder(IContainer container)
    {
        _container = container;
    }

    public object Create(object request, ISpecimenContext context)
    {
        var paramInfo = request as ParameterInfo;

        if (paramInfo == null)
            return new NoSpecimen(request);

        var attr = paramInfo.GetCustomAttribute<FakeAttribute>();

        if (attr == null)
            return new NoSpecimen(request);

        var method = typeof(A)
            .GetMethod("Fake", Type.EmptyTypes)
            .MakeGenericMethod(paramInfo.ParameterType);

        var fake = method.Invoke(null, null);

        _container.Configure(cfg => cfg.For(paramInfo.ParameterType).Use(fake));

        return fake;
    }
}

It’s the same code as inside our context object’s “Fake” method, made a tiny bit more verbose since we’re dealing with type metadata. Finally, we need to register our specimen builder with AutoFixture:

public class SlowTestsCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        var contextFixture = new SlowTestFixture();

        fixture.Register(() => contextFixture);

        fixture.Customizations.Add(new FakeBuilder(contextFixture.Container));
        fixture.Customizations.Add(new ContainerBuilder(contextFixture.Container));
    }
}

We now have two options when building out fakes – manually through our context object, or automatically through AutoFixture. Either way, our fakes are completely isolated from other tests but we still build out our root components we’re testing through our container. Building out through the container forces our test to match what we’d do in production as much as possible. This cuts down on false positives/negatives.

That’s it for this series on clean tests – we looked at isolating internal and external state, using Fixie to build out how we want to structure tests, and AutoFixture to supply our inputs. At one point, I wasn’t too interested in structuring and refactoring test code. But having been on projects with lots of tests, I’ve found that tests retain their value when we put thought into their design, favor composition over inheritance, and try to keep them as tightly focused as possible (just like production code).


Significance of alwaysRun=true @Test annotation property.

Testing tools Blog - Mayank Srivastava - Tue, 03/24/2015 - 16:15
The alwaysRun=true property informs TestNG that it should run a test method even if the @Test method it depends on fails. Basically, it helps to achieve a soft dependency, the TestNG feature that helps execute test methods in order. Below is the code example: So the above code states that the system will execute the […]
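The excerpt above is truncated, but a TestNG soft dependency generally looks something like the sketch below; the class and method names here are illustrative, not taken from the original post:

import org.testng.annotations.Test;

public class SoftDependencyExample {

    @Test
    public void login() {
        // Imagine this prerequisite step failing.
        throw new RuntimeException("Login failed");
    }

    // Without alwaysRun = true, TestNG would skip logout() once login() fails.
    // With alwaysRun = true it still runs, but only after login() has been
    // attempted, which preserves the execution order (the "soft" dependency).
    @Test(dependsOnMethods = {"login"}, alwaysRun = true)
    public void logout() {
        System.out.println("logout() runs even though login() failed");
    }
}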

How to Launch Chrome browser with Selenium WebDriver?

Testing tools Blog - Mayank Srivastava - Tue, 03/24/2015 - 15:17
Before thinking about how we can run our application in the Chrome browser, download the ChromeDriver from here. Launching the Chrome browser with Selenium WebDriver takes less than a minute and three lines of code. So here we go:
System.setProperty("webdriver.chrome.driver", "D:\\Chrome_Driver\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.get("http://seleniumeasy.com/");

Facing Fears, Pairing

Agile Testing with Lisa Crispin - Tue, 03/24/2015 - 05:42
Pairing FTW

Pairing FTW!

This post benefits from a bit of context: my day job is as a tester on the Pivotal Tracker team. Tracker is a project tracking tool with an awesome API. Our API doc is awesome too. It’s full of examples which are generated from the automated regression tests that run in our CI, so they are always accurate and up to date.

Oh, and part of my day job is to do customer support for Tracker. We often hear from users who would like to get a report of cycle time for their project’s stories. Cycle time is a great metric. The way I like to look at it is, when did you actively start working on a story, and when did that story finally get accepted? Unfortunately, this information isn’t easily visible in our tool. It’s not hard to obtain it via our API, but the best way to get it is not immediately obvious either.

For a couple of years, I have wished I had an example script using our API to compute cycle time for stories that we could give customers. Sadly, I’ve had to let my Ruby skills rust away, because at work I don’t get to automate regression tests (the programmers do all that), and I spent most of the past couple of years co-writing a book.

Recently, our team had three “hack days” to do whatever we chose. One of the activities I planned was to work on this example cycle time script. My soon-to-be-erstwhile teammate Glen Ivey quickly wrote up an example script for me. But my own feeble efforts to enhance it were futile. Unfortunately, all my teammates were engaged in their own special hack days projects.

I whinged about this on Twitter, and Amitai Schlair, whom I only knew from Twitter, offered to pair with me. This was good and bad. On the one hand, awesome to have someone to pair with me! OTOH, gosh I’m terrible at Ruby now, how embarrassing to pair with anyone! I started thinking of excuses NOT to do it. However, Amitai was so kind, I faced my fear. We paired.

First we had to figure out how to communicate and screenshare. We ended up with Skype and Screenhero. Amitai introduced me to a great concept: first, write what I want to do in pseudocode. Then turn each line of that into a method. Then, start fleshing out those methods with real code. Amitai’s instinct was to do this with TDD. But due to time limitations, our approach was: write a line of code, then run the script to see what it does. We worked step by step. By the end of our session (which was no more than a couple of hours), we had gotten as far as showing the last state change for each of the stories that were pertinent to the report.

After competing priorities led us to stop our session, I identified an issue with our new code. A teammate paired with me to show me a way to debug it and we were able to fix it. The script still wasn’t finished. Luckily, my soon-to-be-erstwhile teammate Glen later paired with me to get the script to a point where it produces a report of cycle times for the most recently accepted 500 stories.

The script runs way too slow, and Glen has explained to me a better approach. I await another pairing opportunity to do this. So what’s the takeaway for you, the reader?

Pairing is hard. You fear exposing your weaknesses to another human being. You feel pressure to keep up your side. But that’s only before you actually start pairing. The truth is we all want each other to succeed. Your friends and teammates aren’t out to make you feel stupid. And pairing is so powerful. It gets you over that “hump” of fear. Two heads really are better than one.

Is there something you’d like to try, but you feel you don’t know enough? Find a pair, and go for it!


My Tests Are Killing Me

Testing TV - Mon, 03/23/2015 - 19:12
Since the early days of eXtreme Programming, tests have been touted as “executable documentation”. In practice, however, many tests fail to live up to that ideal and worse become a time sink when trying to decipher test failures. Using examples and focusing specifically on unit or developer tests, this talk examines the characteristics of readability […]

Test Strategy Checklist

Thoughts from The Test Eye - Mon, 03/23/2015 - 18:52
Ideas

In Den Lilla Svarta om Teststrategi, I included a checklist for test strategy content. It can be used in order to get inspired about what might be included in a test strategy (regardless of whether it is in your head, in a document, or said in a conversation).

I use it in two ways:

  • To double-check that I have thought about the important things in my test strategy
  • To draw out more information when I review/discuss others’ test strategies

It might be useful to you in other ways, and the English translation can be downloaded here.

Special thanks to Per Klingnäs, who told me to expand on this, and Henrik Emilsson, who continuously gives me important thoughts and feedback.


Constellation Icebreaker - Getting to know you

How do you get a group of folks to quickly learn about each other’s common interests? Consider the Constellation icebreaker technique.  Constellation is used to identify to what degree people from a group agree or disagree with certain statements (which can be based on a belief, idea, or value). 

This icebreaker is an informal way to get people to share a bit about themselves at the beginning of a training session, workshop, or when a new team is forming.  It is a non-confrontational way of learning people's opinions on a topic or statement.  Within seconds of applying this technique, the participants will clearly tell you what they think of the statement. Here’s how it works:
The set up:  
  • Identify a "center" of the room (or constellation).  This is the Sun.  The location of the Sun represents the highest degree of agreement with the statement.
  • Optionally, use masking tape or blue painter's tape to create dashed lines around the Sun in 3 feet/1 meter increments away from the Sun.
The activity: 
  • Ask everyone to stand on/around the Sun (a.k.a. the center) - don't crowd too much
  • Speak the statement, e.g., "I love the Red Sox" 
  • Ask folks to place themselves either close to or away from the Sun according to how much they agree or disagree with this statement (each person becomes a planet)
  • Once everyone has placed themselves, ask some/many/all of the folks why they have placed themselves where they are

It's a great way to learn about people fairly quickly.  Have you tried the Constellation icebreaker before?  If so, what do you think?  Do you have another icebreaker that you have found valuable for getting folks to learn about each other?

On Being a Test Charter

Hiccupps - James Thomas - Sat, 03/21/2015 - 08:53
Managing a complex set of variables, of variables that interact, of interactions that are interdependent, of interdependencies that are opaque, of opacities that are ... well, you get the idea. That can be hard. And that's just the job some days.

Investigating an apparent performance issue recently, I had variables including platform, product version, build type (debug vs release), compiler options, hardware, machine configuration, data sources and more. I was working with both our Dev and Ops teams to determine which of these seemed most likely in what combination to be able to explain the observations I'd made.

Up to my neck in a potential combinatorial explosion, it occurred to me that in order to proceed I was adopting an approach similar to the ideas behind chart parsing in linguistics. Essentially:
  • keep track of all findings to date, but don't necessarily commit to them (yet)
  • maintain multiple potentially contradictory analyses (hopefully efficiently)
  • pack all of the variables that are consistent to some level in some scenario together while looking at other factors
Some background: parsing is the process of analysing a sequence of symbols for conformance to a set of grammatical rules. You've probably come across this in the context of computer programs - when the compiler or interpreter rejects your carefully crafted code by pointing at a stupid schoolboy syntax error, it's a parser that's laughing at you.

Programming languages will generally be engineered to reduce ambiguity in their syntax in order to reduce the scope for ambiguity in the meaning of any statement. It's advantageous to a programmer if they can be reasonably certain that the compiler or interpreter will understand the same thing that they do for any given program. (And in that respect Perl haters should get a chuckle from this.)

But natural languages such as English are a whole different story. These kinds of languages are by definition not designed and much effort has been expended by linguists to create grammars that describe them. The task is difficult for several reasons, amongst which is the sheer number of possible syntactic analyses in general. And this is a decent analogy for open-ended investigations.

Here's an incomplete syntactic analysis of the simple sentence Sam saw a baby with a telescope - note that the PP node is not attached to the rest.


The parse combines words in the sentence together into structures according to grammatical rules like these, which are conceptually very similar to the kind of grammar you'll see for programming languages such as Python or in, say, the XML specs:
 NP -> DET N
 VP -> V NP
 PP -> P NP
 NP -> Det N PP
 VP -> V NP PP
 S -> NP VP
The bottom level of these structures are the grammatical category of each word in the sentence e.g. nouns (N), verbs (V), determiners such as "a" or "the" (DET) and prepositions like "in" or "with" (P).

Above this level, a noun phrase (NP) can be a determiner followed by a noun (e.g. the product) and a verb phrase (VP) can be a verb followed by a noun phrase (tested the product) and a sentence can be a noun phrase followed by a verb phrase (A tester tested the product).

The sentence we're considering is taken from a paper by Doug Arnold:
 Sam saw the baby with the telescope

In a first pass, looking only at the words, we can see that saw is ambiguous between a noun and a verb. Perhaps you'd think that because you understand the sentence it'd be easy to reject the noun interpretation, but there are similar examples with the same structure which are probably acceptable to you, such as:
 Bill Boot the gangster with the gun

So, on the basis of simple syntax alone, we probably don't want to reject anything yet - although we might assign a higher or lower weight to the possibilities. In the case of chart parsing, both are preserved in a single chart data structure which will aggregate information through the parse:
In the analogy with an exploratory investigation, this corresponds to an experimental result with multiple potential causes. We need to keep both in mind but we can prefer one over the other to some extent at any stage, and change our minds as new information is discovered.

As a parser attempts to fit some subset of its rules to a sentence there's a chance that it'll discover the same potential analyses multiple times. For efficiency reasons we'd prefer not to spend time working out that a baby is a noun phrase from first principles over and over.

The chart data structure achieves this by holding information discovered in previous passes for reuse in subsequent ones, but crucially doesn't preclude some other analysis also being found by some other rule. So, although a baby fits one rule well, another rule might say that baby with is a potential, if contradictory, analysis. Both will be available in the chart.
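To make the data structure concrete, here is a minimal, purely illustrative sketch (not from the original post) of a chart that records every category proposed for each span of words, so that contradictory analyses coexist until later evidence discards one of them:

import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Each (start, end) span keeps every analysis ever proposed for it, so
// "a baby" can be stored as an NP while an overlapping, contradictory
// grouping such as "baby with" is recorded at the same time.
public class Chart {
    private final Map<String, Set<String>> edges = new HashMap<>();

    public void add(int start, int end, String category) {
        edges.computeIfAbsent(start + "-" + end, k -> new HashSet<>()).add(category);
    }

    public Set<String> analysesFor(int start, int end) {
        return edges.getOrDefault(start + "-" + end, Collections.emptySet());
    }

    public static void main(String[] args) {
        Chart chart = new Chart();
        chart.add(2, 4, "NP");        // "a baby" as a noun phrase
        chart.add(1, 8, "VP-attach"); // "saw the baby with the telescope", PP on the VP
        chart.add(1, 8, "NP-attach"); // same span, PP attached to the baby instead
        System.out.println(chart.analysesFor(1, 8)); // both interpretations survive
    }
}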

Mapping this to testing, we might say that multiple experiments can generate data which supports a particular analysis and we should provide ourselves the opportunity to recognise when data does this, but not be side-tracked into thinking that there are not other interpretations which cross-cut one another.

In some cases of ambiguity in parsing we'll find that high-level analyses can be satisfied by multiple different lower-level analyses. Recall that the example syntactic analysis given above did not have the PP with the telescope incorporated into it. How might it fit? Well, two possible interpretations involve seeing a baby through a telescope or seeing a baby who has a telescope.

This kind of ambiguity comes from the problem of prepositional phrase attachment: which other element in the parse does the PP with the telescope modify: the seeing (so it attaches to the VP) or the baby (so NP)?

Interestingly, at the syntactic level, both of these result in a verb phrase covering the words saw the baby with the telescope and so in any candidate parse we can consider the rest of the sentence without reference to any of the internal structure below the VP. Here's a chart showing just the two VP interpretations:

You can think of this as a kind of "temporary black box" approach that can usefully reduce complexity when coping with permutations of variables in experiments.

The example sentence and grammar used here are trivial: real natural language grammars might have hundreds of rules and real-life sentences can have hundreds of potential parses. In the course of generating, running and interpreting experiments, however, we don't necessarily yet know the words in the sentence, or know that we have the complete set of rules, so there's another dimension of complexity to consider.

I've tried to restrict to simple syntax in this discussion, but other factors will come into play when determining whether or not a potential parse is plausible - knowledge of the frequencies with which particular sets of words occur in combination would be one. The same will be true in the experimental context, for example you won't always need to complete an experiment to know that the results are going to be useless because you have knowledge from elsewhere.

Also, in making this analogy I'm not suggesting that any particular chart parsing algorithm provides a useful way through the experimental complexity, although there's probably some mapping between such algorithms and ways of exploring the test space.  I am suggesting that being aware of data structures that are designed to cope with complexity can be useful when confronted with it.
Images: Doug Arnold, https://flic.kr/p/dxFxXG

Android UI Automated Testing

Google Testing Blog - Fri, 03/20/2015 - 22:51
by Mona El Mahdy

Overview

This post reviews four strategies for Android UI testing with the goal of creating UI tests that are fast, reliable, and easy to debug.

Before we begin, let’s not forget an important rule: whatever can be unit tested should be unit tested. Robolectric and Gradle unit test support are great examples of unit test frameworks for Android. UI tests, on the other hand, are used to verify that your application returns the correct UI output in response to a sequence of user actions on a device. Espresso is a great framework for running UI actions and verifications in the same process. For more details on the Espresso and UI Automator tools, please see: test support libraries.

The Google+ team has performed many iterations of UI testing. Below we discuss the lessons learned during each strategy of UI testing. Stay tuned for more posts with more details and code samples.

Strategy 1: Using an End-To-End Test as a UI Test

Let’s start with some definitions. A UI test ensures that your application returns the correct UI output in response to a sequence of user actions on a device. An end-to-end (E2E) test brings up the full system of your app including all backend servers and client app. E2E tests will guarantee that data is sent to the client app and that the entire system functions correctly.

Usually, in order to make the application UI functional, you need data from backend servers, so UI tests need to simulate the data but not necessarily the backend servers. In many cases UI tests are confused with E2E tests because E2E is very similar to manual test scenarios. However, debugging and stabilizing E2E tests is very difficult due to many variables like network flakiness, authentication against real servers, size of your system, etc.


When you use UI tests as E2E tests, you face the following problems:
  • Very large and slow tests. 
  • High flakiness rate due to timeouts and memory issues. 
  • Hard to debug/investigate failures. 
  • Authentication issues (ex: authentication from automated tests is very tricky).

Let’s see how these problems can be fixed using the following strategies.

Strategy 2: Hermetic UI Testing using Fake Servers

In this strategy, you avoid network calls and external dependencies, but you need to provide your application with data that drives the UI. Update your application to communicate with a local server rather than an external one, and create a fake local server that provides data to your application. You then need a mechanism to generate the data needed by your application. This can be done using various approaches depending on your system design. One approach is to record server responses and replay them in your fake server.
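As a rough illustration of the fake-server idea (not code from the original post), the JDK's built-in HttpServer is enough to replay a recorded response on localhost; the endpoint path and payload below are invented for the example:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A minimal fake local server: it replays a canned (previously recorded)
// response so the app under test never talks to the real backend.
public class FakeBackend {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/api/feed", exchange -> {
            byte[] body = "{\"items\":[{\"title\":\"Seeded post\"}]}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start(); // point the app under test at http://localhost:8080
    }
}

In a real Android UI test the fake server would need to be reachable from the device; this sketch only shows the record-and-replay idea.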

Once you have hermetic UI tests talking to a local fake server, you should also have server hermetic tests. This way you split your E2E test into a server side test, a client side test, and an integration test to verify that the server and client are in sync (for more details on integration tests, see the backend testing section of this blog).

Now, the client test flow looks like:


While this approach drastically reduces the test size and flakiness rate, you still need to maintain a separate fake server as well as your test. Debugging is still not easy as you have two moving parts: the test and the local server. While test stability will be largely improved by this approach, the local server will cause some flakes.

Let’s see how this could be improved...

Strategy 3: Dependency Injection Design for Apps.

To remove the additional dependency of a fake server running on Android, you should use dependency injection in your application for swapping real module implementations with fake ones. One example is Dagger, or you can create your own dependency injection mechanism if needed.

This will improve the testability of your app for both unit testing and UI testing, providing your tests with the ability to mock dependencies. In instrumentation testing, the test apk and the app under test are loaded in the same process, so the test code has runtime access to the app code. Not only that, but you can also use classpath override (the fact that the test classpath takes priority over the app under test) to override a certain class and inject test fakes there. For example, to make your test hermetic, your app should support injection of the networking implementation. During testing, the test injects a fake networking implementation into your app, and this fake implementation will provide seeded data instead of communicating with backend servers.
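A bare-bones sketch of what that injection seam can look like (the class names are invented for illustration; a framework such as Dagger would wire this up in practice):

// The app depends only on this interface.
interface NetworkClient {
    String fetchProfile(String userId);
}

// Production binding: talks to the real backend servers (call omitted here).
class HttpNetworkClient implements NetworkClient {
    @Override
    public String fetchProfile(String userId) {
        return ""; // real HTTP request would go here
    }
}

// Test binding: injected by the instrumentation test, returns seeded data
// so the UI under test never touches the network.
class FakeNetworkClient implements NetworkClient {
    @Override
    public String fetchProfile(String userId) {
        return "{\"name\":\"Seeded Test User\"}";
    }
}

// The presenter (or activity/fragment) cannot tell which implementation it
// received, which is exactly what makes the UI test hermetic.
class ProfilePresenter {
    private final NetworkClient client;
    ProfilePresenter(NetworkClient client) { this.client = client; }
    String loadProfile(String userId) { return client.fetchProfile(userId); }
}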


Strategy 4: Building Apps into Smaller Libraries

If you want to scale your app into many modules and views, and plan to add more features while maintaining stable and fast builds/tests, then you should build your app into small components/libraries. Each library should have its own UI resources and its own dependency management. This strategy not only enables mocking dependencies of your libraries for hermetic testing, but also serves as an experimentation platform for various components of your application.

Once you have small components with dependency injection support, you can build a test app for each component.

The test apps bring up the actual UI of your libraries, fake the data needed, and mock dependencies. Espresso tests will run against these test apps. This enables testing of smaller libraries in isolation.

For example, let’s consider building smaller libraries for login and settings of your app.


The settings component test now looks like:


Conclusion

UI testing can be very challenging for rich apps on Android. Here are some UI testing lessons learned on the Google+ team:
  1. Don’t write E2E tests instead of UI tests. Instead, write unit tests and integration tests alongside the UI tests. 
  2. Hermetic tests are the way to go. 
  3. Use dependency injection while designing your app. 
  4. Build your application into small libraries/modules, and test each one in isolation. You can then have a few integration tests to verify integration between components is correct. 
  5. Componentized UI tests have proven to be much faster than E2E and 99%+ stable. Fast and stable tests have proven to drastically improve developer productivity.


Live Virtual Training in User Acceptance Testing, Testing Mobile Devices, and ISTQB Foundation Level Agile Extension Certification


I am offering three courses in live virtual format. I will be the instructor on each of these. You can ask me questions along the way in each course and we will have some interactive exercises. Here is the line-up:
April 1 - 2 (Wed - Thursday) - User Acceptance Testing 1:00 p.m. - 4:00 p.m. EDT.  This is a great course to learn a structured way to approach user acceptance testing that validates systems from a real-world perspective. Register at https://www.mysoftwaretesting.com/Structured_User_Acceptance_Testing_Live_Virtual_p/uat-lv.htm
April 7 - 8 (Tuesday - Wednesday) - Testing Mobile Devices 1:00 p.m. - 4:00 p.m. EDT. Learn how to define a strategy and approach for mobile testing that fits the unique needs of your customers and technology. We will cover a wide variety of tests you can perform and you are welcome to try some of the tests on your own mobile devices during the class. Register at https://www.mysoftwaretesting.com/Testing_Mobile_Applications_Live_Virtual_Course_p/mobilelv.htm
April 14 - 16 (Tuesday - Thursday) - ISTQB Foundation Level Agile Extension Course 1:00 p.m. - 4:30 p.m. EDT. This is a follow-on to the ISTQB Foundation Level Certification. The focus is on understanding agile development and testing principles, as well as agile testing methods and tools. The course follows the ISTQB Foundation Level Extension Agile Tester Syllabus 2014. Register at https://www.mysoftwaretesting.com/ISTQB_Agile_Extension_Course_Public_Course_p/agilelv.htm
I really hope you can join me in one of these courses. If you want to have your team trained, just let me know and I'll create a custom quote for you.

How to use Evernote and Postachio for Product Documentation

The Social Tester - Thu, 03/19/2015 - 09:00

I posted before Christmas about using Evernote and MohioMap to map out the product you’re building. I’ve taken it a step further in this post, where I’ll explain how to use Evernote and Postachio for Product Documentation. So let’s...
Read more


How to compare two PDF files with ITextSharp and C#

Testing tools Blog - Mayank Srivastava - Tue, 03/17/2015 - 09:06
I have struggled a lot to compare two PDF files and display the differences. Finally I came up with an approach where I extract all the text from the PDF files, split it by lines, compare line by line and show the differences. So if you have the same kind of requirement, you can use the code below to resolve it. […]
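The excerpt cuts off before the code, but the approach it describes (extract all the text, split it into lines, compare line by line) can be sketched roughly as follows. The post itself uses iTextSharp from C#; to keep this digest's examples in one language, the sketch below uses the Java iText 5 library that iTextSharp was ported from, and the file paths are placeholders:

import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.parser.PdfTextExtractor;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Rough sketch: pull the text out of each PDF, split it into lines, then
// report the line numbers where the two documents differ.
public class PdfCompareSketch {

    static List<String> extractLines(String path) throws IOException {
        PdfReader reader = new PdfReader(path);
        StringBuilder text = new StringBuilder();
        for (int page = 1; page <= reader.getNumberOfPages(); page++) {
            text.append(PdfTextExtractor.getTextFromPage(reader, page)).append("\n");
        }
        reader.close();
        return Arrays.asList(text.toString().split("\\r?\\n"));
    }

    public static void main(String[] args) throws IOException {
        List<String> left = extractLines("expected.pdf");  // placeholder paths
        List<String> right = extractLines("actual.pdf");
        int max = Math.max(left.size(), right.size());
        for (int i = 0; i < max; i++) {
            String l = i < left.size() ? left.get(i) : "<missing>";
            String r = i < right.size() ? right.get(i) : "<missing>";
            if (!l.equals(r)) {
                System.out.println("Line " + (i + 1) + " differs:\n  " + l + "\n  " + r);
            }
        }
    }
}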

Exploratory Testing 3.0

DevelopSense Blog - Tue, 03/17/2015 - 06:09
This blog post was co-authored by James Bach and me. In the unlikely event that you don’t already read James’ blog, I recommend you go there now. The summary is that we are beginning the process of deprecating the term “exploratory testing”, and replacing it with, simply, “testing”. We’re happy to receive replies either here […]

The Thinking Tester, Evolved

Testing TV - Mon, 03/16/2015 - 20:19
Back in 2001 or so, there were some within the nascent Agile community who believed that direct collaboration between programmers and customers (or their proxies) would eventually obviate the need for professional testers. The context-driven testing community knew how unlikely that was. We are, after all, a community of critical thinkers. The argument that testers […]

Exploratory Testing 3.0

James Bach's Blog - Mon, 03/16/2015 - 18:50

[Authors’ note: Others have already made the point we make here: that exploratory testing ought to be called testing. In fact, Michael said that about tests in 2009, and James wrote a blog post in 2010 that seems to say that about testers. Aaron Hodder said it quite directly in 2011, and so did Paul Gerrard. While we have long understood and taught that all testing is exploratory (here’s an example of what James told one student, last year), we have not been ready to make the rhetorical leap away from pushing the term “exploratory testing.” Even now, we are not claiming you should NOT use the term, only that it’s time to begin assuming that testing means exploratory testing, instead of assuming that it means scripted testing that also has exploration in it to some degree.]

By James Bach and Michael Bolton

In the beginning, there was testing. No one distinguished between exploratory and scripted testing. Jerry Weinberg’s 1961 chapter about testing in his book, Computer Programming Fundamentals, depicted testing as inherently exploratory and expressed caution about formalizing it. He wrote, “It is, of course, difficult to have the machine check how well the program matches the intent of the programmer without giving a great deal of information about that intent. If we had some simple way of presenting that kind of information to the machine for checking, we might just as well have the machine do the coding. Let us not forget that complex logical operations occur through a combination of simple instructions executed by the computer and not by the computer logically deducing or inferring what is desired.”

Jerry understood the division between human work and machine work. But, then the formalizers came and confused everyone. The formalizers—starting officially in 1972 with the publication of the first testing book, Program Test Methods—focused on the forms of testing, rather than its essences. By forms, we mean words, pictures, strings of bits, data files, tables, flowcharts and other explicit forms of modeling. These are things that we can see, read, point to, move from place to place, count, store, retrieve, etc. It is tempting to look at these artifacts and say “Lo! There be testing!” But testing is not in any artifact. Testing, at the intersection of human thought processes and activities, makes use of artifacts. Artifacts of testing without the humans are like state of the art medical clinics without doctors or nurses: at best nearly useless, at worst, a danger to the innocents who try to make use of them.

We don’t blame the innovators. At that time, they were dealing with shiny new conjectures. The sky was their oyster! But formalization and mechanization soon escaped the lab. Reckless talk about “test factories” and poorly designed IEEE standards followed. Soon all “respectable” talk about testing was script-oriented. Informal testing was equated to unprofessional testing. The role of thinking, feeling, communicating humans became displaced.

James joined the fray in 1987 and tried to make sense of all this. He discovered, just by watching testing in progress, that “ad hoc” testing worked well for finding bugs and highly scripted testing did not. (Note: We don’t mean to make this discovery sound easy. It wasn’t. We do mean to say that the non-obvious truths about testing are in evidence all around us, when we put aside folklore and look carefully at how people work each day.) He began writing and speaking about his experiences. A few years into his work as a test manager, mostly while testing compilers and other developer tools, he discovered that Cem Kaner had coined a term—”exploratory testing”—to represent the opposite of scripted testing. In that original passage, just a few pages long, Cem didn’t define the term and barely described it, but he was the first to talk directly about designing tests while performing them.

Thus emerged what we, here, call ET 1.0.

(See The History of Definitions of ET for a chronological guide to our terminology.)

ET 1.0: Rebellion

Testing with and without a script are different experiences. At first, we were mostly drawn to the quality of ideas that emerged from unscripted testing. When we did ET, we found more bugs and better bugs. It just felt like better testing. We hadn’t yet discovered why this was so. Thus, the first iteration of exploratory testing (ET) as rhetoric and theory focused on escaping the straitjacket of the script and making space for that “better testing”. We were facing the attitude that “Ad hoc testing is uncontrolled and unmanageable; something you shouldn’t do.” We were pushing against that idea, and in that context ET was a special activity. So, the crusaders for ET treated it as a technique and advocated using that technique. “Put aside your scripts and look at the product! Interact with it! Find bugs!”

Most of the world still thinks of ET in this way: as a technique and a distinct activity. But we were wrong about characterizing it that way. Doing so, we now realize, marginalizes and misrepresents it. It was okay as a start, but thinking that way leads to a dead end. Many people today, even people who have written books about ET, seem to be happy with that view.

This era of ET 1.0 began to fade in 1995. At that time, there were just a handful of people in the industry actively trying to develop exploratory testing into a discipline, despite the fact that all testers unconsciously or informally pursued it, and always have. For these few people, it was not enough to leave ET in the darkness.

ET 1.5: Explication

Through the late ‘90s, a small community of testers beginning in North America (who eventually grew into the worldwide Context-Driven community, with some jumping over into the Agile testing community) was also struggling with understanding the skills and thought processes that constitute testing work in general. To do that, they pursued two major threads of investigation. One was Jerry Weinberg’s humanist approach to software engineering, combining systems thinking with family psychology. The other was Cem Kaner’s advocacy of cognitive science and Popperian critical rationalism. This work would soon cause us to refactor our notions of scripted and exploratory testing. Why? Because our understanding of the deep structures of testing itself was evolving fast.

When James joined ST Labs in 1995, he was for the first time fully engaged in developing a vision and methodology for software testing. This was when he and Cem began their fifteen-year collaboration. This was when Rapid Software Testing methodology first formed. One of the first big innovations on that path was the introduction of guideword heuristics as one practical way of joining real-time tester thinking with a comprehensive underlying model of the testing process. Lists of test techniques or documentation templates had been around for a long time, but as we developed vocabulary and cognitive models for skilled software testing in general, we started to see exploratory testing in a new light. We began to compare and contrast the important structures of scripted and exploratory testing and the relationships between them, instead of seeing them as activities that merely felt different.

In 1996, James created the first testing class called “Exploratory Testing.”  He had been exposed to design patterns thinking and had tried to incorporate that into the class. He identified testing competencies.

Note: During this period, James distinguished between exploratory and ad hoc testing—a distinction we no longer make. ET is an ad hoc process, in the dictionary sense: ad hoc means “to this; to the purpose”. He was really trying to distinguish between skilled and unskilled testing, and today we know better ways to do that. We now recognize unskilled ad hoc testing as ET, just as unskilled cooking is cooking, and unskilled dancing is dancing. The value of the label “exploratory testing” is simply that it is more descriptive of an activity that is, among other things, ad hoc.

In 1999, James was commissioned to define a formalized process of ET for Microsoft. The idea of a “formal ad hoc process” seemed paradoxical, however, and this set up a conflict which would be resolved via a series of constructive debates between James and Cem. Those debates would lead to what we here will call ET 2.0.

There was also progress on making ET more friendly to project management. In 2000, inspired by the work for Microsoft, James and Jon Bach developed “Session-Based Test Management” for a group at Hewlett-Packard. In a sense this was a generalized form of the Microsoft process, with the goal of creating a higher level of accountability around informal exploratory work. SBTM was intended to help defend exploratory work from compulsive formalizers who were used to modeling testing in terms of test cases. In one sense, SBTM was quite successful in helping people to recognize that exploratory work was entirely manageable. SBTM helped to transform attitudes from “don’t do that” to “okay, blocks of ET time are things just like test cases are things.”

By 2000, most of the testing world seemed to have heard something about exploratory testing. We were beginning to make the world safe for better testing.

ET 2.0: Integration

The era of ET 2.0 has been a long one, based on a key insight: the exploratory-scripted continuum. This is a sliding bar on which testing ranges from completely exploratory to completely scripted. All testing work falls somewhere on this scale. Having recognized this, we stopped speaking of exploratory testing as a technique, but rather as an approach that applies to techniques (or as Cem likes to say, a “style” of testing).

We could think of testing that way because, unlike ten years earlier, we now had a rich idea of the skills and elements of testing. It was no longer some “creative and mystical” act that some people are born knowing how to do “intuitively”. We saw testing as involving specific structures, models, and cognitive processes other than exploring, so we felt we could separate exploring from testing in a useful way. Much of what we had called exploratory testing in the early 90’s we now began to call “freestyle exploratory testing.”

By 2006, we settled into a simple definition of ET: simultaneous learning, test design, and test execution. To help push the field forward, James and Cem convened a meeting called the Exploratory Testing Research Summit in January 2006. (The participants were James Bach, Jonathan Bach, Scott Barber, Michael Bolton, Elisabeth Hendrickson, Cem Kaner, Mike Kelly, Jonathan Kohl, James Lyndsay, and Rob Sabourin.) As we prepared for that, we made a disturbing discovery: every single participant in the summit agreed with the definition of ET, but few of us agreed on what the definition actually meant. This is a phenomenon we had no name for at the time, but is now called shallow agreement in the CDT community. To combat shallow agreement and promote better understanding of ET, some of us decided to adopt a more evocative and descriptive definition of it, proposed originally by Cem and later edited by several others: “a style of testing that emphasizes the freedom and responsibility of the individual tester to continually optimize the quality of his work by treating test design, test execution, test result interpretation, and learning as mutually supporting activities that continue in parallel throughout the course of the project.” Independently of each other, Jon Bach and Michael had suggested the “freedom and responsibility” part to that definition.

And so we had come to a specific and nuanced idea of exploration and its role in testing. Exploration can mean many things: searching a space, being creative, working without a map, doing things no one has done before, confronting complexity, acting spontaneously, etc. With the advent of the continuum concept (which James’ brother Jon actually called the “tester freedom scale”) and the discussions at the ExTRS peer conference, we realized most of those different notions of exploration are already central to testing, in general. What the adjective “exploratory” added, and how it contrasted with “scripted,” was the dimension of agency. In other words: self-directedness.

The full implications of the new definition became clear in the years that followed, and James and Michael taught and consulted in Rapid Software Testing methodology. We now recognize that by “exploratory testing”, we had been trying to refer to rich, competent testing that is self-directed. In other words, in all respects other than agency, skilled exploratory testing is not distinguishable from skilled scripted testing. Only agency matters, not documentation, nor deliberation, nor elapsed time, nor tools, nor conscious intent. You can be doing scripted testing without any scrap of paper nearby (scripted testing does not require that you follow a literal script). You can be doing scripted testing that has not been in any way pre-planned (someone else may be telling you what to do in real-time as they think of ideas). You can be doing scripted testing at a moment’s notice (someone might have just handed you a script, or you might have just developed one yourself). You can be doing scripted testing with or without tools (tools make testing different, but not necessarily more scripted). You can be doing scripted testing even unconsciously (perhaps you feel you are making free choices, but your models and habits have made an invisible prison for you). The essence of scripted testing is that the tester is not in control, but rather is being controlled by some other agent or process. This one simple, vital idea took us years to apprehend!

In those years we worked further on our notions of the special skills of exploratory testing. James and Jon Bach created the Exploratory Skills and Tactics reference sheet to bring specificity and detail to answer the question “what specifically is exploratory about exploratory testing?”

In 2007, another big slow leap was about to happen. It started small: inspired in part by a book called The Shape of Actions, James began distinguishing between processes that required human judgment and wisdom and those which did not. He called them “sapient” vs. “non-sapient.” This represented a new frontier for us: systematic study and development of tacit knowledge.

In 2009, Michael followed that up by distinguishing between testing and checking. Testing cannot be automated, but checking can be completely automated. Checking is embedded within testing. At first, James objected that, since there was already a concept of sapient testing, the distinction was unnecessary. To him, checking was simply non-sapient testing. But after a few years of applying these ideas in our consulting and training, we came to realize (as neither of us did at first) that checking and testing was a better way to think and speak than sapience and non-sapience. This is because “non-sapience” sounds like “stupid” and therefore it sounded like we were condemning checking by calling it non-sapient.

Do you notice how fine distinctions of language and thought can take years to work out? These ideas are the tools we need to sort out our practical decisions. Yet much like new drugs on the market, it can sometimes take a lot of experience to understand not only benefits, but also potentially harmful side effects of our ideas and terms. That may explain why those of us who’ve been working in the craft a long time are not always patient with colleagues or clients who shrug and tell us that “it’s just semantics.” It is our experience that semantics like these mean the difference between clear communication that motivates action and discipline, and fragile folklore that gets displaced by the next swarm of buzzwords to capture the fancy of management.

ET 3.0: Normalization

In 2011, sociologist Harry Collins began to change everything for us. It started when Michael read Tacit and Explicit Knowledge. We were quickly hooked on Harry’s clear writing and brilliant insight. He had spent many years studying scientists in action, and his ideas about the way science works fit perfectly with what we see in the testing field.

By studying the work of Harry and his colleagues, we learned how to talk about the difference between tacit and explicit knowledge, which allows us to recognize what can and cannot be encoded in a script or other artifacts. He distinguished between behaviour (the observable, describable aspects of an activity) and actions (behaviours with intention) (which had inspired James’ distinction between sapient and non-sapient testing). He untangled the differences between mimeomorphic actions (actions that we want to copy and to perform in the same way every time) and polimorphic actions (actions that we must vary in order to deal with social conditions); in doing that, he helped to identify the extents and limits of automation’s power. He wrote a book (with Trevor Pinch) about how scientific knowledge is constructed; another (with Rob Evans) about expertise; yet another about how scientists decide to evaluate a specific experimental result.

Harry’s work helped lend structure to other ideas that we had gathered along the way.

  • McLuhan’s ideas about media and tools
  • Karl Weick’s work on sensemaking
  • Venkatesh Rao’s notions of tempo which in turn pointed us towards James C. Scott’s notion of legibility
  • The realization (brought to our attention by an innocent question from a tester at Barclays Bank) that the “exploratory-scripted continuum” is actually the “formality continuum.” In other words, to formalize an activity means to make it more scripted.
  • The realization of the important difference between spontaneous and deliberative testing, which is the degree of reflection that the tester is exercising. (This is not the same as exploratory vs. scripted, which is about the degree of agency.)
  • The concept of “responsible tester” (defined as a tester who takes full, personal, responsibility for the quality of his work).
  • The advent of the vital distinction between checking and testing, which replaced the need to talk about “sapience” in our rhetoric of testing.
  • The subsequent redefinition of the term “testing” within the Rapid Software Testing namespace to make these things more explicit (see below).

About That Last Bullet Point

ET 3.0 as a term is a bit paradoxical because what we are working toward, within the Rapid Software Testing methodology, is nothing less than the deprecation of the term “exploratory testing.”

Yes, we are retiring that term, after 22 years. Why?

Because we now define all testing as exploratory.  Our definition of testing is now this:

“Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modeling, observation and inference, output checking, etc.”

Where does scripted testing fit, then?  By “script” we are speaking of any control system or factor that influences your testing and lies outside of your realm of choice (even temporarily). This does not refer only to specific instructions you are given and that you must follow. Your biases script you. Your ignorance scripts you. Your organization’s culture scripts you. The choices you make and never revisit script you.

By defining testing to be exploratory, scripting becomes a guest in the house of our craft; a potentially useful but foreign element to testing, one that is interesting to talk about and apply as a tactic in specific situations. An excellent tester should not be complacent or dismissive about scripting, any more than a lumberjack can be complacent or dismissive about heavy equipment. This stuff can help you or ruin you, but no serious professional can ignore it.

Are you doing testing? Then you are already doing exploratory testing. Are you doing scripted testing? If you’re doing it responsibly, you are doing exploratory testing with scripting (and perhaps with checking).  If you’re only doing “scripted testing,” then you are just doing unmotivated checking, and we would say that you are not really testing. You are trying to behave like a machine, not a responsible tester.

ET 3.0, in a sentence, is the demotion of scripting to a technique, and the promotion of exploratory testing to, simply, testing.

Categories: Blogs

History of Definitions of ET

James Bach's Blog - Mon, 03/16/2015 - 17:20

History of the term “Exploratory Testing” as applied to software testing within the Rapid Software Testing methodology space.

For a discussion of some of the social and philosophical issues surrounding this chronology, see Exploratory Testing 3.0.

1988: First known use of the term, defined variously as “quick tests”; “whatever comes to mind”; “guerrilla raids” – Cem Kaner, Testing Computer Software. (There is explanatory text for different styles of ET in the 1988 edition of Testing Computer Software. Cem says that some of the text was actually written in 1983.)

1990: “Organic Quality Assurance”, James Bach’s first talk on agile testing, filmed by Apple Computer, which discussed exploratory testing without using the words agile or exploratory.

1993 (June): “Persistence of Ad Hoc Testing” talk given at the ICST conference by James Bach. Beginning of James’ abortive attempt to rehabilitate the term “ad hoc.”

1995 (February): First appearance of “exploratory testing” on Usenet in a message by Cem Kaner.

1995: Exploratory testing means learning, planning, and testing all at the same time. – James Bach (Market Driven Software Testing class)

1996: Simultaneous exploring, planning, and testing. – James Bach (Exploratory Testing class v1.0)

1999: An interactive process of concurrent product exploration, test design, and test execution. – James Bach (Exploratory Testing class v2.0)

2001 (post WHET #1):

The Bach View

Any testing to the extent that the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.

The Kaner View

Any testing to the extent that the tester actively controls the design of the tests as those tests are performed, uses information gained while testing to design new and better tests, and where the following conditions apply:

  • The tester is not required to use or follow any particular test materials or procedures.
  • The tester is not required to produce materials or procedures that enable test re-use by another tester or management review of the details of the work done.

– Resolution between Bach and Kaner following WHET #1 and BBST class at Satisfice Tech Center.

(To account for both views, James started speaking of the “scripted/exploratory continuum”, which has greatly helped in explaining ET to factory-style testers.)

2003-2006: Simultaneous learning, test design, and test execution. – Bach, Kaner

2006-2015: An approach to software testing that emphasizes the personal freedom and responsibility of each tester to continually optimize the value of his work by treating learning, test design and test execution as mutually supportive activities that run in parallel throughout the project. – (Bach/Bolton edit of Kaner suggestion)

2015: Exploratory testing is now a deprecated term within Rapid Software Testing methodology. See testing, instead. (In other words, all testing is exploratory to some degree. The definition of testing in the RST space is now: Evaluating a product by learning about it through exploration and experimentation including to some degree: questioning, study, modeling, observation, inference, etc.)

 

Categories: Blogs

Updated haiku

I’m surprised at how relevant it all still is, but I have added something I think was missing, based on what I’ve learned in my last couple of roles. See my ‘Essence of Agile’ Haiku for the update.

Categories: Blogs

Starting with the Mutual Learning Model

Thought Nursery - Jeffrey Fredrick - Mon, 03/16/2015 - 01:53

On February 25th we held the first meeting of the London Action Science Meetup, Starting with the Mutual Learning Model. My goal for this session was to describe Action Science as a discipline, introduce key concepts, and illustrate some of those concepts through a hands-on exercise. I’m going to cover the same ground in this blog post, including the exercise. I’d love to get your feedback in the comments, especially your experiences with the exercise.

First, what is Action Science anyway? According to the book (see Preface) it is “a science that can generate knowledge that is useful, valid, descriptive of the world, and informative of how we might change it.” More simply, it is the science of effective action in organizations. But this is no ivory tower science, remote and theoretical. Chris Argyris as scientist was also an interventionist. To practice action science is to learn how to be more effective, to help others to be more effective, and to increase the opportunities for organizational learning.

For my introduction to Action Science we began with the Mutual Learning Model as described in Eight Behaviors for Smarter Teams. This white paper from Roger Schwarz has been my go-to introduction for people interested in improving the relationships in their teams. Formerly called Ground Rules for Effective Teams, it provides a Shu-level set of behaviors that you can use to have better, more productive conversations. It also provides the motivating mindset that must be present, the mutual learning mindset, and the core values that go with it:

  • Transparency
  • Curiosity
  • Informed Choice
  • Accountability
  • Compassion

The mutual learning mindset and the accompanying values are easy to espouse. Who doesn’t want to learn? But one of the key areas of exploration in Action Science is the gap between what people claim to value — Espoused Theory — and the values that can be inferred from their actual behavior — Theory in Use.

While effective behavior starts with the theory-in-use of the mutual learning mindset (what Argyris called Model II), there’s a common default action strategy that is used instead, the Unilateral Control Model (what Argyris termed Model I). In contrast to the mutual learning model, the governing values of the unilateral control model are:

  • Achieve the purpose as I define it
  • Win, don’t lose
  • Suppress negative feelings
  • Emphasize rationality

Read these lists again; test these values against your own experience; reflect on your own behavior… Heading into a meeting where you think there’s an important decision to be made, where there’s something of significance on the line, how do you behave? If you are like me, and according to Argyris if you are like virtually everyone, you will act consistently with the unilateral control model. You will believe your understanding of the situation is correct, you will believe the inferences you make are reality, and you will act so that “the right thing” gets done.

“Well… maybe. But so what?”, I hear you ask. The implication of the unilateral control model is limited learning; I am listening only to formulate my response, only speaking to persuade. It is defensive relationships; you know that I’m acting to get my way. It is reduced opportunity for double-loop learning; we are unlikely to question our norms, goals and values when we are struggling over whose strategy to use. Now perhaps in your context these implications don’t matter. But in teams that want to excel, companies that want to innovate, organizations looking to evolve, we can’t afford to lose learning opportunities. And on a more personal level the defensiveness engendered by the unilateral control model is ultimately unpleasant to live with.

“Okay, I’m convinced! Are we done?” Actually, now we are ready to try that hands-on exercise I mentioned at the start. It is a variation of the two-column case study, a tool for exploring and understanding our behavior in stressful conversations, the kinds where it is difficult to produce mutual learning behavior. To perform the exercise you’ll need three items: a full page of lined paper (A4 / 8.5 x 11) and two pens, preferably blue and red. You’ll also need 15-20 minutes and a desire to learn more about yourself. Do you have everything you need to begin?

Prepare by folding the paper in half vertically so that you’ve got two lined columns, one on either half of the paper. Now think of a difficult conversation you’ve recently had, are expecting, or have been putting off. At the top of the page, on the left hand of the fold, write (in blue pen) a sentence describing what you would desire out of that conversation, and on the right, what actually happened or what you fear would happen. Then in the right hand column write out the dialog as you remember or imagine it. (If this was an actual conversation, don’t worry about remembering exact wording; for this exercise the sentiment as you remember it is more valuable.) There isn’t much room on the half-page, so this process should only take 5-10 minutes and capture the essence of the exchange. Take the time to write out the dialog now, on the right hand side, before proceeding.

Once you’ve captured the key dialog on the right hand of the paper turn your attention to the left hand column. In this column write down what you expect you’d be thinking and feeling during the conversation. This could be what you are thinking as you formulate your words, this could be your feelings in reaction to theirs. This will likely take less time than the right hand column, only 3-5 minutes. Take the time to fill in the left hand column with your thoughts and feelings before proceeding.

Red pen time. Take a read through your dialog and circle in red every question you asked, every occurrence of the question mark. Write this count at the top of the page on the right hand side as the denominator of a fraction. Next, review all those circled questions and count all the genuine questions, every question asked with a real interest in learning something from the answer. A genuine question is not a statement in disguise: “If the team feel they need the time isn’t that good enough?” When you’ve counted the genuine questions, write that number at the top of the page as the numerator of your fraction.

Red pen time, part two. Look through the left hand column and circle every thought or feeling that did not appear in the right hand column, that is, the thoughts and feelings that you did not express. Write a fraction at the top of the page which is the number of unexpressed thoughts/feelings over the total number.
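If it helps to see the scoring spelled out, the two red-pen tallies reduce to simple ratios. Here is a minimal sketch in Python, assuming you have transcribed your case into two lists; the field names and sample entries are mine, purely for illustration, and not part of the original exercise.

```python
# Two-column case scoring: the fraction of genuine questions (right column)
# and the fraction of unexpressed thoughts/feelings (left column).

from fractions import Fraction

# Right-hand column: each question you asked, flagged genuine or not.
questions = [
    {"text": "If the team feel they need the time isn't that good enough?", "genuine": False},
    {"text": "What led you to that estimate?", "genuine": True},
]

# Left-hand column: each thought/feeling, flagged whether you expressed it.
thoughts = [
    {"text": "This plan will never work.", "expressed": False},
    {"text": "I'm worried we'll miss the release.", "expressed": False},
    {"text": "I want them to commit to a date.", "expressed": True},
]

genuine_ratio = Fraction(sum(q["genuine"] for q in questions), len(questions))
unexpressed_ratio = Fraction(sum(not t["expressed"] for t in thoughts), len(thoughts))

print(f"genuine questions: {genuine_ratio}")         # e.g. 1/2
print(f"unexpressed thoughts: {unexpressed_ratio}")  # e.g. 2/3
```

The point is not the arithmetic but what the two fractions expose: few genuine questions and many unexpressed thoughts are the visible signs of the gap between espoused theory and theory-in-use discussed below.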

How did you do? In our meetup session the total number of questions was low (0-2) and the number of genuine questions even lower (0-1); very little red ink on the right hand column. The left hand column, by contrast, looked like it was bled upon. Most of the thoughts and feelings were not expressed. Now consider those lists of values again and compare them to your marks in red. If we espouse curiosity, why do we ask so few genuine questions? If we espouse transparency, why aren’t we sharing our thoughts and feelings? This gap between our espoused theory and our theory in use was clearly exhibited by the exercise in the meetup. Being aware of that gap is required to start with the mutual learning model.

What’s next? Experience and reflection have shown me that while I believe the mutual learning mindset is better, it takes effort, real mindful willful effort, for me to produce it. But unilateral control behavior? I can produce that effortlessly! I know how to persuade, how to influence, how to build consensus. I know when to press my point and when to back off to avoid conflict. That sounds good, but if our intent is to learn, this instinct leads me in the wrong direction. It is a symptom of what Argyris terms Skilled Incompetence. The good news is that we can unlearn our incompetence and learn to redesign our conversations. Starting down that road is where we will head in the next Action Science meetup, March 25th. Hope to see you there!

Categories: Blogs