
Blogs

Very Short Blog Posts (22): “That wouldn’t be practical”

DevelopSense Blog - Sun, 01/25/2015 - 01:46
I have this conversation rather often. A test manager asks, “We’ve got a development project coming up that is expected to take six months. How do I provide an estimate for how long it will take to test it?” My answer would be “Six months.” Testing begins as soon as someone has an idea for […]

A new level of testing?

Yesterday I saw this awesome video of Lars Andersen: a new level of archery. It is going viral on the web, having been watched over 11 million times within 48 hours. Now watch this movie carefully…

The first time I watched this movie, I was impressed. Having tried archery several times, I know how hard it is to do. Remember Legolas from the Lord of the Rings movies? I thought that was “only” a movie and his shooting speed was greatly exaggerated. But it turns out Lars Andersen is faster than Legolas. My colleague Sander sent me an email about the movie I had just watched, saying it was an excellent example of craftsmanship, something we had been discussing earlier this week. So I watched the movie again…

Also read what Lars has to say in the comments on YouTube and make sure you read his press release.

This movie exemplifies the importance of practice and skills! It explains archery the way a context-driven tester would explain his testing…

0:06 These skills have long since been forgotten. But master archer Lars Andersen is trying to reinvent what has been lost…

Skills are the backbone of everything done well, so in testing skills are essential too. I’ll come back to that later on. The word reinvent triggers me as well: every tester should reinvent his own testing. Only by going very deep, understanding every single bit, and practicing again and again will you truly know how to be an excellent tester.

0:32 This is the best type of shooting and there is nothing beyond it in power or accuracy. Using this technique Andersen set several speed shooting records, and he shoots more than twice as fast as his closest competitors…

Excellent testers are faster and better. Last week I heard professor Chris Verhoef speak about skills in IT; he mentioned that he has seen a factor of 200 difference in productivity between excellent programmers and bad programmers (whom he called “Timber Smurf”, “Knutselsmurf” in Dutch).

0:42 … being able to shoot fast is only one of the benefits of the method

Faster testing! Isn’t that what we are after?

0:55 Surprisingly the quiver turned out to be useless when it comes to moving fast. The back quiver was a Hollywood Myth…

The back quiver is a Hollywood myth. It looks cool and may seem handy at first sight, since you can put a lot of arrows in it. Doesn’t this sound like certificates and document-heavy test approaches? The certificates look good on your resume and the artifacts seem convenient for structuring your testing… but they turn out to be worthless when it comes to testing fast.

1:03 Why? Because modern archers do not move. They stand still firing at a target board.

I see a parallel here with old-school testing: testers had a lot of time to prepare in waterfall projects. The basic assumption was that the target wasn’t moving, so it was like shooting at a target board. Although the target always proved to be moving, the test methods were designed for target boards.

1:27 Placing the arrow left around the bow is not good while you are in motion. By placing your hand on the left side, your hand is on the wrong side of the string. So you need several movements before you can actually shoot…

Making a ton of documentation before starting to test is like needing several movements before you can actually shoot.

1:35 From studying old pictures of archers, Lars discovered that some historical archers held their arrow on the right side of the bow. This means that the arrow can be fired in one single motion. Both faster and better!

Research and study are what many in testing lack. There is much we can learn from the past, but also from social science, measurement, the design of experiments, etc.

1:56 If he wanted to learn to shoot like the master archers of old, he had to unlearn what he had learned…

Learning new things, learning how to use heuristics, and training real skills require testers to unlearn merely APPLYING techniques.

2:07 Back then archery was simpler and more natural, exactly like throwing a ball. In essence it made archery as simple as possible. It’s harder to learn to shoot this way, but it gives more options and ultimately it is also more fun.

It is hard to learn, and it takes a lot of practice, to do things in the most efficient and effective way. Context-driven testing sounds difficult, but in essence it makes testing as simple as possible. That makes it harder to learn, because it removes all the methodical scaffolding that slows us down. Instrumental approaches that try to put everything in a recipe, so it can be applied by people who do not want to practice, make testing slow and ineffective.

2:21 A war archer must have total control over his bow in all situations! He must be able to handle his bow and arrows in a controlled way, under the most varied of circumstances.

Lesson 272 in the book Lessons Learned in Software Testing: “If you can get a black belt in only two weeks, avoid fights.” You have to learn and practice a lot to have total control! That is what we mean by excellent testing: being able to test in a controlled way, under the most varied of circumstances. Doesn’t that sound like Rapid Software Testing? RST is the skill of testing any software, any time, under any conditions, such that your work stands up to scrutiny. This is how RST differs from normal software testing.

2:36 … master archers can shoot the bow with both hands. And still hit the target. So he began practicing…

Being able to do the same thing in different ways is a big advantage. In testing, too, we should learn to test in as many different ways as possible.

3:15 Perhaps more importantly: modern slow archery has led people to believe that war archers only shot at long distances. However, Lars found that they can shoot at any distance. Even up close. This does require the ability to fire fast though.

Modern slow testing has led people to believe that professional testers always need test cases. However, some testers found that they could work without heavyweight test documentation and test cases, even on very complex or critical systems, and even in regulated environments. This does require the ability to test fast, though.

3:34 In the beginning archers probably drew arrows from quivers or belts. But since then they started holding the arrows in the bow hand, and later in the draw hand. Taking it to this third level, that of holding the arrows in the draw hand, requires immense practice and skill, and only professional archers, hunters and so on would have had the time for it. … The only reason Lars is able to do it is that he has spent years practicing intensely.

Practice, practice, practice. This really makes the difference. I hear people say that context-driven testing is not for everybody, that we have to accept that some testing professionals only want to work 9 to 5. This makes me mad!

I think professional excellence can and should be for everyone! And sure, you need to put a lot of work into it! Compare it to football (or anything else you want to be good at: solving crossword puzzles, drawing, chess or… archery). It takes a lot of practice to play football in the Premiership or the Champions League. I am convinced that anyone can become a professional football player, but it doesn’t come easily. It demands a lot of effort in learning, drive (intrinsic motivation, passion), the right mindset, and choosing the right mentors and teachers. Talent helps, and perhaps you need some talent to be the very best, like Lionel Messi… but dedication, learning and practice will take you a long way. We are professionals! As for the subset of testers who do not want to practice and work hard: in football they would soon end up on the bench, not get a new contract, and disappear to the amateurs.

4:06 The hard part is not how to hold the arrows, but learning how to handle them properly. And to draw and fire in one single motion, no matter what method is used.

Diversity has been key in context-driven testing for many years. As testers we need to learn how to properly use many different skills, approaches, techniques, heuristics…

4:12 It works in all positions and while in motion…

… so we can use them in all situations, even when we are under great pressure and have to deal with huge complexity, confusion, changes, new insights and half answers.

5:17 While speed is important, hitting the target is essential.

Fast testing is great, but doing the right thing, hitting the target, is essential. Context-driven testers know how to analyze and model their context to determine what problem needs to be solved. Knowing the context is essential to doing the right things: discovering the status of the product, and any threats to its value, effectively, so that ultimately our clients can make informed decisions about it. Context analysis and modelling are among the essential skills for testers!

There are probably more parallels to testing. Please let me know if you see any more.

 

“Many people have accused me of being fake or have theories on how there’s cheating involved. I’ve always found it fascinating how human it is, to want to disbelieve anything that goes against our world view – even when it’s about something as relatively neutral as archery.” (Lars Andersen)

The Wrought Idea

Hiccupps - James Thomas - Sat, 01/24/2015 - 11:18

So the other day I bleeted about how I like to write to help me collect my thoughts and how that feels like a dialogue through the page.

Somewhat ironically, you might think, I hadn't intended that action to be more than jotting down the realisation I'd just had.  But, of course, as soon as it was out there I began to challenge it, and by proxy myself.

Here's a sample:
  • "When I need to think through an issue, I write." Really? Always?
  • Does getting the ideas down free mental resource for inspection of the ideas? 
  • Does making it concrete mean that it's easier to spot inconsistency? I know people who are adept at maintaining multiple views of a thing. When a different angle of attack is used a different kind of defence is made. The defences are not compatible, but because they are never seen together, this can be overlooked.
  • Why didn't I talk about pictures? I draw a lot too.
  • I recalled that James Lyndsay mentioned the other day that he makes a point of writing down his hypotheses during exploratory testing. If he fails to do that he feels he does a worse job.
  • What about giving some examples - could I make a draft, list the challenges, show the new draft and repeat?
  • I just read a great piece on George Carlin where he says "So I’m drawn to something and start writing about it ... and that’s when the real ideas pounce out, and new ideas, and new thoughts and images, and then bing, ba-bam ba-boom, that’s the creative part."
  • Haven't I been in this area before?
And so I write and right until my thought is wrought.

Image: https://flic.kr/p/aNMhL4

Lies, Damned Lies, and Code Coverage

Sustainable Test-Driven Development - Wed, 01/21/2015 - 20:58
Download the Podcast

As unit testing has gained a strong foothold in many development organizations, many teams are now laboring under a code coverage requirement. Typically 75%–80% of the code must be covered by unit tests. Most popular Integrated Development Environments (IDEs) include tools for measuring this percentage, often as part of their testing framework. Let’s ask a […]

Free Software Tests Are Better Than Free Bananas

Testing TV - Wed, 01/21/2015 - 18:39
There is growing interest in leveraging data mining and machine learning techniques in the analysis, maintenance and testing of software systems. This talk discusses how Google uses such techniques to automatically mine system invariants, use those invariants to monitor its systems in real time, and alert engineers to potential production problems within minutes. The talk […]

Writing up a storm

Agile Testing with Lisa Crispin - Wed, 01/21/2015 - 16:08

Since publishing More Agile Testing with Janet Gregory, I’ve enjoyed time for writing new articles and participating in interviews. Please see my Articles page for links to these. I’d love to hear your feedback on any of these. Have you tried any of the practices or ideas discussed in the articles or interviews?


TDD and Defects

Sustainable Test-Driven Development - Tue, 01/20/2015 - 23:56
We've said all along that TDD is not really about "testing" but rather about creating an executable form of specification that drives development forward.  This is true, and important, but it does not mean that TDD does not have a relationship to testing.  One interesting issue where there is significant synergy is in our relationship to defects. Two important issues we'll focus on are: when/how

Integrating MediatR with Web API

Jimmy Bogard - Tue, 01/20/2015 - 19:25

One of the design goals I had in mind with MediatR was to limit the 3rd party dependencies (and work) needed to integrate MediatR. To do so, I only take a dependency on CommonServiceLocator. In MediatR, I need to resolve instances of request/notification handlers. Rather than build my own factory class that others would need to implement, I lean on CSL to define this interface:

public interface IServiceLocator : IServiceProvider
{
    object GetInstance(Type serviceType);
    object GetInstance(Type serviceType, string key);
    IEnumerable<object> GetAllInstances(Type serviceType);
    TService GetInstance<TService>();
    TService GetInstance<TService>(string key);
    IEnumerable<TService> GetAllInstances<TService>();
}

But that wasn’t quite enough. I also wanted to support child/nested containers, which meant I didn’t want a single instance of the IServiceLocator. Typically, when you want a component’s lifetime decided by a consumer, you depend on Func<Foo>. It turns out though that CSL already defines a delegate to provide a service locator, aptly named ServiceLocatorProvider:

public delegate IServiceLocator ServiceLocatorProvider();

In resolving handlers, I execute the delegate to get an instance of an IServiceLocator and off we go. I much prefer this approach to defining my own yet-another-factory-interface for people to implement. Just not worth it. As a consumer, you will need to supply this delegate to the mediator.
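
To make that concrete, here is a minimal sketch of the resolution path (my illustration of the idea, not MediatR’s verbatim source; MediatorSketch and GetHandler are hypothetical names):

public class MediatorSketch
{
    private readonly ServiceLocatorProvider _serviceLocatorProvider;

    public MediatorSketch(ServiceLocatorProvider serviceLocatorProvider)
    {
        _serviceLocatorProvider = serviceLocatorProvider;
    }

    private THandler GetHandler<THandler>()
    {
        // Execute the delegate to get the current IServiceLocator, then
        // resolve the handler from whatever container sits behind it.
        IServiceLocator locator = _serviceLocatorProvider();
        return locator.GetInstance<THandler>();
    }
}

Because the delegate is executed on each resolution, a child/nested container created later can supply handlers scoped to its own lifetime, which is exactly the point of not holding a single IServiceLocator instance.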

I’ll show an example using StructureMap. The first thing I do is add a NuGet dependency to the Web API IoC shim for StructureMap:

Install-Package StructureMap.WebApi2

This will also bring in the CommonServiceLocator dependency and some files to shim with Web API:

[screenshot: files added to the project by the StructureMap.WebApi2 package]

I have the basic building blocks for what I need in order to have a Web API project using StructureMap. The next piece is to configure the DefaultRegistry to include handlers in scanning:

public DefaultRegistry() {
    Scan(
        scan => {
            scan.TheCallingAssembly();
            scan.AssemblyContainingType<PingHandler>();
            scan.WithDefaultConventions();
            scan.With(new ControllerConvention());
            scan.AddAllTypesOf(typeof(IRequestHandler<,>));
            scan.AddAllTypesOf(typeof(IAsyncRequestHandler<,>));
            scan.AddAllTypesOf(typeof(INotificationHandler<>));
            scan.AddAllTypesOf(typeof(IAsyncNotificationHandler<>));
        });
    For<IMediator>().Use<Mediator>();
}

This is pretty much the same code you’d find in any of the samples in the MediatR project. The final piece is to hook up the dependency resolver delegate, ServiceLocatorProvider. Since most/all containers have implementations of the IServiceLocator, it’s really about finding the place where the underlying code creates one of these IServiceLocator implementations and supplies it to the infrastructure. In my case, there’s the Web API IDependencyResolver implementation:

public IDependencyScope BeginScope()
{
    IContainer child = this.Container.GetNestedContainer();
    return new StructureMapWebApiDependencyResolver(child);
}

I modify this to use the current nested container and attach the resolver to this:

public IDependencyScope BeginScope()
{
    var resolver = new StructureMapWebApiDependencyResolver(CurrentNestedContainer);

    ServiceLocatorProvider provider = () => resolver;

    CurrentNestedContainer.Configure(cfg => cfg.For<ServiceLocatorProvider>().Use(provider));
    
    return resolver;
}

This is also the location where I’ll attach per-request dependencies (NHibernate, EF etc.), along the lines of the sketch below.
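
As a hypothetical illustration (my assumption, not code from the post; ISession and sessionFactory are stand-ins for whatever per-request resource you use), such a registration can sit right next to the ServiceLocatorProvider one:

CurrentNestedContainer.Configure(cfg =>
{
    cfg.For<ServiceLocatorProvider>().Use(provider);

    // Hypothetical per-request dependency: one NHibernate ISession per
    // nested container (i.e. per web request), opened from a shared factory.
    cfg.For<ISession>().Use(ctx => sessionFactory.OpenSession());
});

Finally, I can use a mediator in a controller: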

public class ValuesController : ApiController
{
    private readonly IMediator _mediator;

    public ValuesController(IMediator mediator)
    {
        _mediator = mediator;
    }

    // GET api/values
    public IEnumerable<string> Get()
    {
        var result = _mediator.Send(new Ping());

        return new string[] { result.Message };
    }
}

That’s pretty much it. How you need to configure the mediator in your application might be different, but the gist is to configure the ServiceLocatorProvider delegate to return the “thing that the framework uses for IServiceLocator”. What that is depends on your context, and unfortunately changes with every framework out there.

In my example above, I prefer to configure the IServiceLocator instance to be the same instance as the IDependencyScope instance, so that any handler instantiated comes from the same composition root/nested container as whatever instantiated my controller.

See, containers are easy, right?

(crickets)


Welcome Max Guernsey

Sustainable Test-Driven Development - Tue, 01/20/2015 - 00:56
Max has joined Net Objectives, as some of you may know, as a trainer, coach, and mentor. We've been friends with Max for a long while, and he has been a contributor to this blog and to the progress of our thinking in general. So we're adding him to the official authorship here, and when (if ever :)) we get this thing written, he will be co-author with Amir and me. I know this has been terribly

State of the Art

Hiccupps - James Thomas - Fri, 01/16/2015 - 08:17
A trend is better than a snapshot, right?

That's Joel Montvelisky, introducing the State of Testing Survey 2015.

I'm certainly in favour of data and I'd agree that a trend can be better than a snapshot. But if you want to know the state of some system right now for the investigation you're performing right now and you've no reason to think that right now is related to back then, then perhaps right now you'll take the snapshot, right?

Openness and openness to challenge was one of the things I liked most about the previous, inaugural, survey. In the discussion between Jerry Weinberg and Fiona Charles about the results (transcript here) Weinberg's opening remarks include:
We need to be careful on how we interpret this data [...] One way to look at the survey is that it’s giving information about what information we should be getting. I'm looking forward to seeing what was learned.
Image: https://flic.kr/p/oq5E3x

Why I'm Always Write

Hiccupps - James Thomas - Thu, 01/15/2015 - 23:18
When I need to think through an issue, I write. And when I do that I feel I'm having a dialogue with myself. I write. I challenge. I rewrite. I re-challenge. Within or across drafts. Dynamically or with reflection. At length or fleetingly. As a means to an end, or as an end in itself. It both clarifies and exposes the need for clarification. For me.

When I asked on Twitter I got a couple of useful references to similar things; I'd be very interested in any others.

Edit: I followed up on this post later.

Image: https://flic.kr/p/5UWSs9

Combating the lava-layer anti-pattern with rolling refactoring

Jimmy Bogard - Thu, 01/15/2015 - 17:37

Mike Hadlow blogged about the lava-layer anti-pattern, describing the nefarious issue, one I have ranted about in nearly every talk I do, of opinionated but lazy tech leads introducing new concepts into a system but never really seeing the idea through all the way to the end. Mike’s story was about differing opinions on the correct DAL tool to use, none of which ever actually went away:

[diagram from Mike’s post: lava layers of successive data-access technologies]

It’s not just with DALs that I see this occur. Another popular stratum is database naming conventions, starting from:

  • ORDERS
  • tblOrders
  • Orders
  • Order
  • t_Order

And on and on – none of which add any value, but it’s not a long-lived codebase without a little bike shedding, right?

That’s a pointless change, but I’ve seen others, especially in places where design is evolving rapidly. Places where the refactorings really do add value. I called the result long-tail design: we have a long tail of different versions of an idea or design in a system, and each successive version occurs less and less often.

Long-tail and lava-layer design destroy productivity in long-running projects. But how can we combat it?

Jimmy’s rule of 2: There can be at most two versions of a concept in an application

In practice, what this means is we don’t move on to the next iteration of a concept until we’ve completely refactored all existing instances. It starts like this:

[diagram: every slice of the functionality on design V1]

A set of functionality we don’t like all exists in one version of the design. We don’t like it, and want to make a change. We start by carving out a slice to test out a new version of the design:

[diagram: one slice carved out and moved to design V2]

We poke at our concept, get input, refine it in this one slice. When we think we’re on to something, we apply it to a couple more places:

[diagram: a few more slices moved to V2]

It’s at this point where we can start to make a decision: is our design better than the existing design? If not, we need to roll back our changes. Not leave it in, not comment it out, but roll it all the way back. We can always do our work in a branch to preserve our work, but we need to make a commitment one way or the other. If we do commit, our path forward is to refactor V1 out of existence:

[diagrams: step by step, the remaining V1 slices are refactored to V2 until V1 is gone]

We never start V3 of our concept until we’ve completely eradicated V1 – and that’s the law of 2. At most two versions of our design can be in our application at any one time.

We’re not discouraging refactoring or iterative/evolutionary design, but putting in parameters to discipline ourselves.

In practice, our successive designs become better than they could have been in our long-tail/lava-layer approach. The more examples we have of our idea, the stronger our case becomes that our idea is better. We wind up having a rolling refactoring result:

[animation: a rolling refactoring, V1 shrinking as V2 grows]

A rolling refactoring is the only way to have a truly evolutionary design; our original neanderthal needs to die out before moving on to the next iteration.

Why don’t we apply a rolling refactoring design? Lots of excuses, but ultimately, it requires courage and discipline, backed by tests. Doing this without tests isn’t courage – it’s recklessness and developer hubris.


The State Of Testing Survey 2015

The Social Tester - Thu, 01/15/2015 - 15:20

It’s great to see Joel Montvelisky and the PractiTest team running the State Of Testing survey again. “The survey seeks to identify the existing characteristics, practices and challenges facing the testing community in hopes to shed light and provoke...

Testing on the Toilet: Prefer Testing Public APIs Over Implementation-Detail Classes

Google Testing Blog - Wed, 01/14/2015 - 19:35
by Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.


Does this class need to have tests?
class UserInfoValidator {
  public void validate(UserInfo info) {
    if (info.getDateOfBirth().isInFuture()) { throw new ValidationException(); }
  }
}
Its method has some logic, so it may be a good idea to test it. But what if its only user looks like this?
public class UserInfoService {
  private UserInfoValidator validator;
  public void save(UserInfo info) {
    validator.validate(info); // Throw an exception if the value is invalid.
    writeToDatabase(info);
  }
}
The answer is: it probably doesn’t need tests, since all paths can be tested through UserInfoService. The key distinction is that the class is an implementation detail, not a public API.

A public API can be called by any number of users, who can pass in any possible combination of inputs to its methods. You want to make sure these are well-tested, which ensures users won’t see issues when they use the API. Examples of public APIs include classes that are used in a different part of a codebase (e.g., a server-side class that’s used by the client-side) and common utility classes that are used throughout a codebase.

An implementation-detail class exists only to support public APIs and is called by a very limited number of users (often only one). These classes can sometimes be tested indirectly by testing the public APIs that use them.

Testing implementation-detail classes is still useful in many cases, such as if the class is complex or if the tests would be difficult to write for the public API class. When you do test them, they often don’t need to be tested in as much depth as a public API, since some inputs may never be passed into their methods (in the above code sample, if UserInfoService ensured that UserInfo were never null, then it wouldn’t be useful to test what happens when null is passed as an argument to UserInfoValidator.validate, since it would never happen).

Implementation-detail classes can sometimes be thought of as private methods that happen to be in a separate class, since you typically don’t want to test private methods directly either. You should also try to restrict the visibility of implementation-detail classes, such as by making them package-private in Java.

Testing implementation-detail classes too often leads to a couple of problems:

- Code is harder to maintain since you need to update tests more often, such as when changing a method signature of an implementation-detail class or even when doing a refactoring. If testing is done only through public APIs, these changes wouldn’t affect the tests at all.

- If you test a behavior only through an implementation-detail class, you may get false confidence in your code, since the same code path may not work properly when exercised through the public API. You also have to be more careful when refactoring, since it can be harder to ensure that all the behavior of the public API will be preserved if not all paths are tested through the public API.

Taking Severity Seriously

DevelopSense Blog - Wed, 01/14/2015 - 12:10
There’s a flaw in the way most organizations classify the severity of a bug. Here’s an example from the Elementool Web site (as of 14 January, 2015); I’m sure you’ve seen something like it: Critical: The bug causes a failure of the complete software system, subsystem or a program within the system. High: The bug […]

Python Packaging and Testing with devpi and tox

Testing TV - Tue, 01/13/2015 - 20:01
This talk discusses good ways to organize packaging and testing for Python projects. It walks through a per-company and an open source scenario and explains how to best use the “devpi-server” and “tox” for making sure you are delivering good and well tested and documented packages. As time permits, we also discuss in-development features such […]

Generic variance in DI containers

Jimmy Bogard - Tue, 01/13/2015 - 03:02

DI containers, as complex as they might be, still provide quite a lot of value when it comes to defining and realizing the composition of your system. I use the variance features quite a bit, especially in my MediatR project, for composing a rich pipeline. As a side note, one of the design goals of MediatR is not to take any dependency on a 3rd-party DI container; I instead take a dependency on Common Service Locator, for which all major DI containers already have an implementation. As part of this exercise, I still wanted to provide examples for all major containers, and this led me to figure out which containers supported what.

I looked at the major containers out there:

  • Autofac
  • Ninject
  • Simple Injector
  • StructureMap
  • Unity
  • Windsor

And I tried to build examples of using MediatR with each. As part of this, I was able to see which containers supported which scenarios, and how difficult each was to achieve.

The scenario is this: I have an interface, IMediator, in which I can send a single request/response or a notification to multiple recipients:

public interface IMediator
{
    TResponse Send<TResponse>(IRequest<TResponse> request);

    Task<TResponse> SendAsync<TResponse>(IAsyncRequest<TResponse> request);

    void Publish<TNotification>(TNotification notification)
        where TNotification : INotification;

    Task PublishAsync<TNotification>(TNotification notification)
        where TNotification : IAsyncNotification;
}

I then created a base set of requests/responses/notifications:

public class Ping : IRequest<Pong>
{
    public string Message { get; set; }
}
public class Pong
{
    public string Message { get; set; }
}
public class PingAsync : IAsyncRequest<Pong>
{
    public string Message { get; set; }
}
public class Pinged : INotification { }
public class PingedAsync : IAsyncNotification { }

I was interested in looking at a few things with regard to container support for generics:

  • Setup for open generics (registering IRequestHandler<,> easily)
  • Setup for multiple registrations of open generics (two or more INotificationHandlers)
  • Setup for generic variance (registering handlers for base INotification/creating request pipelines)

My handlers are pretty straightforward; they just output to the console:

public class PingHandler : IRequestHandler<Ping, Pong> { /* Impl */ }
public class PingAsyncHandler : IAsyncRequestHandler<PingAsync, Pong> { /* Impl */ }

public class PingedHandler : INotificationHandler<Pinged> { /* Impl */ }
public class PingedAlsoHandler : INotificationHandler<Pinged> { /* Impl */ }
public class GenericHandler : INotificationHandler<INotification> { /* Impl */ }

public class PingedAsyncHandler : IAsyncNotificationHandler<PingedAsync> { /* Impl */ }
public class PingedAlsoAsyncHandler : IAsyncNotificationHandler<PingedAsync> { /* Impl */ }

I should see a total of seven messages output from the result of the run.
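
For reference, here is my reconstruction of a driver exercising all of these (an assumption about how the sample runs, not the post’s verbatim code); the comments count the expected messages:

public static async Task Run(IMediator mediator)
{
    mediator.Send(new Ping { Message = "Ping" });                 // 1: PingHandler
    await mediator.SendAsync(new PingAsync { Message = "Ping" }); // 2: PingAsyncHandler

    // With contravariance wired up, GenericHandler also receives Pinged.
    mediator.Publish(new Pinged());                 // 3-5: PingedHandler, PingedAlsoHandler, GenericHandler

    await mediator.PublishAsync(new PingedAsync()); // 6-7: PingedAsyncHandler, PingedAlsoAsyncHandler
}

Let’s see how the different containers stack up!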

Autofac

Autofac has been around for quite a bit, and has extensive support for generics and variance. The configuration for Autofac is:

var builder = new ContainerBuilder();
builder.RegisterSource(new ContravariantRegistrationSource());
builder.RegisterAssemblyTypes(typeof (IMediator).Assembly).AsImplementedInterfaces();
builder.RegisterAssemblyTypes(typeof (Ping).Assembly).AsImplementedInterfaces();

Autofac does require us to explicitly add a registration source for recognizing contravariant interfaces (covariant is a lot rarer, so I’m ignoring that for now). With minimal configuration, Autofac scored perfectly and output all the messages.

Open generics: yes, implicitly

Multiple open generics: yes, implicitly

Generic contravariance: yes, explicitly

Ninject

Ninject has also been around for quite a while, and also has extensive support for generics. The configuration for Ninject looks like:

var kernel = new StandardKernel();
kernel.Components.Add<IBindingResolver, ContravariantBindingResolver>();
kernel.Bind(scan => scan.FromAssemblyContaining<IMediator>()
    .SelectAllClasses()
    .BindDefaultInterface());
kernel.Bind(scan => scan.FromAssemblyContaining<Ping>()
    .SelectAllClasses()
    .BindAllInterfaces());
kernel.Bind<TextWriter>().ToConstant(Console.Out);

Ninject was able to display all the messages, and the configuration looks very similar to Autofac. However, that “ContravariantBindingResolver” is not built in to Ninject and is something you’ll have to spelunk Stack Overflow to figure out. It’s somewhat possible when you have one generic parameter, but for multiple it gets a lot harder. I won’t embed the gist as it’s quite ugly, but you can find the full resolver here.

Open generics: yes, implicitly

Multiple open generics: yes, implicitly

Generic contravariance: yes, with user-built extensions

Simple Injector

Simple Injector is a bit of an upstart from someone not related to NancyFx at all, yet with a very similar Twitter handle, and it focuses on the simple, straightforward scenarios. This is the first container that requires a bit more to hook up:

var container = new Container();
var assemblies = GetAssemblies().ToArray();
container.Register<IMediator, Mediator>();
container.RegisterManyForOpenGeneric(typeof(IRequestHandler<,>), assemblies);
container.RegisterManyForOpenGeneric(typeof(IAsyncRequestHandler<,>), assemblies);
container.RegisterManyForOpenGeneric(typeof(INotificationHandler<>), container.RegisterAll, assemblies);
container.RegisterManyForOpenGeneric(typeof(IAsyncNotificationHandler<>), container.RegisterAll, assemblies);

While multiple open generics is supported, contravariance is not. In fact, to hook up contravariance requires quite a few hoops to jump through to set it up. It’s documented, but I wouldn’t call it “out of the box” because you have to build your own wrapper around the handlers to manually figure out the handlers to call. UPDATE: as of 2.7, contravariance *is* supported out-of-the-box. Configuration is the same as it is above, the variance now “just works”.

Open generics: yes, explicitly

Multiple open generics: yes, explicitly

Generic contravariance: yes, implicitly (previously no, before 2.7)

StructureMap

This is the most established container in this list, and one I’ve used the most personally. StructureMap is a little bit different in that it applies conventions during scanning assemblies to determine how to wire requests for types up. Here’s the StructureMap configuration:

var container = new Container(cfg =>
{
    cfg.Scan(scanner =>
    {
        scanner.AssemblyContainingType<Ping>();
        scanner.AssemblyContainingType<IMediator>();
        scanner.WithDefaultConventions();
        scanner.AddAllTypesOf(typeof(IRequestHandler<,>));
        scanner.AddAllTypesOf(typeof(IAsyncRequestHandler<,>));
        scanner.AddAllTypesOf(typeof(INotificationHandler<>));
        scanner.AddAllTypesOf(typeof(IAsyncNotificationHandler<>));
    });
});

I do have to manually wire up the open generics in this case.

Open generics: yes, explicitly

Multiple open generics: yes, explicitly

Generic contravariance: yes, implicitly

Unity

And now for the most annoying container I had to deal with. Unity doesn’t like one type registered with two implementations, so you have to do extra work to even be able to run the application with multiple handlers for a message. My Unity configuration is:

container.RegisterTypes(AllClasses.FromAssemblies(typeof(Ping).Assembly),
   WithMappings.FromAllInterfaces,
   GetName,
   GetLifetimeManager);

/* later down */

static bool IsNotificationHandler(Type type)
{
    return type.GetInterfaces().Any(x => x.IsGenericType && (x.GetGenericTypeDefinition() == typeof(INotificationHandler<>) || x.GetGenericTypeDefinition() == typeof(IAsyncNotificationHandler<>)));
}

static LifetimeManager GetLifetimeManager(Type type)
{
    return IsNotificationHandler(type) ? new ContainerControlledLifetimeManager() : null;
}

static string GetName(Type type)
{
    return IsNotificationHandler(type) ? "HandlerFor" + type.Name : string.Empty;
}

Yikes. Unity handles the very simple case of open generics, but that’s about it.

Open generics: yes, implicitly

Multiple open generics: yes, with user-built extension

Generic contravariance: derp

Windsor

The last container in this completely unnecessarily long list is Windsor. Windsor was a bit funny, it required a lot more configuration than others, but it was configuration that was built in and very wordy. My Windsor configuration is:

var container = new WindsorContainer();
container.Register(Classes.FromAssemblyContaining<IMediator>().Pick().WithServiceAllInterfaces());
container.Register(Classes.FromAssemblyContaining<Ping>().Pick().WithServiceAllInterfaces());
container.Kernel.AddHandlersFilter(new ContravariantFilter());

Similar to Ninject, the simple scenarios are built-in, but the more complex need a bit of Stack Overflow spelunking. The “ContravariantFilter” is very similar to the Ninject implementation, with the same limitations as well.

Open generics: yes, implicitly

Multiple open generics: yes, implicitly

Generic contravariance: yes, with user-built extension

Final score

Going in, I thought the containers would be closer in ability for features like these, which are pretty popular these days. Instead, they’re miles apart. I originally was going to use this post to complain that there are too many DI containers in the .NET space, but honestly, the feature sets and underlying models are so completely different that it would take quite a bit of effort to consolidate and combine projects.

What is pretty clear from my experience here is that Unity as a choice is probably a mistake.


How to do Database Testing with CodedUI (C#)?

Testing tools Blog - Mayank Srivastava - Mon, 01/12/2015 - 12:20
Here I am providing sample code to start Database Testing with CodedUI (C#):

// Below code will log in and open the DB connection.
string connectionString = "Data Source=DBServerName;Initial Catalog=DBName;User ID=UserName;Password=DBPassword";
SqlConnection connection = new SqlConnection(connectionString);
connection.Open();

// Below code will help to execute the Select query.
String Selectquery = "Select FirstName from Emp where EmpID in (01)";
[…]
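
The excerpt cuts off there. A plausible continuation, assuming standard ADO.NET (my sketch, not the author’s elided code), would execute the query, read the rows, and clean up:

// Execute the Select query and print each FirstName value returned.
SqlCommand command = new SqlCommand(Selectquery, connection);
using (SqlDataReader reader = command.ExecuteReader())
{
    while (reader.Read())
    {
        Console.WriteLine(reader["FirstName"]);
    }
}

// Close the connection when done.
connection.Close();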

Special Offers

Hiccupps - James Thomas - Fri, 01/09/2015 - 10:58
Linguamatics hosted James Lyndsay at the Cambridge Tester Meetup last night.

His workshop started with some improv games based on the work of Keith Johnstone which, by exposing our awkwardness, showed us that we were conditioned to behave in certain ways, that we have patterns of operation. As testers we want to be free to think, investigate, explore, perform.

A second round of exercises had us giving and receiving imaginary presents to illustrate the notions of offers and blocking. Here, one person creates a context for the other (the offer) which can be accepted or rejected, but both parties must be aware of the ways in which they might constrain that context (the block).

For example, I might mime the shape of something to pass to my partner and then, as their hands reach for it, change the shape. This constitutes a block - I am not collaborating as fully as I might. Blocks come in many varieties; the receiver may block by not accepting the gift or refuting some aspect of the context.

We formed small groups assigned to apply the notion of an offer - with no suggestion about the ways in which we might do it - to testing a task management application. Here's just a few of the thoughts I noted down, pretty much raw out of my notes:
  • every interactive component of the application is an offer.
  • the user interface, user experience, terminology, documentation and all other aspects of the product are offers to make a judgement about the software, its quality, its value to the user, its function, its domain and so on.
  • offers may be implicit or explicit.
  • is there a difference between an offer that is recognised as such by the receiver and one which is not?
  • some offers are compound; a form has a submit button but also fields that can be filled in. The fields are individually offers, but the whole is also an offer.
  • some offers are conditional; a particular field in a form might only be available when other fields are populated.
  • it is frustrating when the relationships at play in a conditional offer are not clear. An offer that appears and is then removed for reasons the receiver doesn't understand is distracting and frustrating. The receiver feels let down.
  • when we saw some offer (say, a date field in a form), our first thought was often "how can we accept this offer in a way that violates its likely intent?" (say, a date in the past for the start of a task).
  • is the receiver blocking when they accept an offer in a way not intended by the giver?
  • an offer that doesn't obviously result in some change is confusing to the receiver; for example, pressing a button but seeing no obvious consequence.
  • the likely consequence of accepting some offers is clear, but in others we're taking a leap of faith. The error dialog that says "You tried to do some complex action, but there was a problem. Do you want to continue? Choose OK or Cancel" doesn't help us to understand the consequences of accepting the offer.
  • rejecting an offer is not a null action. It still has consequences.
  • accepting or rejecting offers can have unintended consequences. When multiple groups were testing the same application we were (probably) changing each others' data, resulting in some confusion (to my group, at least, until we had a hypothesis).
  • inconsistency of offers is confusing. Multiple different ways to report form submission failure; different icons for the same functionality; the same functionality under buttons with different icons; use of colour in some places for some data, but not others. The receiver doesn't know what to make of offers that are apparently similar to others in some respects - should they expect the same outcome or something different? This is a kind of block.
  • an offer that is taken up (say, a form is submitted) but then results in a block (say, a validation error) is unpleasant for the person who accepted the offer. It is possibly more unpleasant than an offer that is taken up only after all negotiation on the terms of the offer has been done (such as when fields are validated during input).
  • offers are always choices. If nothing else the receiver can accept or reject. But they are often more than binary, even in simple cases like an OK/Cancel dialog with two obvious buttons there may be a close button in the title bar, keyboard shortcuts for cancelling (often Escape), different ways to navigate the dialog (e.g. tab, shift-tab, using space or return to select a button, or using the mouse); the dialog might be modal or not and if not, the offer is deferrable.
  • offers can be thought of as nodes on a graph of the testing search space. And the reverse: any node on a graph of the search space is an offer, although not necessarily one made by the software, but perhaps made by the data or the tester, or some external context or constraint (such as time or project priorities).
  • deferring choices is a kind of blocking - is it important to defer consciously?
  • noticing, and accepting, offers is a way of breaking patterns of behaviour. Perhaps I always get to the admin page of some product by opening it, clicking on the Tools menu and selecting Admin. But the product offers me many other ways of getting there - I can create a browser bookmark for that function; I can customise the toolbar of the application; I can launch the application using an Admin-only account. Accepting the offers puts me in a different context, ready to see something different(ly).
  • There's a literature on human psychology around giving (also giving up) and receiving. How much of this could be relevant to human-computer interactions?
  • I like to give software the chance to demonstrate itself to me. Am I making it an offer?
  • what can I do to avoid being overwhelmed by the explosion of offers?
I've only recently linked improv and testing (and I'm quite late to that party) but just recasting my interaction with the software as a sequence of offers and blocks last night generated tons of ideas and a new tool to consider deploying on a very familiar problem.

That possibility of a different perspective, a new view, a cleaner vision is incredibly exciting, but until I've used the tool some more, built and broken something with it, uncovered some of its foibles and fortes and put some sweat into its handles, I won't know whether it's a microscope, telescope, a prism, a mirror, a window, rose-tinted spectacles or a blindfold.
Image: https://flic.kr/p/aGhYRT

linux.conf.au in Auckland

The Build Doctor - Fri, 01/09/2015 - 06:12
Happy New Year. I’m speaking about Graphs and Neo4j at linux.conf.au next Friday.  Don’t think I’m the star attraction though, I think that’s Linus.
