Jimmy Bogard

Strong opinions, weakly held

Clean Tests: Isolating the Database

Mon, 03/02/2015 - 19:35

Other posts in this series:

Isolating the database can be pretty difficult to do, but I’ve settled on a general approach that ensures my tests are built from a consistent starting point. I prefer a known starting point over something like rolled-back transactions, since a rolled-back transaction assumes the database was in a consistent state to begin with.

I’m going to use my tool Respawn to build that reliable starting point. In my last post, I walked through creating a common fixture that my tests use to build internal state. I’m going to extend that fixture to also include Respawn:

public class SlowTestFixture
{
    private static IContainer Root = IoC.BuildCompositionRoot();
    private static Checkpoint Checkpoint = new Checkpoint
    {
        TablesToIgnore = new[]
        {
            "sysdiagrams",
            "tblUser",
            "tblObjectType",
        },
        SchemasToExclude = new[]
        {
            "RoundhousE"
        }
    };

    public SlowTestFixture()
    {
        Container = Root.CreateChildContainer();
        Checkpoint.Reset("MyConnectionStringName");
    }

    public IContainer Container { get; }
}

Since my SlowTestFixture is used in both styles of organization (fixture per test class/test method), my database will either get reset before my test class is constructed, or before each test method. My tests start with a clean slate, and I never have to worry about my tests failing because of inconsistent state again. The one downside I have is that my tests can’t be run in parallel at this point, but that’s a small price to pay.
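
As a sketch of how this looks from a test’s perspective (ListInvoicesTests, ListInvoicesHandler, and ListInvoicesQuery are hypothetical names; the Save and Container members mirror the fixture from this series):

```csharp
public class ListInvoicesTests
{
    private readonly SlowTestFixture _fixture;

    // By the time this constructor runs, SlowTestFixture's constructor
    // has already called Checkpoint.Reset, so the database is empty.
    public ListInvoicesTests(SlowTestFixture fixture, Invoice invoice)
    {
        _fixture = fixture;
        _fixture.Save(invoice);
    }

    public void ShouldOnlySeeThisTestsData()
    {
        // Hypothetical handler resolved from this test's child container
        var handler = _fixture.Container.GetInstance<ListInvoicesHandler>();

        // Only data saved through this fixture exists at this point
        handler.Handle(new ListInvoicesQuery()).Count().ShouldBe(1);
    }
}
```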

That’s pretty much all there is – because I’ve created a common fixture class, it’s easy to add more behavior as necessary. In the next post, I’ll bring all these concepts together with a couple of complete examples.

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

Reliable database tests with Respawn

Thu, 02/19/2015 - 19:21

Creating reliable tests that exercise the database can be a tricky beast to tame. There are many different sub-par strategies for doing so, and most of the documented methods talk about resetting the database at teardown, either using rolled back transactions or table truncation.

I’m not a fan of either of these methods – for truly reliable tests, the fixture must have a known starting point at the start of the test, not rely on something cleaning up after itself. When a test fails, I want to be able to examine the data during or after the test run.

That’s why I created Respawn, a small tool to reset the database back to its clean beginning. Instead of using transaction rollbacks, database restores or table truncations, Respawn intelligently navigates the schema metadata to build out a static, correct order in which to clear out data from your test database, at fixture setup instead of teardown.
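
Conceptually – and this is a simplified sketch of the idea, not Respawn’s actual implementation – you can think of it as reading the foreign-key graph once and emitting DELETEs so that referencing tables are cleared before the tables they reference:

```csharp
// Simplified sketch of the idea, not Respawn's real code.
// referencedBy maps each table to the set of tables that hold
// foreign keys pointing at it.
private static IEnumerable<string> BuildDeleteOrder(
    IDictionary<string, ISet<string>> referencedBy)
{
    var remaining = new HashSet<string>(referencedBy.Keys);

    while (remaining.Count > 0)
    {
        // A table is safe to clear once every table referencing it
        // has already been cleared.
        var deletable = remaining
            .Where(t => !referencedBy[t].Overlaps(remaining))
            .ToList();

        if (deletable.Count == 0)
            throw new InvalidOperationException(
                "Cyclic foreign keys need special handling, omitted here.");

        foreach (var table in deletable)
        {
            remaining.Remove(table);
            yield return "DELETE FROM " + table;
        }
    }
}
```

The resulting ordered list is what makes the reset cheap to repeat: it’s computed once from schema metadata and replayed at every fixture setup.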

Respawn is available on NuGet, and can work with SQL Server or Postgres (or any ANSI-compatible database that supports INFORMATION_SCHEMA views correctly).

You create a checkpoint:

private static Checkpoint checkpoint = new Checkpoint
{
    TablesToIgnore = new[]
    {
        "sysdiagrams",
        "tblUser",
        "tblObjectType",
    },
    SchemasToExclude = new []
    {
        "RoundhousE"
    }
};

You can supply tables to ignore and schemas to exclude for tables you don’t want cleared out. In your test fixture setup, reset your checkpoint:

checkpoint.Reset("MyConnectionStringName");

Or if you’re using a database besides SQL Server, you can pass in an open DbConnection:

using (var conn = new NpgsqlConnection("ConnectionStringName"))
{
    conn.Open();

    var checkpoint = new Checkpoint {
        SchemasToInclude = new[]
        {
            "public"
        },
        DbAdapter = DbAdapter.Postgres
    };

    checkpoint.Reset(conn);
}

Because Respawn stores the correct SQL in the right order to clear your tables, you don’t need to maintain a list of tables to delete or recalculate the order on every checkpoint reset. And since table truncation won’t work on tables referenced by foreign key constraints, Respawn issues DELETEs instead – which, for the small data volumes in test databases, are faster than truncation anyway.
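
For a hypothetical parent/child pair, the cached script boils down to DELETEs in dependency order (table names here are illustrative, not generated by Respawn for any real schema):

```sql
-- Child rows first, then the parent rows they reference
DELETE FROM dbo.OrderLineItem;
DELETE FROM dbo.[Order];
```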

We’ve used this method at Headspring for the last six years or so, battle tested on a dozen projects we’ve put into production.

Stop worrying about unreliable database tests – respawn at the starting point instead!

Clean Tests: Isolating Internal State

Tue, 02/17/2015 - 19:46

Other posts in this series:

One of the more difficult problems with slow tests that touch shared resources is building a clean starting point. In order for tests to be reliable, the environment in which the test executes needs to be in a reliable, consistent starting state. In slow tests, in which I’m accessing out-of-process dependencies, I’m worried about two things:

  • External state is known and consistent
  • Internal state is known and consistent

In order to keep my sanity, I want to put the responsibility of building that known starting point into a Standard Fixture. This fixture is responsible for creating that starting point, and it’s this starting point that ensures the long-term maintainability of my system.

Consistent internal state

Since I’m using AutoFixture for the creation and configuration of my fixture, it will be AutoFixture I use to build out my Standard Fixture. My standard fixture will be a single class that my tests interact with. Because the name “Fixture” is a bit overused in many libraries, I have to name my class somewhat specifically, and it starts with building out an isolated sandbox for my internal state:

public class SlowTestFixture
{
    private static IContainer Root = IoC.BuildCompositionRoot();

    public SlowTestFixture()
    {
        Container = Root.CreateChildContainer();
    }

    public IContainer Container { get; }
}

I use a DI container as my composition root in my systems, and this combined with child containers allows me to ensure that I have a unique, isolated sandbox for running my tests. The root container is my blueprint for an execution context, and represents what I do in production. The child container’s configuration, whatever I might do to it, lives only for the context of this one test.
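
For example (a hypothetical stub scenario – INotifier and FakeNotifier aren’t from this series, and the container calls are StructureMap’s), a test can reconfigure its child container for just its own sandbox:

```csharp
// Hypothetical example: INotifier and FakeNotifier are made-up types.
public void ShouldNotifyApprover(SlowTestFixture fixture)
{
    var fakeNotifier = new FakeNotifier();

    // Reconfigure only this test's child container; the root container
    // blueprint, and therefore every other test, is unaffected.
    fixture.Container.Configure(cfg =>
        cfg.For<INotifier>().Use(fakeNotifier));

    var approver = fixture.Container.GetInstance<IInvoiceApprover>();

    // ...approve an invoice and assert against fakeNotifier...
}
```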

Throughout the rest of my tests, I can access that container to build components as needed. The next piece is to tell AutoFixture about this fixture, and to use it both when someone needs access to the context and when someone needs an instance of something.

In AutoFixture, this is done via fixture customizations:

public class SlowTestsCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        var contextFixture = new SlowTestFixture();

        fixture.Register(() => contextFixture);

        fixture.Customizations.Add(new ContainerBuilder(contextFixture.Container));
    }
}

Customizations alter the behavior of the AutoFixture fixture object, effectively letting me add new links in a chain of responsibility pattern. I want two behaviors added:

  • Access to the fixture
  • Building container-supplied instances

The first is simple: I can register individual instances with AutoFixture using the “Register” method. The second, since it depends on the type supplied, needs its own isolated customization:

public class ContainerBuilder : ISpecimenBuilder
{
    private readonly IContainer _container;

    public ContainerBuilder(IContainer container)
    {
        _container = container;
    }

    public object Create(object request, ISpecimenContext context)
    {
        var type = request as Type;

        if (type == null || type.IsPrimitive)
        {
            return new NoSpecimen(request);
        }

        var service = _container.TryGetInstance(type);

        return service ?? new NoSpecimen(request);
    }
}

AutoFixture calls each specimen builder, one at a time, and each specimen builder either builds out an instance or returns a null object, the “NoSpecimen” object.

Ultimately, the goal is for my tests to use a pre-built component, or to use the fixture directly, as necessary:

public InvoiceApprovalTests(Invoice invoice,
    SlowTestFixture fixture,
    IInvoiceApprover invoiceApprover)
{
    _invoice = invoice;

    invoiceApprover.Approve(invoice);
    fixture.Save(invoice);
}

The last part I need to fill in is to modify Fixie to use my customizations when building up test instances. This is in my Fixie convention where I had previously configured Fixie to use AutoFixture to instantiate my test classes:

private object CreateFromFixture(Type type)
{
    var fixture = new Fixture();

    new SlowTestsCustomization().Customize(fixture);

    return new SpecimenContext(fixture).Resolve(type);
}

My tests now have an isolated sandbox for internal state, as each child container instance is isolated per fixture. If I need to inject stubs/fakes, I don’t affect any other tests because of how I’ve built the boundaries of my test in Fixie.

In the next post, I’ll look at isolating external state (the database).

Cross-Platform AutoMapper (again)

Fri, 02/13/2015 - 17:04

Building cross-platform support for AutoMapper has taken some…interesting twists and turns. First, I supported AutoMapper in Silverlight 3.0 five (!) years ago. I did this with compiler directives.

Next, I got tired of compiler directives, tired of Silverlight, and went back to only supporting .NET 4.

Then in AutoMapper 3.0, I supported multiple platforms via portable class libraries. When that first came out, I started getting reports of exceptions that I didn’t think should ever show up – and there was a problem. MSBuild doesn’t want to copy referenced assemblies that aren’t actually being used, so I’d get issues where you’d reference platform-specific assemblies in a “Core” library, but your “UI” project that referenced “Core” didn’t pull in the platform-specific assembly.

So began a journey to force the platform-specific assembly to get copied over, no matter what. But even that was an issue – I went through several different iterations of this before it finally, reliably worked.
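
One widely used trick for forcing the copy (AutoMapper’s eventual approach relied on NuGet install scripts instead; the type name below is made up for this sketch) is a dummy code-level reference, which makes MSBuild consider the assembly used:

```csharp
// Hypothetical illustration of the "force the copy" trick:
// referencing any type from the platform-specific assembly in code
// convinces MSBuild the assembly is actually used, so it gets
// copied to the consuming project's output.
public static class PlatformAssemblyLoader
{
    public static void EnsureLoaded()
    {
        // PlatformSpecificMapperRegistry is a made-up type name.
        var forceCopy = typeof(PlatformSpecificMapperRegistry);
        GC.KeepAlive(forceCopy);
    }
}
```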

Unless you’re on Xamarin, which doesn’t support using this method (of PowerShell scripts) to run install scripts on Mac.

Then I had a GitHub issue from Microsoft folks asking for CoreCLR support. And with vNext projects, the project itself describes the platforms to support, including all files in the directory. Meaning I wouldn’t be picking and choosing which files should be in the assembly or not. So, we’re back to square one.

A new path

With CoreCLR and the vNext project style that is folder-based rather than scattershot, pick-and-choose file based, I could only get CoreCLR support working by using conditional compiler directives. These were already in AutoMapper in a few places, mainly in files shared between the platform-specific assemblies. I’ve always had to do a little bit of this:

image

Not absolutely horrible, but now with CoreCLR, I need to do this everywhere. To keep my sanity, I needed to include every file in every project. Ideally, I could just have the one portable library, but that won’t work until CoreCLR is fully released. With CoreCLR, I wanted to just have one single project that built multiple platforms. vNext class libraries can do this out-of-the-box:

image

However, I couldn’t move all platforms/frameworks since they’re not all supported in vNext class projects (yet). I still had to have individual projects.

Back when I supported Silverlight 3 for the first time, I abandoned support because it was a huge pain managing multiple projects and identical files. With vNext project files, which just include all files in a folder without any explicit adding, I could have a great experience. I needed that in my other projects. The final project structure looked like this:

image

In the root PCL project, I’ll do all of the work. Refactoring, coding, anything. All of the platform-specific projects will just include all the source files to compile. To get them to do this, however, meant I needed to modify the project files to include files via wildcard:

image
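
In MSBuild terms, the change amounts to something like this (a sketch with illustrative folder and path names, not AutoMapper’s actual project file):

```xml
<ItemGroup>
  <!-- Include every .cs file in each folder via wildcard,
       rather than listing files one by one -->
  <Compile Include="..\AutoMapper\*.cs" />
  <Compile Include="..\AutoMapper\Internal\*.cs" />
  <Compile Include="..\AutoMapper\QueryableExtensions\*.cs" />
</ItemGroup>
```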

My projects automatically include *all* files within folders (I needed to explicitly specify individual folders for whatever reason). With this configuration, my projects now include all files automatically:

image

I just have to be very careful that when I’m adding files, I do so only in the core PCL project, where files ARE added explicitly. There seems to be strange behavior where, if I add a file manually to a project with wildcard includes, all of the files get explicitly added. Not what I’d like.

Ultimately, this greatly simplified the deployment story as well. Each dependency only includes the one, single assembly:

image

At the end of the day, this deployment strategy is best for the users. I don’t have to worry about platform-specific extension libraries, GitHub issues about builds breaking in certain environments or the application crashing in cloud platforms.

Looking back, I’m not unhappy with the middle step I took of platform-specific extension assemblies. It forced me to modularize, whereas with pure compiler directives I could have accepted a spaghetti mess of code.

Eventually, I’d like to collapse all into one project, but until it’s supported, this seems to work for everyone involved.

Clean Tests: Building Test Types

Thu, 02/05/2015 - 23:38

Posts in this series:

In the primer, I described two types of tests I generally run into in my systems:

  • Arrange/act/assert fully encapsulated in a single method
  • Arrange/act in one place, assertions in each method

Effectively, I build tests in a procedural mode or in a context/specification mode. In xUnit Test Patterns language, I’m building execution plans around:

  • Testcase Class per Class
  • Testcase Class per Fixture

There’s another pattern listed there, “Testcase Class per Feature”, but I’ve found it to be a version of one of these two – AAA in a single method, or split out.

Most test frameworks have some extension point for you to be able to accomplish both of these patterns. Unfortunately, none of them are very flexible. In my tests, I want to have complete control over lifecycle, as my tests become more complicated to set up. My ideal would be to author tests as I do everything else:

  • Method arguments for variation in a single isolated test
  • Constructor arguments for delivering fixtures for multiple tests

Since I’m using Fixie, I can teach Fixie how to recognize these two types of tests and build individual test plans for both kinds. We could be silly and cheat with things like attributes, but I think we can be smarter, right? Looking at our two test types, we have two kinds of test classes:

  • No-arg constructor, methods have arguments for context/fixtures
  • Constructor with arguments, methods have no arguments (shared fixture)

With Fixie, I can easily distinguish between the two kinds of tests. I could do other things, like key off of namespaces (put all fast tests in one folder, slow tests in another) or separate by assemblies; it’s all up to me.

But what should supply my fixtures? With most other test frameworks, the fixtures need to be plain – a class with a no-arg constructor or similar. I don’t want that. I want to use a library in which I can control and build out my fixtures in a deterministic, flexible manner.

Enter AutoFixture!

I’ll teach Fixie how to run my tests, and I’ll teach AutoFixture how to build out those constructor arguments. AutoFixture is my “Arrange”, my code is the “Act”, and for assertions, I’ll use Shouldly (I’m less particular about this one; anything should-based is enough).

First, let’s look at the simple kinds of tests – ones where the test is completely encapsulated in a single method.

Testcase Class per Class

For Testcase Class per Class, my Fixie convention is:

public class TestcaseClassPerClassConvention : Convention
{
    public TestcaseClassPerClassConvention()
    {
        Classes
            .NameEndsWith("Tests")
            .Where(t => 
                t.GetConstructors()
                .All(ci => ci.GetParameters().Length == 0)
            );

        Methods.Where(mi => mi.IsPublic && mi.IsVoid());

        Parameters.Add(FillFromFixture);
    }

    private IEnumerable<object[]> FillFromFixture(MethodInfo method)
    {
        var fixture = new Fixture();

        yield return GetParameterData(method.GetParameters(), fixture);
    }

    private object[] GetParameterData(ParameterInfo[] parameters, Fixture fixture)
    {
        return parameters
            .Select(p => new SpecimenContext(fixture).Resolve(p.ParameterType))
            .ToArray();
    }
}

First, I need to tell Fixie what to look for in terms of test classes. I could have gone a lot of routes here, like existing test frameworks do: things with a class attribute, things with methods that have an attribute, a base class, or a namespace. To keep things simple, I look for classes whose names end with “Tests”. Next, because I want to target a workflow where AAA is in a single method, I make sure the class has only no-arg constructors.

For test methods, it’s easy – I just want public void methods. No attributes.

Finally, I want to fill method parameters from AutoFixture, so I tell Fixie to add parameters, resolving each parameter value from AutoFixture one at a time.

For now, I’ll leave the AutoFixture configuration alone, but we’ll soon be layering on more behaviors as we go.

With this in place, my test becomes:

public class CalculatorTests
{
    public void ShouldAdd(Calculator calculator)
    {
        calculator.Add(2, 3).ShouldBe(5);
    }

    public void ShouldSubtract(Calculator calculator)
    {
        calculator.Subtract(5, 3).ShouldBe(2);
    }
}

So far so good! Now let’s look at our Testcase Class per Fixture example.

Testcase Class per Fixture

When we want a single arrange/act but multiple assertions, our test lifecycle changes. We no longer want to re-run the Arrange/Act for every assertion; we want it run once, with each Assert working off the results of that single Act. That means the test class should be instantiated and run only once before the asserts happen. This is different from parameterized test methods, where we want the fixture recreated for every test.

Our Fixie configuration changes slightly:

public class TestcaseClassPerFixtureConvention : Convention
{
    public TestcaseClassPerFixtureConvention()
    {
        Classes
            .NameEndsWith("Tests")
            .Where(t => 
                t.GetConstructors().Count() == 1
                && t.GetConstructors().Count(ci => ci.GetParameters().Length > 0) == 1
            );

        Methods.Where(mi => mi.IsPublic && mi.IsVoid());

        ClassExecution
            .CreateInstancePerClass()
            .UsingFactory(CreateFromFixture);
    }

    private object CreateFromFixture(Type type)
    {
        var fixture = new Fixture();

        return new SpecimenContext(fixture).Resolve(type);
    }
}

With Fixie, I can create as many configurations as I like for different kinds of tests. Fixie layers them on each other, and I can customize styles appropriately. If I’m migrating from an existing testing platform, I could even configure Fixie to run the existing attribute-based tests!

In the configuration above, I’m looking for test classes ending with “Tests”, but also having a single constructor that has arguments. I don’t know what to do with classes with multiple constructors, so I’ll just ignore those for now.

The test methods I’m looking for are the same – except now I’ll not configure any method parameters. It would be weird to combine constructor arguments with method parameters for this style of test, so I’m ignoring that for now.

Finally, I configure test execution to create a single instance per class, using AutoFixture as my test case factory. This is the piece that starts to separate Fixie from other frameworks – you can completely customize how you want your tests to run and execute. Opinionated frameworks are great – but if I disagree, I’m left to migrate tests. Not a fun proposition.

A test that uses this convention becomes:

public class InvoiceApprovalTests
{
    private readonly Invoice _invoice;

    public InvoiceApprovalTests(Invoice invoice)
    {
        _invoice = invoice;

        _invoice.Approve();
    }

    public void ShouldMarkInvoiceApproved()
    {
        _invoice.IsApproved.ShouldBe(true);
    }

    public void ShouldMarkInvoiceLocked()
    {
        _invoice.IsLocked.ShouldBe(true);
    }
}

The constructor is invoked by AutoFixture, filling in the parameters as needed. The Act, inside the constructor, is executed once. Finally, I make individual assertions on the result of the Act.

With this style, I can build up a context and incrementally add behavior via assertions. This is a fantastic approach for lightweight BDD, since I’m focusing on behaviors and adding them one at a time.

Next up, we’ll look at going one step further and integrating the database into our tests and using Fixie to wrap interesting behaviors around them.

AutoMapper support for ASP.NET 5.0 and ASP.NET Core 5.0

Mon, 02/02/2015 - 22:45

In the vein of “supporting all the frameworks”, I’ve extended AutoMapper to support ASP.NET 5.0 and CoreCLR (aspnetcore50). For those that are counting, I’m up to 11-12 different platforms supported, depending on how you tally:

  • aspnet50
  • aspnetcore50
  • MonoAndroid
  • MonoTouch
  • net40
  • portable-windows8+net40+wp8+sl5+MonoAndroid+MonoTouch
  • portable-windows8+net40+wp8+wpa81+sl5+MonoAndroid+MonoTouch
  • sl5
  • windows81
  • wp8
  • wpa81
  • Xamarin.iOS10

This one was a bit difficult to push out: I wound up creating two separate solutions, compiling both separately, and then creating a single NuGet package from the output of both. The aspnet50/aspnetcore50 versions are only a single assembly and use compiler directives for different platforms, while the other packages use platform-specific assemblies for extensions.

I did try to create one multi-target project using the new vNext project structure, but I failed miserably in converting the existing projects over. My goal for the 4.0 release is to have each platform be a single assembly, with no more platform-specific extensions, but it will take a bit more work to get there.

This support is included in packages “4.0.0-ci1026” and later. Enjoy!

AutoMapper support for Xamarin Unified (64 bit)

Fri, 01/30/2015 - 16:00

I pushed out a prerelease package of AutoMapper for Xamarin Unified, including 64-bit support for iOS.

http://www.nuget.org/packages/AutoMapper/

If you’ve had issues with Xamarin on 64-bit iOS, removing and re-adding the AutoMapper NuGet package reference should do the trick.

And yes, I verified this on a 64-bit device, not the simulator that is full of false hope and broken dreams.

Enjoy!

Clean Tests: A Primer

Thu, 01/29/2015 - 16:25

Posts in this series:

Over the course of my career, I’ve had the opportunity to work with a number of long-lived codebases – ones I’ve been a part of since commit one that continued on for six or seven years. Over that time, I’ve seen my opinions on writing tests change. They’ve gone from mid-2000s mock-heavy TDD, to story-driven BDD (I even wrote an ill-advised framework, NBehave), to context/spec BDD. I’ve looked at more exotic testing frameworks, such as MSpec and NSpec.

One advantage of working with codebases for many years is that certain truths arise that you normally wouldn’t catch if you only worked with a codebase for a few months. And one of the biggest of those truths is that simple beats clever. Looking at my tests, especially in long-lived codebases, the ability to understand the behavior in a test quickly and easily is their most important aspect.

Unfortunately, this has meant that for most of the projects I’ve worked with, I’ve had to fight against testing frameworks more than work with them. Convoluted test hierarchies, insufficient extensibility, breaking changes and pipelines are some of the problems I’ve had to deal with over the years.

That is, until an enterprising coworker Patrick Lioi started authoring a testing framework that (inadvertently) addressed all of my concerns and frustrations with testing frameworks.

In short, I wanted a testing framework that:

  • Was low, low ceremony
  • Allowed me to work with different styles of tests
  • Favored composition over inheritance
  • Actually looked like code I was writing in production
  • Allowed me to control lifecycle, soup to nuts

Testing frameworks are opinionated, but normally not in a good way. I wanted to work with a testing framework whose opinions were that it should be up to you to decide what good tests are. Because what I’ve found is that testing frameworks don’t keep up with my opinions, nor are they flexible in the vectors in which my opinions change.

That’s why for every project I’ve been on in the last 18 months or so, I’ve used Fixie as my test framework of choice. I want tests as clean as this:

using Should;

public class CalculatorTests
{
    public void ShouldAdd()
    {
        var calculator = new Calculator();
        calculator.Add(2, 3).ShouldEqual(5);
    }

    public void ShouldSubtract()
    {
        var calculator = new Calculator();
        calculator.Subtract(5, 3).ShouldEqual(2);
    }
}

I don’t want gimmicks, I don’t want clever, I want code that actually matches what I do. I don’t want inheritance, I don’t want restrictions on fixtures, I want to code my test how I code everything else. I want to build different rules based on different test organization patterns:

public class ApproveInvoiceTests {
    private Invoice _invoice;
    private CommandResult _result;
    
    public ApproveInvoiceTests(TestContext context) {
        var invoice = new Invoice("John Doe", 30m);
        
        context.Save(invoice);
        
        var message = new ApproveInvoice(invoice.Id);
        
        _result = context.Send(message);
        
        _invoice = context.Reload(invoice);
    }
    
    public void ShouldApproveInvoice() {
        _invoice.Status.ShouldEqual(InvoiceStatus.Approved);
    }
    
    public void ShouldRaiseApprovedEvent() {
        _result.Events.OfType<InvoiceApproved>().Count().ShouldEqual(1);
    }
}

Fixie gives me this; no other framework completely matches its flexibility. Fixie’s philosophy is that assertions shouldn’t be a part of your test framework. Executing tests is what a test framework should provide out of the box, but test discovery, pipelines and customization should be completely up to you.

In the next few posts, I’ll detail how I like to use Fixie to build clean tests, where I’ve stopped fighting the framework and I take control of my tests.

Integrating MediatR with Web API

Tue, 01/20/2015 - 19:25

One of the design goals I had in mind with MediatR was to limit the 3rd party dependencies (and work) needed to integrate MediatR. To do so, I only take a dependency on CommonServiceLocator. In MediatR, I need to resolve instances of request/notification handlers. Rather than build my own factory class that others would need to implement, I lean on CSL to define this interface:

public interface IServiceLocator : IServiceProvider
{
    object GetInstance(Type serviceType);
    object GetInstance(Type serviceType, string key);
    IEnumerable<object> GetAllInstances(Type serviceType);
    TService GetInstance<TService>();
    TService GetInstance<TService>(string key);
    IEnumerable<TService> GetAllInstances<TService>();
}

But that wasn’t quite enough. I also wanted to support child/nested containers, which meant I didn’t want a single instance of the IServiceLocator. Typically, when you want a component’s lifetime decided by a consumer, you depend on Func<Foo>. It turns out though that CSL already defines a delegate to provide a service locator, aptly named ServiceLocatorProvider:

public delegate IServiceLocator ServiceLocatorProvider();

In resolving handlers, I execute the delegate to get an IServiceLocator instance and off we go. I much prefer this approach to defining my own yet-another-factory-interface for people to implement. Just not worth it. As a consumer, you will need to supply this delegate to the mediator.

I’ll show an example using StructureMap. The first thing I do is add a NuGet dependency to the Web API IoC shim for StructureMap:

Install-Package StructureMap.WebApi2

This will also bring in the CommonServiceLocator dependency and some files to shim with Web API:

image

I have the basic building blocks for what I need in order to have a Web API project using StructureMap. The next piece is to configure the DefaultRegistry to include handlers in scanning:

public DefaultRegistry() {
    Scan(
        scan => {
            scan.TheCallingAssembly();
            scan.AssemblyContainingType<PingHandler>();
            scan.WithDefaultConventions();
            scan.With(new ControllerConvention());
            scan.AddAllTypesOf(typeof(IRequestHandler<,>));
            scan.AddAllTypesOf(typeof(IAsyncRequestHandler<,>));
            scan.AddAllTypesOf(typeof(INotificationHandler<>));
            scan.AddAllTypesOf(typeof(IAsyncNotificationHandler<>));
        });
    For<IMediator>().Use<Mediator>();
}

This is pretty much the same code you’d find in any of the samples in the MediatR project. The final piece is to hook up the dependency resolver delegate, ServiceLocatorProvider. Since most/all containers have implementations of the IServiceLocator, it’s really about finding the place where the underlying code creates one of these IServiceLocator implementations and supplies it to the infrastructure. In my case, there’s the Web API IDependencyResolver implementation:

public IDependencyScope BeginScope()
{
    IContainer child = this.Container.GetNestedContainer();
    return new StructureMapWebApiDependencyResolver(child);
}

I modify this to use the current nested container and attach the resolver to this:

public IDependencyScope BeginScope()
{
    var resolver = new StructureMapWebApiDependencyResolver(CurrentNestedContainer);

    ServiceLocatorProvider provider = () => resolver;

    CurrentNestedContainer.Configure(cfg => cfg.For<ServiceLocatorProvider>().Use(provider));
    
    return resolver;
}

This is also the location where I’ll attach per-request dependencies (NHibernate, EF etc.). Finally, I can use a mediator in a controller:

public class ValuesController : ApiController
{
    private readonly IMediator _mediator;

    public ValuesController(IMediator mediator)
    {
        _mediator = mediator;
    }

    // GET api/values
    public IEnumerable<string> Get()
    {
        var result = _mediator.Send(new Ping());

        return new string[] { result.Message };
    }
}

That’s pretty much it. How you configure the mediator in your application might be different, but the gist is to configure the ServiceLocatorProvider delegate dependency to return the “thing that the framework uses for IServiceLocator”. What that is depends on your context, and unfortunately it changes with every framework out there.

In my example above, I configure the IServiceLocator instance to be the same instance as the IDependencyScope instance, so that any handler is instantiated from the same composition root/nested container as my controller.
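To make that concrete, here’s a hedged sketch of how a handler lookup can flow through the delegate. GetInstance comes from Common Service Locator’s IServiceLocator; the _serviceLocatorProvider field name is illustrative, and MediatR’s internal resolution path may differ in detail:

```csharp
// Resolution goes through the registered delegate, not a concrete container:
var locator = _serviceLocatorProvider();
var handler = locator.GetInstance<IRequestHandler<Ping, Pong>>();
var pong = handler.Handle(new Ping { Message = "Hello" });
```

Because the delegate returns the per-request resolver, the handler comes out of the same nested container as the controller.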

See, containers are easy, right?

(crickets)

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

Combating the lava-layer anti-pattern with rolling refactoring

Thu, 01/15/2015 - 17:37

Mike Hadlow blogged about the lava-layer anti-pattern, describing an issue I have ranted about in nearly every talk I do: the nefarious habit of opinionated but lazy tech leads introducing new concepts into a system but never really seeing the idea through all the way to the end. Mike’s story was about differing opinions on the correct DAL tool to use, where none of the old tools ever actually went away:

[Image: Mike Hadlow’s lava-layer strata diagram]

It’s not just DALs that I see this occur. Another popular strata I see are database naming conventions, starting from:

  • ORDERS
  • tblOrders
  • Orders
  • Order
  • t_Order

And on and on – none of which add any value, but it’s not a long-lived codebase without a little bike shedding, right?

That’s a pointless change, but I’ve seen others, especially in places where design is evolving rapidly. Places where the refactorings really do add value. I called the result long-tail design, where we have a long tail of different versions of an idea or design in a system, and each successive version occurs less and less frequently:

Long-tail and lava-layer design destroy productivity in long-running projects. But how can we combat it?

Jimmy’s rule of 2: There can be at most two versions of a concept in an application

In practice, what this means is we don’t move on to the next iteration of a concept until we’ve completely refactored all existing instances. It starts like this:

[Image: a system built entirely on design V1]

A set of functionality we don’t like all exists in one version of the design. We don’t like it, and want to make a change. We start by carving out a slice to test out a new version of the design:

[Image: one slice carved out and rebuilt as V2]

We poke at our concept, get input, refine it in this one slice. When we think we’re on to something, we apply it to a couple more places:

[Image: V2 applied to a few more slices]

It’s at this point where we can start to make a decision: is our design better than the existing design? If not, we need to roll back our changes. Not leave it in, not comment it out, but roll it all the way back. We can always do our work in a branch to preserve our work, but we need to make a commitment one way or the other. If we do commit, our path forward is to refactor V1 out of existence:

[Images: V2 progressively replacing the remaining V1 slices until V1 is gone]

We never start V3 of our concept until we’ve completely eradicated V1 – and that’s the rule of 2. At most two versions of our design can be in our application at any one time.

We’re not discouraging refactoring or iterative/evolutionary design, but putting in parameters to discipline ourselves.

In practice, our successive designs become better than they could have been in our long-tail/lava-layer approach. The more examples we have of our idea, the stronger our case becomes that our idea is better. We wind up having a rolling refactoring result:

[Animation: the rolling refactoring in action]

A rolling refactoring is the only way to have a truly evolutionary design; our original neanderthal needs to die out before moving on to the next iteration.

Why don’t we apply a rolling refactoring design? Lots of excuses, but ultimately, it requires courage and discipline, backed by tests. Doing this without tests isn’t courage – it’s reckless and developer hubris.


Categories: Blogs

Generic variance in DI containers

Tue, 01/13/2015 - 03:02

DI containers, as complex as they might be, still provide quite a lot of value when it comes to defining and realizing the composition of your system. I use the variance features quite a bit, especially in my MediatR project and in composing a rich pipeline. As a side note, one of the design goals of MediatR is to not take any dependency on a 3rd-party DI container. I instead take a dependency on Common Service Locator, for which all major DI containers already have implementations. As part of this exercise, I still wanted to provide examples for all the major containers, and this led me to figure out which containers supported what.

I looked at the major containers out there:

  • Autofac
  • Ninject
  • Simple Injector
  • StructureMap
  • Unity
  • Windsor

And tried to build examples of using MediatR. As part of this, I was able to see what containers supported which scenarios, and how difficult it was to achieve this.

The scenario is this: I have an interface, IMediator, in which I can send a single request/response or a notification to multiple recipients:

public interface IMediator
{
    TResponse Send<TResponse>(IRequest<TResponse> request);

    Task<TResponse> SendAsync<TResponse>(IAsyncRequest<TResponse> request);

    void Publish<TNotification>(TNotification notification)
        where TNotification : INotification;

    Task PublishAsync<TNotification>(TNotification notification)
        where TNotification : IAsyncNotification;
}

I then created a base set of requests/responses/notifications:

public class Ping : IRequest<Pong>
{
    public string Message { get; set; }
}
public class Pong
{
    public string Message { get; set; }
}
public class PingAsync : IAsyncRequest<Pong>
{
    public string Message { get; set; }
}
public class Pinged : INotification { }
public class PingedAsync : IAsyncNotification { }

I was interested in looking at a few things with regards to container support for generics:

  • Setup for open generics (registering IRequestHandler<,> easily)
  • Setup for multiple registrations of open generics (two or more INotificationHandlers)
  • Setup for generic variance (registering handlers for base INotification/creating request pipelines)
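The third bullet is the subtle one. MediatR declares INotificationHandler&lt;in TNotification&gt; with the in (contravariant) modifier, which means at the language level a handler of the base INotification already satisfies a handler of any derived notification:

```csharp
// GenericHandler implements INotificationHandler<INotification>; because
// TNotification is contravariant, the same instance can be used where an
// INotificationHandler<Pinged> is expected.
INotificationHandler<Pinged> handler = new GenericHandler();
handler.Handle(new Pinged());
```

The question for each container is whether its resolution pipeline honors that same assignability when collecting handlers for Publish.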

My handlers are pretty straightforward, they just output to console:

public class PingHandler : IRequestHandler<Ping, Pong> { /* Impl */ }
public class PingAsyncHandler : IAsyncRequestHandler<PingAsync, Pong> { /* Impl */ }

public class PingedHandler : INotificationHandler<Pinged> { /* Impl */ }
public class PingedAlsoHandler : INotificationHandler<Pinged> { /* Impl */ }
public class GenericHandler : INotificationHandler<INotification> { /* Impl */ }

public class PingedAsyncHandler : IAsyncNotificationHandler<PingedAsync> { /* Impl */ }
public class PingedAlsoAsyncHandler : IAsyncNotificationHandler<PingedAsync> { /* Impl */ }

I should see a total of seven messages output over the course of a run. Let’s see how the different containers stack up!

Autofac

Autofac has been around for quite a bit, and has extensive support for generics and variance. The configuration for Autofac is:

var builder = new ContainerBuilder();
builder.RegisterSource(new ContravariantRegistrationSource());
builder.RegisterAssemblyTypes(typeof (IMediator).Assembly).AsImplementedInterfaces();
builder.RegisterAssemblyTypes(typeof (Ping).Assembly).AsImplementedInterfaces();

Autofac does require us to explicitly add a registration source for recognizing contravariant interfaces (covariant is a lot rarer, so I’m ignoring that for now). With minimal configuration, Autofac scored perfectly and output all the messages.
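A quick usage sketch to confirm the wiring, using standard Autofac calls (the exact set of handlers that fire follows from the registrations above):

```csharp
var container = builder.Build();
var mediator = container.Resolve<IMediator>();

// Fans out to PingedHandler, PingedAlsoHandler and, via the contravariant
// registration source, GenericHandler.
mediator.Publish(new Pinged());
```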

Open generics: yes, implicitly

Multiple open generics: yes, implicitly

Generic contravariance: yes, explicitly

Ninject

Ninject has also been around for quite a while, and also has extensive support for generics. The configuration for Ninject looks like:

var kernel = new StandardKernel();
kernel.Components.Add<IBindingResolver, ContravariantBindingResolver>();
kernel.Bind(scan => scan.FromAssemblyContaining<IMediator>()
    .SelectAllClasses()
    .BindDefaultInterface());
kernel.Bind(scan => scan.FromAssemblyContaining<Ping>()
    .SelectAllClasses()
    .BindAllInterfaces());
kernel.Bind<TextWriter>().ToConstant(Console.Out);

Ninject was able to display all the messages, and the configuration looks very similar to Autofac. However, that “ContravariantBindingResolver” is not built into Ninject; it’s something you’ll have to spelunk Stack Overflow to figure out. It’s manageable when you have one generic parameter, but with multiple parameters it gets a lot harder. I won’t embed the gist as it’s quite ugly, but you can find the full resolver here.

Open generics: yes, implicitly

Multiple open generics: yes, implicitly

Generic contravariance: yes, with user-built extensions

Simple Injector

Simple Injector is a bit of an upstart from someone not related to NancyFx at all, yet with a very similar Twitter handle, and it focuses on the simple, straightforward scenarios. This is the first container that requires a bit more work to hook up:

var container = new Container();
var assemblies = GetAssemblies().ToArray();
container.Register<IMediator, Mediator>();
container.RegisterManyForOpenGeneric(typeof(IRequestHandler<,>), assemblies);
container.RegisterManyForOpenGeneric(typeof(IAsyncRequestHandler<,>), assemblies);
container.RegisterManyForOpenGeneric(typeof(INotificationHandler<>), container.RegisterAll, assemblies);
container.RegisterManyForOpenGeneric(typeof(IAsyncNotificationHandler<>), container.RegisterAll, assemblies);

While multiple open generics are supported, contravariance is not. In fact, hooking up contravariance requires jumping through quite a few hoops. It’s documented, but I wouldn’t call it “out of the box” because you have to build your own wrapper around the handlers to manually figure out which handlers to call. UPDATE: as of 2.7, contravariance *is* supported out-of-the-box. Configuration is the same as above; the variance now “just works”.
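With 2.7, the check is just resolving and publishing. A hedged sketch using Simple Injector’s standard resolution API:

```csharp
container.Verify(); // optional: fail fast on misconfiguration
var mediator = container.GetInstance<IMediator>();

// With built-in variance, GenericHandler fires alongside the two
// Pinged-specific handlers.
mediator.Publish(new Pinged());
```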

Open generics: yes, explicitly

Multiple open generics: yes, explicitly

Generic contravariance: yes, implicitly (as of 2.7; previously no)

StructureMap

This is the most established container in this list, and the one I’ve used the most personally. StructureMap is a little bit different in that it applies conventions while scanning assemblies to determine how to wire up requests for types. Here’s the StructureMap configuration:

var container = new Container(cfg =>
{
    cfg.Scan(scanner =>
    {
        scanner.AssemblyContainingType<Ping>();
        scanner.AssemblyContainingType<IMediator>();
        scanner.WithDefaultConventions();
        scanner.AddAllTypesOf(typeof(IRequestHandler<,>));
        scanner.AddAllTypesOf(typeof(IAsyncRequestHandler<,>));
        scanner.AddAllTypesOf(typeof(INotificationHandler<>));
        scanner.AddAllTypesOf(typeof(IAsyncNotificationHandler<>));
    });
});

I do have to manually wire up the open generics in this case.

Open generics: yes, explicitly

Multiple open generics: yes, explicitly

Generic contravariance: yes, implicitly

Unity

And now for the most annoying container I had to deal with. Unity doesn’t like one type registered with two implementations, so you have to do extra work to even be able to run the application with multiple handlers for a message. My Unity configuration is:

container.RegisterTypes(AllClasses.FromAssemblies(typeof(Ping).Assembly),
   WithMappings.FromAllInterfaces,
   GetName,
   GetLifetimeManager);

/* later down */

static bool IsNotificationHandler(Type type)
{
    return type.GetInterfaces().Any(x => x.IsGenericType && (x.GetGenericTypeDefinition() == typeof(INotificationHandler<>) || x.GetGenericTypeDefinition() == typeof(IAsyncNotificationHandler<>)));
}

static LifetimeManager GetLifetimeManager(Type type)
{
    return IsNotificationHandler(type) ? new ContainerControlledLifetimeManager() : null;
}

static string GetName(Type type)
{
    return IsNotificationHandler(type) ? "HandlerFor" + type.Name : string.Empty;
}

Yikes. Unity handles the very simple case of open generics, but that’s about it.

Open generics: yes, implicitly

Multiple open generics: yes, with user-built extension

Generic contravariance: derp

Windsor

The last container in this completely unnecessarily long list is Windsor. Windsor was a bit funny: it required a lot more configuration than the others, but it was configuration that was built in, just very wordy. My Windsor configuration is:

var container = new WindsorContainer();
container.Register(Classes.FromAssemblyContaining<IMediator>().Pick().WithServiceAllInterfaces());
container.Register(Classes.FromAssemblyContaining<Ping>().Pick().WithServiceAllInterfaces());
container.Kernel.AddHandlersFilter(new ContravariantFilter());

Similar to Ninject, the simple scenarios are built-in, but the more complex need a bit of Stack Overflow spelunking. The “ContravariantFilter” is very similar to the Ninject implementation, with the same limitations as well.

Open generics: yes, implicitly

Multiple open generics: yes, implicitly

Generic contravariance: yes, with user-built extension

Final score

Going in, I thought the containers would be closer in ability for features like these, which are pretty popular these days. Instead, they’re miles apart. I originally was going to use this post to complain that there are too many DI containers in the .NET space, but honestly, the feature sets and underlying models are so completely different that it would take quite a bit of effort to consolidate and combine projects.

What is pretty clear from my experience here is that Unity as a choice is probably a mistake.


Categories: Blogs