
Jimmy Bogard
Strong opinions, weakly held

NServiceBus 5.0 behaviors in action: routing slips

Thu, 10/02/2014 - 14:57

I’ve written in the past about how routing slips can provide a nice alternative to NServiceBus sagas, using a stateless, up-front approach. In NServiceBus 4.x, actually implementing them was quite clunky: I had to plug into two interfaces that didn’t really apply to routing slips, only because those were the important points in the pipeline for getting the correct behavior.

In NServiceBus 5, these behaviors are much easier to build thanks to the new behavior pipeline features. Behaviors in NServiceBus are similar to HttpHandlers or koa.js callbacks: they form a series of nested wrappers around inner behaviors, in a sort of Russian doll model. It’s an extremely popular model, and most modern web frameworks include some form of it (Web API filters, node.js middleware, FubuMVC behaviors, etc.)
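The Russian doll idea can be sketched outside NServiceBus entirely. This is a minimal, hypothetical model (not the NServiceBus API): each behavior receives a `next` delegate and decides what to run before and after it.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class PipelineDemo
{
    public static List<string> Run()
    {
        var log = new List<string>();

        // Each behavior wraps the next one, running code before and after it.
        Action<Action> outer = next => { log.Add("outer:before"); next(); log.Add("outer:after"); };
        Action<Action> inner = next => { log.Add("inner:before"); next(); log.Add("inner:after"); };
        Action handler = () => log.Add("handler");

        // Fold the behaviors around the terminal handler, innermost first.
        var pipeline = new[] { inner, outer }
            .Aggregate(handler, (next, behavior) => () => behavior(next));
        pipeline();
        return log;
    }

    public static void Main() => Console.WriteLine(string.Join(",", Run()));
}
```

Invoking the composed pipeline runs the outer behavior’s “before” code first and its “after” code last, exactly the nesting the frameworks above implement.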

Behaviors in NServiceBus are applied to two distinct contexts: incoming messages and outgoing messages. Each context is represented by a context object, which gives you direct access to information about the current message rather than forcing you to pull it in through tricks like dependency injection.

In converting the route supervisor in my routing slips implementation, I greatly simplified the whole thing, and got rid of quite a bit of cruft.

Creating the behavior

To create my behavior, I first implement the IBehavior interface for the context I’m interested in:

public class RouteSupervisor
    : IBehavior<IncomingContext> {
    
    public void Invoke(IncomingContext context, Action next) {
        next();
    }
}

Next, I need to fill in the behavior of my invocation. I need to detect if the current request has a routing slip, and if so, perform the operation of routing to the next step. I’ve already built a component to manage this logic, so I just need to add it as a dependency:

private readonly IRouter _router;

public RouteSupervisor(IRouter router)
{
    _router = router;
}

Then in my Invoke call:

public void Invoke(IncomingContext context, Action next)
{
    string routingSlipJson;

    if (context.IncomingLogicalMessage.Headers.TryGetValue(Router.RoutingSlipHeaderKey, out routingSlipJson))
    {
        var routingSlip = JsonConvert.DeserializeObject<RoutingSlip>(routingSlipJson);

        context.Set(routingSlip);

        next();

        _router.SendToNextStep(routingSlip);
    }
    else
    {
        next();
    }
}

I first pull the routing slip out of the headers, but this time I can just use the context to do so; NServiceBus manages everything related to handling a message in that context object.

If I don’t find the header for the routing slip, I simply call the next behavior. Otherwise, I deserialize the routing slip from JSON and set the value in the context, so that a handler can access the routing slip and attach additional contextual values.

Next, I call the next action (next()), and finally, I send the current message to the next step.

With my behavior created, I now need to register my step.

Registering the new behavior

Since I now have a pipeline of behaviors, I need to tell NServiceBus when to invoke mine. I do so by first creating a class that describes how to register this step:

public class Registration : RegisterStep
{
    public Registration()
        : base(
            "RoutingSlipBehavior", typeof (RouteSupervisor),
            "Unpacks routing slip and forwards message to next destination")
    {
        InsertBefore(WellKnownStep.LoadHandlers);
    }
}

I tell NServiceBus to insert this step before the well-known step of loading handlers. I (actually Andreas) picked this point in the pipeline because it lets me modify the services injected into my handlers. The last piece is configuring and turning on my behavior:

public static BusConfiguration RoutingSlips(this BusConfiguration configure)
{
    configure.RegisterComponents(cfg =>
    {
        cfg.ConfigureComponent<Router>(DependencyLifecycle.SingleInstance);
        cfg.ConfigureComponent(b => 
            b.Build<PipelineExecutor>()
                .CurrentContext
                .Get<RoutingSlip>(),
           DependencyLifecycle.InstancePerCall);
    });
    configure.Pipeline.Register<RouteSupervisor.Registration>();

    return configure;
}

I register the Router component, and then the current routing slip. The routing slip instance is pulled from the current context: the same value I inserted in the previous step.

Finally, I register the route supervisor into the pipeline. With the current routing slip registered as a component, handlers can access the routing slip and add attachments for subsequent steps:

public RoutingSlip RoutingSlip { get; set; }

public void Handle(SequentialProcess message)
{
    // Do other work

    RoutingSlip.Attachments["Foo"] = "Bar";
}

With the new pipeline behaviors in place, I was able to remove quite a few hacks to get routing slips to work. Building and registering this new behavior was simple and straightforward, a testament to the design benefits of a behavior pipeline.

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

The value proposition of Hypermedia

Tue, 09/23/2014 - 16:13

REST is a well-defined architectural style, and despite the term’s frequent misuse for general Web APIs, it can be a very powerful tool. One of the constraints of a REST architecture is HATEOAS, which describes the use of hypermedia as a means of navigating resources and manipulating state.

It’s not a particularly difficult concept to understand, but it’s quite a bit more difficult to choose and implement a hypermedia strategy. The obvious example of hypermedia is HTML, but even it has its limitations.

But first: when is REST, and in particular hypermedia, important?

For the vast majority of Web APIs, hypermedia is not only inappropriate, but complete overkill. Hypermedia, as part of a self-descriptive message, includes descriptions on:

  • Who I am
  • What you can do with me
  • How you can manipulate my state
  • What resources are related to me
  • How those resources are related to me
  • How to get to resources related to me

In a typical web application, the client (HTML + JavaScript + CSS) is developed and deployed at the same time as the server (HTTP endpoints). Because of this acceptable coupling, the client can “know” all the ways to navigate relationships, manipulate state and so on. There’s no downside to this coupling, since the entire app is built and deployed together, and the same application that serves the HTTP endpoints also serves up the client.

For clients whose logic and behavior are served by the same endpoint as the original server, there’s little to no value in hypermedia. In fact, it adds a lot of work, both in the server API, where your messages now need to be self-descriptive, and in the client, where you need to build behavior around interpreting self-descriptive messages.

Disjointed client/server deployments

Where hypermedia really shines is in cases where clients and servers are developed and deployed separately. If client releases aren’t in line with server releases, we need to decouple our communication. One option is to simply build a well-defined protocol and never break it.

That works well in cases where you can define your API very well and commit to not breaking future clients. This is the approach the Azure Web API takes. It also works well when your API is not meant to be immediately consumed by human interaction: machines are rather lousy at understanding and following links, relations and so on. Search crawlers can follow links well, but when it comes to manipulating state through forms, they don’t work so well (or work too well, and we build CAPTCHAs).

No, hypermedia shines in cases where the API is built for immediate human interaction, and clients are built and served completely decoupled from the server. Consider a couple of cases: deployment to an app store can take days to weeks, and even then you’re not guaranteed to have all your clients on the same app version. Or perhaps it’s the actual API server that’s deployed to your customers, and you consume their APIs at different versions.

These are the cases where hypermedia shines. But to do so, you need to build generic components on the client app to interpret self-describing messages. Consider Collection+JSON:

{ "collection" :
  {
    "version" : "1.0",
    "href" : "http://example.org/friends/",
    
    "links" : [
      {"rel" : "feed", "href" : "http://example.org/friends/rss"},
      {"rel" : "queries", "href" : "http://example.org/friends/?queries"},
      {"rel" : "template", "href" : "http://example.org/friends/?template"}
    ],
    
    "items" : [
      {
        "href" : "http://example.org/friends/jdoe",
        "data" : [
          {"name" : "full-name", "value" : "J. Doe", "prompt" : "Full Name"},
          {"name" : "email", "value" : "jdoe@example.org", "prompt" : "Email"}
        ],
        "links" : [
          {"rel" : "blog", "href" : "http://examples.org/blogs/jdoe", "prompt" : "Blog"},
          {"rel" : "avatar", "href" : "http://examples.org/images/jdoe", "prompt" : "Avatar", "render" : "image"}
        ]
      }
    ]
  } 
}

Interpreting this, I can build a list of links for this item, and build the text output and labels. Want to change the label shown to the end user? Just change the “prompt” value, and your text label is changed. Want to support internationalization? Easy, just handle this on the server side. Want to provide additional links? Just add new links in the “links” array, and your client can automatically build them out.
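As a sketch of what that interpretation might look like, a generic client component can pull prompts and hrefs out of the “links” array without hard-coding anything about friends or blogs. This is hypothetical client code using System.Text.Json rather than any particular hypermedia library; `LinkReader` is invented for illustration.

```csharp
using System;
using System.Linq;
using System.Text.Json;

public static class LinkReader
{
    // Reads "prompt -> href" pairs from the links array of the first item
    // in a Collection+JSON document.
    public static string[] ReadLinks(string json)
    {
        using var doc = JsonDocument.Parse(json);
        return doc.RootElement
            .GetProperty("collection").GetProperty("items")[0]
            .GetProperty("links").EnumerateArray()
            .Select(l => l.GetProperty("prompt").GetString() + " -> " + l.GetProperty("href").GetString())
            .ToArray();
    }

    public static void Main()
    {
        // Fragment of the Collection+JSON example above.
        const string json = @"{ ""collection"": { ""items"": [ { ""links"": [
            { ""rel"": ""blog"",   ""href"": ""http://examples.org/blogs/jdoe"",  ""prompt"": ""Blog"" },
            { ""rel"": ""avatar"", ""href"": ""http://examples.org/images/jdoe"", ""prompt"": ""Avatar"" }
        ] } ] } }";

        foreach (var link in ReadLinks(json))
            Console.WriteLine(link);
    }
}
```

Because the labels and URLs come from the message, a server-side change to a prompt or href flows to every client with no redeploy.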

In one recent application, we built a client API that automatically followed first-level item collection links and displayed the results as a “master-detail” view. A newer version of the API that added a new child collection didn’t require any change to the client – the new table automatically showed up because we made the generic client controls hypermedia-aware.

This did require an investment in our clients, but it was a small price to pay to allow clients to react to the server API, instead of having their implementation coupled to an understanding of the API that could be out-of-date, or just wrong.

The rich hypermedia formats available are quite numerous now.

The real challenge is building clients that can interpret these formats. In my experience, we don’t really need a generic solution for interaction, but rather individual components (links, forms, etc.) The client still needs to have some “understanding” of the server, but these can instead be in the form of metadata rather than hard-coded understanding of raw JSON.

Ultimately, hypermedia matters in far fewer places than are today incorrectly labeled with a “RESTful API”, but it is not entirely vaporware or astronaut architecture. It’s somewhere in the middle, and like many nascent architectures (SOA, Microservices, Reactive), it will take a few iterations to nail down the appropriate scenarios, patterns and practices.



Container Usage Guidelines

Wed, 09/17/2014 - 21:25

Over the years, I’ve used and abused IoC containers. While the different tools have come and gone, I’ve settled on a set of guidelines on using containers effectively. As a big fan of the Framework Design Guidelines book and its style of “DO/CONSIDER/AVOID/DON’T”, I tried to capture what has made me successful with containers over the years in a series of guidelines below.

Container configuration

Container configuration typically occurs once at the beginning of the lifecycle of an AppDomain: create a container instance as the composition root of the application, and configure any framework-specific service locators. StructureMap combines scanning, for convention-based registration, with Registries, for component-specific configuration.

X AVOID scanning an assembly more than once.

Scanning is somewhat expensive, as scanning involves passing each type in an assembly through each convention. A typical use of scanning is to target one or more assemblies, find all custom Registries, and apply conventions. Conventions include generics rules, matching common naming conventions (IFoo to Foo) and applying custom conventions. A typical root configuration would be:

var container = new Container(cfg =>
    cfg.Scan(scan => {
        scan.AssemblyContainingType<IMediator>();
        scan.AssemblyContainingType<UiRegistry>();
 
        scan.AddAllTypesOf(typeof(IValidator<>));
        scan.AddAllTypesOf(typeof(IRequestHandler<,>));
 
        scan.WithDefaultConventions();
        scan.LookForRegistries();
    }));

Component-specific configuration is then separated out into individual Registry objects, instead of being mixed with scanning. Although it is possible to perform both scanning and component configuration in one step, separating component-specific registration into individual registries provides a better separation of conventions and configuration.

√ DO separate configuration concerning different components or concerns into different Registry classes.

Individual Registry classes contain component-specific registration. Prefer smaller, targeted Registries, organized around function, scope, component etc. Organizing all container configuration for a single 3rd-party component into one Registry makes it easy to view and modify all configuration for that component:

public class NHibernateRegistry : Registry {
    public NHibernateRegistry() {
        For<Configuration>().Singleton().Use(c => new ConfigurationFactory().CreateConfiguration());
        For<ISessionFactory>().Singleton().Use(c => c.GetInstance<Configuration>().BuildSessionFactory());
 
        For<ISession>().Use(c => {
            var sessionFactory = c.GetInstance<ISessionFactory>();
            var orgInterceptor = new OrganizationInterceptor(c.GetInstance<IUserContext>());
            return sessionFactory.OpenSession(orgInterceptor);
        });
    }
}

X DO NOT use the static API for configuring or resolving.

Although StructureMap exposes a static API in the ObjectFactory class, it is considered obsolete. If a static instance of a composition root is needed for 3rd-party libraries, create a static instance of the composition root Container in application code.

√ DO use the instance-based API for configuring.

Instead of using ObjectFactory.Initialize and exposing ObjectFactory.Instance, create a Container instance directly. The consuming application is responsible for determining the timing of container creation and configuration; exposing these as explicit functions allows the consuming runtime (a web application vs. integration tests, for example) to decide.

X DO NOT create a separate project solely for dependency resolution and configuration.

Container configuration belongs in applications requiring those dependencies. Avoid convoluted project reference hierarchies (i.e., a “DependencyResolution” project). Instead, organize container configuration inside the projects needing them, and defer additional project creation until multiple deployed applications need shared, common configuration.

√ DO include a Registry in each assembly that needs dependencies configured.

In the case where multiple deployed applications share a common project, include inside that project container configuration for components specific to that project. If the shared project requires convention scanning, then a single Registry local to that project should perform the scanning of itself and any dependent assemblies.

X AVOID loading assemblies by name to configure.

Scanning allows adding assemblies by name, e.g. scan.Assembly("MyAssembly"). Since assembly names can change, reference a specific type in that assembly instead.

Lifecycle configuration

Most containers allow defining the lifecycle of components, and StructureMap is no exception. Lifecycles determine how StructureMap manages instances of components. By default, instances for a single request are shared. Ideally, only Singleton instances and per-request instances should be needed. There are cases where a custom lifecycle is necessary, to scope a component to a given HTTP request (HttpContext).

√ DO use the container to configure component lifecycle.

Avoid creating custom factories or builder methods for component lifecycles. Your custom factory for building a singleton component is probably broken, and lifecycles in containers have undergone extensive testing and usage over many years. Additionally, building factories solely for controlling lifecycles leaks implementation and environment concerns to services consuming lifecycle-controlled components. In the case where instantiation needs to be deferred or lifecycle needs to be explicitly managed (for example, instantiating in a using block), depending on a Func<IService> or an abstract factory is appropriate.

√ CONSIDER using child containers for per-request instances instead of HttpContext or similar scopes.

Child/nested containers inherit configuration from a root container, and many modern application frameworks include the concept of creating scopes for requests. Web API in particular creates a dependency scope for each request. Instead of using a lifecycle, individual components can be configured for an individual instance of a child container:

public IDependencyScope BeginScope() {
    IContainer child = this.Container.GetNestedContainer();
    var session = new ApiContext(child.GetInstance<IDomainEventDispatcher>());
    var resolver = new StructureMapDependencyResolver(child);
    var provider = new ServiceLocatorProvider(() => resolver);
    child.Configure(cfg =>
    {
        cfg.For<DbContext>().Use(session);
        cfg.For<ApiContext>().Use(session);
        cfg.For<ServiceLocatorProvider>().Use(provider);
    });
             
    return resolver;
}

Since components configured for a child container are transient for that container, child containers provide a mechanism to create explicit lifecycle scopes configured for that one child container instance. Common applications include creating child containers per integration test, MVVM command handler, web request etc.

√ DO dispose of child containers.

Containers have a Dispose method, so if the underlying service locator extensions do not dispose of the child container directly, dispose of it yourself. When disposed, a container calls Dispose on any non-singleton component that implements IDisposable, ensuring that any resources held by components are released properly.

Component design and naming

Much of the negativity around DI containers arises from their encapsulation of building object graphs. A large, complicated object graph is resolved with a single line of code, hiding potentially dozens of disparate underlying services. Common to those new to Domain-Driven Design is the habit of creating interfaces for every small behavior, leading to overly complex designs. These design smells are easy to spot without a container, since building complex object graphs by hand is tedious. DI containers hide this pain, so it is up to the developer to recognize these design smells up front, or avoid them entirely.

X AVOID deeply nested object graphs.

Large object graphs are difficult to understand, but easy to create with DI containers. Instead of a strict top-down design, identify cross-cutting concerns and build generic abstractions around them. Procedural code is perfectly acceptable, and many design patterns and refactoring techniques exist to address complicated procedural code. The behavioral design patterns, combined with refactorings for long or complicated code, can be especially helpful. Starting with the Transaction Script pattern keeps the number of structures low until the code exhibits enough design smells to warrant refactoring.

√ CONSIDER building generic abstractions around concepts, such as IRequestHandler<T>, IValidator<T>.

When designs do become unwieldy, breaking down components into multiple services often leads to service-itis, where a system contains numerous services, each used in only one context or execution path. Instead, behavioral patterns such as the Mediator, Command, Chain of Responsibility and Strategy are especially helpful in creating abstractions around concepts. Common concepts include:

  • Queries
  • Commands
  • Validators
  • Notifications
  • Model binders
  • Filters
  • Search providers
  • PDF document generators
  • REST document readers/writers

Each of these patterns begins with a common interface:

public interface IRequestHandler<in TRequest, out TResponse>
    where TRequest : IRequest<TResponse> {
    TResponse Handle(TRequest request);
}
 
public interface IValidator<in T> {
    ValidationResult Validate(T instance);
}
 
public interface ISearcher {
    bool IsMatch(string query);
    IEnumerable<Person> Search(string query);
}

Registration for these components involves adding all implementations of an interface, and code using these components requests an instance based on a generic parameter, or all instances in the case of the chain of responsibility pattern.
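A minimal, container-free sketch of that consumption pattern follows. The searcher implementations and SearchService are made up for illustration; in the container-based version, the ISearcher[] array would be constructor-injected from all registered implementations.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface ISearcher
{
    bool IsMatch(string query);
    IEnumerable<string> Search(string query);
}

// Hypothetical implementations: one handles numeric queries, one is a fallback.
public class NumberSearcher : ISearcher
{
    public bool IsMatch(string query) => query.All(char.IsDigit);
    public IEnumerable<string> Search(string query) => new[] { "id:" + query };
}

public class NameSearcher : ISearcher
{
    public bool IsMatch(string query) => true; // catch-all
    public IEnumerable<string> Search(string query) => new[] { "name:" + query };
}

public class SearchService
{
    private readonly ISearcher[] _searchers;

    // The container would supply every registered ISearcher here.
    public SearchService(params ISearcher[] searchers) => _searchers = searchers;

    // Chain of responsibility: delegate to the first searcher that matches.
    public string SearchOne(string query) =>
        _searchers.First(s => s.IsMatch(query)).Search(query).Single();
}

public static class SearchDemo
{
    public static void Main()
    {
        var service = new SearchService(new NumberSearcher(), new NameSearcher());
        Console.WriteLine(service.SearchOne("42"));
        Console.WriteLine(service.SearchOne("Jane"));
    }
}
```

Adding a new search strategy then means adding one class; no consumer changes, and the container picks it up by convention.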

One exception to this rule is for third-party components and external, volatile dependencies.

√ CONSIDER encapsulating 3rd-party libraries behind adapters or facades.

While using a 3rd-party dependency does not necessitate building an abstraction for that component, if the component is difficult or impossible to fake/mock in a test, then it is appropriate to create a facade around that component. File system, web services, email, queues and anything else that touches the file system or network are prime targets for abstraction.

The database layer is a little more subtle, as requests to the database often need to be optimized in isolation from any other request. Switching database/ORM strategies is fairly straightforward, since most ORMs use a common language already (LINQ) but have subtle differences when it comes to optimizing calls. Large projects can switch between major ORMs with relative ease, and any abstraction would only limit the use of each ORM to the least common denominator.

X DO NOT create interfaces for every service.

Another common misconception of SOLID design is that every component deserves an interface. DI containers can resolve concrete types without an issue, so there is no technical limitation to depending directly on a concrete type. In the book Growing Object-Oriented Software, Guided by Tests, these components are referred to as Peers, and in Hexagonal Architecture terms, interfaces are reserved for Ports.

√ DO depend on concrete types when those dependencies are in the same logical layer/tier.

A side effect of depending directly on concrete types is that it becomes very difficult to over-specify tests. Interfaces are appropriate when there is truly an abstraction to a concept, but if there is no abstraction, no interface is needed.

X AVOID implementation class names that are just the interface name without the “I”.

StructureMap’s default conventions do match IFoo up with Foo, and this can be a convenient default behavior. But when an implementation’s name is the same as its interface minus the “I”, that is a symptom that you are arbitrarily creating an interface for every service, when resolving the concrete service type would be sufficient. In other words, the mere ability to resolve a service type by an interface is not sufficient justification for introducing an interface.

√ DO name implementation classes based on details of the implementation (AspNetUserContext : IUserContext).

An easy way to detect excessive abstraction is when class names are just the interface name without the “I” prefix. An implementation of an interface should describe the implementation. For concept-based interfaces, class names describe the representation of the concept (ChangeNameValidator, NameSearcher etc.); environment/context-specific implementations are named after that context (WebApiUserContext : IUserContext).

Dynamic resolution

While most component resolution occurs at the very top level of a request (controller/presenter), there are occasions when dynamic resolution of a component is necessary. For example, model binding in MVC occurs after a controller is created, making it slightly more difficult to know at controller construction time what the model type is, unless it is assumed using the action parameters. For many extension points in MVC, it is impossible to avoid service location.

X AVOID using the container for service location directly.

Ideally, component resolution occurs once in a request, but in the cases where this is not possible, use a framework’s built-in resolution capabilities. In Web API for example, dynamically resolved dependencies should be resolved from the current dependency scope:

var validationProvider = actionContext
    .Request
    .GetDependencyScope()
    .GetService(typeof(IValidatorProvider));

Web API creates a child container per request and caches this scoped container within the request message. If the framework does not provide a scoped instance, store the current container in an appropriately scoped object, such as HttpContext.Items for web requests. Occasionally, you might need to depend on a service but need to explicitly decouple or control its lifecycle. In those cases, containers support depending directly on a Func.

√ CONSIDER depending on a Func<IService> for late-bound services.

For cases where known types need to be resolved dynamically, instead of trying to build special caching/resolution services, you can instead depend on a constructor function in the form of a Func. This separates wiring of dependencies from instantiation, allowing client code to have explicit construction without depending directly on a container.

private readonly Func<IEmailService> _emailServiceProvider;

public EmailController(Func<IEmailService> emailServiceProvider) {
    _emailServiceProvider = emailServiceProvider;
}
 
public ActionResult SendEmail(string to, string subject, string body) {
    using (var emailService = _emailServiceProvider()) {
        emailService.Send(to, subject, body);
    }
}

In cases where this becomes complicated, or reflection code is needed, a factory method or delegate type explicitly captures this intent.

√ DO encapsulate container usage with factory classes when invoking a container is required.

The Patterns and Practices Common Service Locator defines a delegate type representing the creation of a service locator instance:

public delegate IServiceLocator ServiceLocatorProvider();

For code needing dynamic instantiation of a service locator, configuration code creates a dependency definition for this delegate type:

public IDependencyScope BeginScope()
{
    IContainer child = this.Container.GetNestedContainer();
    var resolver = new StructureMapWebApiDependencyResolver(child);
    var provider = new ServiceLocatorProvider(() => resolver);
    child.Configure(cfg =>
    {
        cfg.For<ServiceLocatorProvider>().Use(provider);
    });
    return new StructureMapWebApiDependencyResolver(child);
}

This pattern is especially useful if an outer dependency has a longer configured lifecycle (static/singleton) but you need a window of shorter lifecycles. For simple instances of reflection-based component resolution, some containers include automatic facilities for creating factories.

√ CONSIDER using auto-factory capabilities of the container, if available.

Auto-factories in StructureMap are available as a separate package, and allow you to create an interface with an automatic implementation:

public interface IPluginFactory {
    IList<IPlugin> GetPlugins();
}
 
For<IPluginFactory>().CreateFactory();

The AutoFactories feature will dynamically create an implementation that defers to the container for instantiating the list of plugins.



Tackling cross-cutting concerns with a mediator pipeline

Tue, 09/09/2014 - 18:17

Originally posted on the Skills Matter website

In most of the projects I’ve worked on in the last several years, I’ve put in place a mediator to manage the delivery of messages to handlers. I’ve covered the motivation behind such a pattern in the past, where it works well and where it doesn’t.

One of the advantages behind the mediator pattern is that it allows the application code to define a pipeline of activities for requests, as opposed to embedding this pipeline in other frameworks such as Rails, node.js, ASP.NET Web API and so on. These frameworks have many other concerns going on besides the very simple “one model in, one model out” pattern that so greatly simplifies conceptualizing the system and realizing more powerful patterns.

As a review, a mediator encapsulates how a series of objects interact. Our mediator looks like:

public interface IMediator
{
    TResponse Send<TResponse>(IRequest<TResponse> request);
    Task<TResponse> SendAsync<TResponse>(IAsyncRequest<TResponse> request);
    void Publish<TNotification>(TNotification notification) where TNotification : INotification;
    Task PublishAsync<TNotification>(TNotification notification) where TNotification : IAsyncNotification;
}

This is from MediatR, a simple library I created (borrowing heavily from others) that enables basic message passing. It facilitates loose coupling in how a series of objects interact. And like many OO patterns, it exists because of missing features in the language; in functional languages, passing messages to handlers is accomplished with features like pattern matching.

Our handler interface represents the ability to take an input, perform work, and return some output:

public interface IRequestHandler<in TRequest, out TResponse>
    where TRequest : IRequest<TResponse>
{
    TResponse Handle(TRequest message);
}

With this simple pattern, we encapsulate the work being done to transform input to output in a single method. Any complexities around this work are encapsulated, and any refactorings are isolated to this one method. As systems become more complex, isolating side-effects becomes critical for maintaining overall speed of delivery and minimizing risk.
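A hypothetical handler pair under this interface might look like the following. CreateCustomer, CustomerCreated and the handler body are invented for illustration; the interfaces mirror the ones shown above.

```csharp
public interface IRequest<TResponse> { }

public interface IRequestHandler<in TRequest, out TResponse>
    where TRequest : IRequest<TResponse>
{
    TResponse Handle(TRequest message);
}

// One model in...
public class CreateCustomer : IRequest<CustomerCreated>
{
    public string Name { get; set; }
}

// ...one model out.
public class CustomerCreated
{
    public string Name { get; set; }
}

public class CreateCustomerHandler : IRequestHandler<CreateCustomer, CustomerCreated>
{
    public CustomerCreated Handle(CreateCustomer message)
    {
        // ...persist the customer, raise domain events, etc....
        return new CustomerCreated { Name = message.Name };
    }
}

public static class HandlerDemo
{
    public static void Main()
    {
        var response = new CreateCustomerHandler().Handle(new CreateCustomer { Name = "Jane" });
        System.Console.WriteLine(response.Name);
    }
}
```

Everything the use case does lives behind that one Handle method, which is what makes the decorators below possible.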

We still have the need for cross-cutting concerns, and we’d rather not pollute our handlers with this work.

These surrounding behaviors become implementations of the decorator pattern. Since we have a uniform interface of inputs and outputs, building decorators around cross-cutting concerns becomes trivial.

Pre- and post-request handlers

One common request I see is to do work on the requests coming in, or post-process the request on the way out. We can define some interfaces around this:

public interface IPreRequestHandler<in TRequest> {
    void Handle(TRequest request);
}

public interface IPostRequestHandler<in TRequest, in TResponse> {
    void Handle(TRequest request, TResponse response);
}

With this, we can modify inputs before they arrive to the main handler or modify responses on the way out.

In order to execute these handlers, we just need to define a decorator around our main handler:

public class MediatorPipeline<TRequest, TResponse> 
    : IRequestHandler<TRequest, TResponse> 
    where TRequest : IRequest<TResponse> {

    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IPreRequestHandler<TRequest>[] _preRequestHandlers;
    private readonly IPostRequestHandler<TRequest, TResponse>[] _postRequestHandlers;

    public MediatorPipeline(
        IRequestHandler<TRequest, TResponse> inner,
        IPreRequestHandler<TRequest>[] preRequestHandlers,
        IPostRequestHandler<TRequest, TResponse>[] postRequestHandlers
        ) {
        _inner = inner;
        _preRequestHandlers = preRequestHandlers;
        _postRequestHandlers = postRequestHandlers;
    }

    public TResponse Handle(TRequest message) {

        foreach (var preRequestHandler in _preRequestHandlers) {
            preRequestHandler.Handle(message);
        }

        var result = _inner.Handle(message);

        foreach (var postRequestHandler in _postRequestHandlers) {
            postRequestHandler.Handle(message, result);
        }

        return result;
    }
}

And if we’re using a modern IoC container (StructureMap in this case), registering our decorator is as simple as:

cfg.For(typeof (IRequestHandler<,>))
   .DecorateAllWith(typeof (MediatorPipeline<,>));

When our mediator builds out the handler, it delegates to our container to do so. Our container builds the inner handler, then surrounds the handler with additional work. If this seems familiar, many modern web frameworks like koa include a similar construct using continuation passing to define a pipeline for requests. However, since our pipeline is defined in our application layer, we don’t have to deal with things like HTTP headers, content negotiation and so on.

Validation

Most validation frameworks I use validate against a type, whether it’s validation with attributes or delegated validation to a handler. With Fluent Validation, we get a very simple interface representing validating an input:

public interface IValidator<in T> {
    ValidationResult Validate(T instance);
}

Fluent Validation defines base classes for validators for a variety of scenarios:

public class CreateCustomerValidator: AbstractValidator<CreateCustomer> {
  public CreateCustomerValidator() {
    RuleFor(customer => customer.Surname).NotEmpty();
    RuleFor(customer => customer.Forename).NotEmpty().WithMessage("Please specify a first name");
    RuleFor(customer => customer.Discount).NotEqual(0).When(customer => customer.HasDiscount);
    RuleFor(customer => customer.Address).Length(20, 250);
    RuleFor(customer => customer.Postcode).Must(BeAValidPostcode).WithMessage("Please specify a valid postcode");
  }

  private bool BeAValidPostcode(string postcode) {
    // custom postcode validating logic goes here
    return true; // placeholder so the snippet compiles
  }
}

We can then plug our validation to the pipeline as occurring before the main work to be done:

public class ValidatorHandler<TRequest, TResponse>
    : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse> {

    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IValidator<TRequest>[] _validators;
    
    public ValidatorHandler(IRequestHandler<TRequest, TResponse> inner,
        IValidator<TRequest>[] validators) {
        _inner = inner;
        _validators = validators;
    }

    public TResponse Handle(TRequest request) {
        var context = new ValidationContext(request);

        var failures = _validators
            .Select(v => v.Validate(context))
            .SelectMany(result => result.Errors)
            .Where(f => f != null)
            .ToList();

        if (failures.Any())
            throw new ValidationException(failures);

        return _inner.Handle(request);
    }
}

In our validation handler, we perform validation with Fluent Validation by loading up all of the matching validators. Because C# supports generic contravariance, we can rely on the container to inject all validators for all matching types (base classes and interfaces). Having validators around messages means we can move validation out of our entities and into contextual actions driven by a task-oriented UI.
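To see that contravariance at work, consider a sketch (the `Command` base type and simplified interfaces here are hypothetical, standing in for Fluent Validation's real ones): a validator written once against a base message type also satisfies the validator interface for every derived message, so the container can hand it to the pipeline for all of them.

```csharp
using System;

// Simplified stand-ins for Fluent Validation's types.
public class ValidationResult { }

public interface IValidator<in T> {
    ValidationResult Validate(T instance);
}

// Hypothetical base/derived messages.
public abstract class Command {
    public Guid UserId { get; set; }
}

public class CreateCustomer : Command {
    public string Name { get; set; }
}

// A validator declared once against the base type...
public class CommandValidator : IValidator<Command> {
    public ValidationResult Validate(Command instance) {
        // e.g. check instance.UserId is not empty
        return new ValidationResult();
    }
}

// ...is also an IValidator<CreateCustomer>, thanks to the "in T" variance:
// IValidator<CreateCustomer> validator = new CommandValidator();
```

This is how one base-class validator can enforce a rule (say, a required `UserId`) across every command in the system without being registered per message type.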

Framework-less pipeline

We can now push a number of concerns into our application code instead of embedded as framework extensions. This includes things like:

  • Validation
  • Pre/post processing
  • Authorization
  • Logging
  • Auditing
  • Event dispatching
  • Notifications
  • Unit of work/transactions

Pretty much anything you’d use a filter for in ASP.NET or Rails that’s more concerned with application-level behavior than with framework/transport-specific concerns would work as a decorator around our handlers.
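For concreteness, here is what one of those decorators might look like. This `LoggingHandler` is an illustrative sketch rather than a shipped type, and the contracts are repeated so it stands alone; the same shape works for authorization, auditing, or a unit of work:

```csharp
using System;

// Contracts repeated here so the snippet stands alone.
public interface IRequest<TResponse> { }

public interface IRequestHandler<in TRequest, out TResponse>
    where TRequest : IRequest<TResponse> {
    TResponse Handle(TRequest message);
}

// A logging decorator: same shape as MediatorPipeline, but for one concern.
public class LoggingHandler<TRequest, TResponse>
    : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse> {

    private readonly IRequestHandler<TRequest, TResponse> _inner;

    public LoggingHandler(IRequestHandler<TRequest, TResponse> inner) {
        _inner = inner;
    }

    public TResponse Handle(TRequest message) {
        // Log before and after delegating to the wrapped handler.
        Console.WriteLine("Handling {0}", typeof(TRequest).Name);
        var response = _inner.Handle(message);
        Console.WriteLine("Handled {0}", typeof(TRequest).Name);
        return response;
    }
}
```

Registered as a decorator with the container, this wraps every handler in the system without any handler knowing about logging.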

Once we have this approach set up, we can define our application pipeline as a series of decorators around handlers:

var handlerType = cfg.For(typeof (IRequestHandler<,>));

handlerType.DecorateAllWith(typeof (LoggingHandler<,>));
handlerType.DecorateAllWith(typeof (AuthorizationHandler<,>));
handlerType.DecorateAllWith(typeof (ValidatorHandler<,>));
handlerType.DecorateAllWith(typeof (PipelineHandler<,>));

Since this code is not dependent on frameworks or HTTP requests, it’s easy for us to build up a request, send it through the pipeline, and verify a response:

var handler = container.GetInstance<IRequestHandler<CreateCustomer, CreateCustomerResult>>();

var request = new CreateCustomer {
    Name = "Bob"
};

var response = handler.Handle(request);

response.CreatedCustomer.Name.ShouldBe(request.Name);

Or, if we just want one handler, we can test that implementation in isolation; it’s really up to us.

By focusing on a uniform interface of one model in, one model out, we can define a series of patterns on top of that single interface for a variety of cross-cutting concerns. Our behaviors become less coupled on a framework and more focused on the real work being done.

All of this would be a bit easier if the underlying language supported this behavior directly. Since many don’t, we rely instead on translating these functional paradigms into OO patterns, with IoC containers providing the glue.

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

Announcing Austin Code Camp 2014

Tue, 08/26/2014 - 22:35

It’s that time of year again to hold our annual Austin Code Camp, hosted by the Austin .NET User Group:

Austin 2014 Code Camp

We’re at a new location this year at New Horizons Computer Learning Center Austin, as our previous host can no longer host events. Big thanks to St. Edwards PEC for hosting us in the past!

Register for Austin Code Camp

We’ve got links on the site for schedule, registration, sponsorship, location, speaker submissions and more.

Hope to see you there!

And because I know I’m going to get emails…

Charging for Austin Code Camp? Get the pitchforks and torches!

In the past, Austin Code Camp has been a free event with no effective cap on registrations. We could do this because the PEC had a ridiculous amount of space and could accommodate hundreds of people. With free registration, we would see a 50% drop-off between registrations and actual attendance. Not very fun to plan food around that kind of uncertainty!

This year we have a good amount of space, but not infinite space. We can accommodate the typical number of people that come to our Code Camp (150-175), but for safety reasons we can’t leave registrations uncapped as we’ve done in the past.

Because of this, we’re charging a small fee to reserve a spot. It’s not even enough to cover lunch or a t-shirt or anything, but it is a small enough fee to ensure that we’re fair to those that truly want to come.

Don’t worry though, if you can’t afford the fee, send me an email, and we can work it out.
