
Jimmy Bogard
Strong opinions, weakly held

CQRS/MediatR implementation patterns

Thu, 10/27/2016 - 18:36

Early on in the CQRS/ES days, I saw a lot of questions on modeling problems with event sourcing. Specifically, trying to fit every square modeling problem into the round hole of event sourcing. This isn’t anything against event sourcing, but more that I see teams try to apply a single modeling and usage strategy across the board for their entire application.

Usually, these questions were answered a little derisively: “you shouldn’t use event sourcing if your app is a simple CRUD app”. But that belied the truth – no app I’ve worked with is JUST a DDD app, or JUST a CRUD app, or JUST an event sourcing app. There are pockets of complexity of varying degrees along varying axes. Some areas have query complexity, some have modeling complexity, some have data complexity, some have behavior complexity, and so on. We try to choose a single modeling strategy for the entire application, and it doesn’t work. When teams realize this, I typically see people break things out into bounded contexts or microservices:

image

With this approach, you break your system into individual bounded contexts or microservices, based on the need to choose a single modeling strategy for the entire context/app.

This is completely unnecessary, and counter-productive!

A major aspect of CQRS and MediatR is modeling your application into a series of requests and responses. Commands and queries make up the requests, and results and data are the responses. Just to review, MediatR provides a single interface to send requests to, and routes those requests to in-process handlers. It removes the need for a myriad of service/repository objects for single-purpose request handlers (F# people model these just as functions).
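To make those shapes concrete, here is a sketch with illustrative names (not MediatR's exact interfaces, which vary by version):

```csharp
// Illustrative shapes only; MediatR's actual interfaces differ slightly by version.
public interface IRequest<TResponse> { }

public interface IRequestHandler<in TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    TResponse Handle(TRequest request);
}

// A query is just a request DTO mapped to a user action...
public class GetOrderDetails : IRequest<OrderDetailsModel>
{
    public int OrderId { get; set; }
}

public class OrderDetailsModel
{
    public int Id { get; set; }
    public string Status { get; set; }
}

// ...and one single-purpose handler replaces a service/repository pair.
public class GetOrderDetailsHandler : IRequestHandler<GetOrderDetails, OrderDetailsModel>
{
    public OrderDetailsModel Handle(GetOrderDetails request)
    {
        // Real handlers query a database; hard-coded here to stay self-contained.
        return new OrderDetailsModel { Id = request.OrderId, Status = "Approved" };
    }
}
```

The mediator's job is simply to route a `GetOrderDetails` instance to the one handler closed over that request type.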

Breaking down our handlers

Usage of MediatR with CQRS is straightforward. You build distinct request classes for every request in your system (these are almost always mapped to user actions), and build a distinct handler for each:

image

Each request and response is distinct, and I generally discourage reuse since my requests route to front-end activities. If the front-end activities are reused (e.g., an approve button on both the order details page and the orders list), then I can reuse the requests. Otherwise, I don’t reuse.

Since I’ve built isolation between individual requests and responses, I can choose different patterns based on each request:

image

Each request handler can determine the appropriate strategy based on *that request*, isolated from decisions in other handlers. I avoid abstractions that stretch across layers, like repositories and services, as these tend to lock me in to a single strategy for the entire application.
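For instance (a contrived, in-memory sketch; real handlers would hit a database), one handler can drive behavior through a small domain model while a sibling handler stays a plain query or projection:

```csharp
using System.Collections.Generic;
using System.Linq;

public class ApproveOrder { public int OrderId { get; set; } }
public class OrderSummaryQuery { }

public class Order
{
    public int Id { get; set; }
    public string Status { get; private set; }

    public Order() { Status = "Pending"; }

    // Rich domain behavior; invariant checks would live here.
    public void Approve() { Status = "Approved"; }
}

// Handler A drives behavior through the domain model...
public class ApproveOrderHandler
{
    private readonly IDictionary<int, Order> _orders;

    public ApproveOrderHandler(IDictionary<int, Order> orders) { _orders = orders; }

    public void Handle(ApproveOrder request)
    {
        _orders[request.OrderId].Approve();
    }
}

// ...while handler B is a plain query/projection. In a real app this might be
// Dapper or raw SQL against a read model instead of in-memory LINQ.
public class OrderSummaryHandler
{
    private readonly IDictionary<int, Order> _orders;

    public OrderSummaryHandler(IDictionary<int, Order> orders) { _orders = orders; }

    public int Handle(OrderSummaryQuery request)
    {
        return _orders.Values.Count(o => o.Status == "Approved");
    }
}
```

Neither handler knows or cares which strategy the other chose.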

In a single application, your handlers can execute against a rich domain model, straight SQL, a document store, an event stream, or anything else that fits the request at hand.

It’s entirely up to you! From the application’s view, everything is still modeled in terms of requests and responses:

image

The application simply doesn’t care about the implementation details of a handler – nor the modeling that went into whatever generated the response. It only cares about the shape of the request and the shape (and implications and guarantees of behavior) of the response.

Now obviously there is some understanding of the behavior of the handler – we expect its side effects, its direct or indirect outputs, to function correctly. But how the handler got there is immaterial. That’s how we get to a design that truly focuses on behaviors and not implementation details. Our final picture looks a bit more reasonable:

image

Instead of forcing ourselves to rely on a single pattern across the entire application, we choose the right approach for the context.

Keeping it honest

One last note – it’s easy in this sort of system to devolve into ugly handlers:

image

Driving all our requests through a single mediator pinch point doesn’t mean we absolve ourselves of the responsibility of thinking about our modeling approach. We shouldn’t just pick transaction script for every handler just because it’s easy. We still need that “Refactor” step in TDD, so it’s important to think about our model before we write our handler and pay close attention to code smells after we write it.

Listen to the code in the handler – if you’ve chosen a bad approach, refactor! You’ve got a test that verifies the behavior from the outermost shell – request in, response out – so you have an implementation-agnostic test providing a safety net for refactoring. If there’s too much going on in the handler, push it down into the domain. If it’s better served with a different model altogether, refactor in that direction. If the query is gnarly and would be better expressed in SQL, rewrite it!

Like any architecture, one built on CQRS and MediatR can be easy to abuse. No architecture prevents bad design. We’ll never escape the need for pull requests and peer reviews and just standard refactoring techniques to improve our designs.

With CQRS and MediatR, handler isolation gives us the flexibility to change direction as needed, based on each individual context and situation.

Categories: Blogs

Vertical Slice Test Fixtures for MediatR and ASP.NET Core

Mon, 10/24/2016 - 22:40

One of the nicest side effects of using MediatR is that my controllers become quite thin. Here’s a typical controller:

public class CourseController : Controller
{
    private readonly IMediator _mediator;

    public CourseController(IMediator mediator)
    {
        _mediator = mediator;
    }

    public async Task<IActionResult> Index(Index.Query query)
    {
        var model = await _mediator.SendAsync(query);

        return View(model);
    }

    public async Task<IActionResult> Details(Details.Query query)
    {
        var model = await _mediator.SendAsync(query);

        return View(model);
    }

    public ActionResult Create()
    {
        return View();
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public IActionResult Create(Create.Command command)
    {
        _mediator.Send(command);

        return this.RedirectToActionJson(nameof(Index));
    }

    public async Task<IActionResult> Edit(Edit.Query query)
    {
        var model = await _mediator.SendAsync(query);

        return View(model);
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public async Task<IActionResult> Edit(Edit.Command command)
    {
        await _mediator.SendAsync(command);

        return this.RedirectToActionJson(nameof(Index));
    }

    public async Task<IActionResult> Delete(Delete.Query query)
    {
        var model = await _mediator.SendAsync(query);

        return View(model);
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public async Task<IActionResult> Delete(Delete.Command command)
    {
        await _mediator.SendAsync(command);

        return this.RedirectToActionJson(nameof(Index));
    }
}

Unit testing this controller is a tad pointless – I’d only do it if the controller actions were doing something interesting. With MediatR combined with CQRS, my application is modeled as a series of requests and responses, where my requests either represent a command or a query. In an actual HTTP request, I wrap my request in a transaction using an action filter, so the request looks something like:

image
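One possible shape for that transaction-wrapping action filter, assuming the BeginTransaction/CommitTransactionAsync helpers exposed on the DbContext (the sample app's actual filter may differ):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Filters;

// Sketch of a transaction-per-request action filter; registered globally in
// Startup so every action (and therefore every handler) runs in a transaction.
public class DbContextTransactionFilter : IAsyncActionFilter
{
    private readonly SchoolContext _dbContext;

    public DbContextTransactionFilter(SchoolContext dbContext)
    {
        _dbContext = dbContext;
    }

    public async Task OnActionExecutionAsync(
        ActionExecutingContext context,
        ActionExecutionDelegate next)
    {
        try
        {
            _dbContext.BeginTransaction();

            await next(); // controller action -> mediator -> handler

            await _dbContext.CommitTransactionAsync();
        }
        catch (Exception)
        {
            _dbContext.RollbackTransaction();
            throw;
        }
    }
}
```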

The bulk of the work happens in my handler, of course. Now, in my projects, we have a fairly strict rule that all handlers need a test. But what kind of test should it be? At the very least, we want a test that executes our handlers as they would be used in a normal application. Since my handlers don’t have any real knowledge of the UI (they’re simple DTO-in, DTO-out handlers), I don’t need to worry about UI dependencies like controllers, filters and whatnot.

I do, however, want to build a test that goes just under the UI layer to execute the end-to-end behavior of my system. These are known as “subcutaneous tests”, and provide me with the greatest confidence that one of my vertical slices does indeed work. It’s the first test I write for a feature, and the last test to pass.

I need to make sure that my test properly matches “real world” usage of my system, which means that I’ll execute a series of transactions, one for the setup/execute/verify steps of my test:

image

The final piece is allowing my test to easily run these kinds of tests over and over again. To do so, I’ll combine a few tools at my disposal to ensure my tests run in a repeatable and predictable fashion.

Building the fixture

The baseline for my tests is known as a “fixture”, and what I’ll be building is a known starting state for my tests. There are a number of different environments I could do this in, but the basic idea is:

  • Reset the database before each test using Respawn to provide a known database starting state
  • Provide a fixture class that represents the known application starting state

I’ll show how to do this with xUnit, but the Fixie example is just as easy. First, I’ll need a known starting state for my fixture:

public class SliceFixture
{
    private static readonly Checkpoint _checkpoint;
    private static readonly IServiceProvider _rootContainer;
    private static readonly IConfigurationRoot _configuration;
    private static readonly IServiceScopeFactory _scopeFactory;

    static SliceFixture()
    {
        var host = A.Fake<IHostingEnvironment>();

        A.CallTo(() => host.ContentRootPath).Returns(Directory.GetCurrentDirectory());

        var startup = new Startup(host);
        _configuration = startup.Configuration;
        var services = new ServiceCollection();
        startup.ConfigureServices(services);
        _rootContainer = services.BuildServiceProvider();
        _scopeFactory = _rootContainer.GetService<IServiceScopeFactory>();
        _checkpoint = new Checkpoint();
    }

I want to use the exact same startup configuration in my tests that I use in my actual application. It’s important that my tests match the runtime configuration of my system as much as possible. Mismatches here can easily result in false positives in my tests. The only thing I have to fake out is my hosting environment. Unlike the integration testing available for ASP.NET Core, I won’t actually run a test server; I’m just running through the same configuration. I capture some of the output objects as fields, for ease of use later.

Next, on my fixture, I expose a method to reset the database (for later use):

public static void ResetCheckpoint()
{
    _checkpoint.Reset(_configuration["Data:DefaultConnection:ConnectionString"]);
}

I can now create an xUnit behavior to reset the database before every test:

public class ResetDatabaseAttribute : BeforeAfterTestAttribute
{
    public override void Before(MethodInfo methodUnderTest)
    {
        SliceFixture.ResetCheckpoint();
    }
}

With xUnit, I have to decorate every test with this attribute. With Fixie, I don’t. In any case, now that I have my fixture, I can inject it with an IClassFixture interface:

public class EditTests : IClassFixture<SliceFixture>
{
    private readonly SliceFixture _fixture;

    public EditTests(SliceFixture fixture)
    {
        _fixture = fixture;
    }

With my fixture created, and a way to inject it into my tests, I can now use it in my tests.

Building setup/execute/verify fixture seams

I want to make sure that each of these steps executes in a committed transaction. This ensures that my test, as much as possible, matches the real-world usage of my application. In an actual system, the user clicks around, executing a series of transactions. In the case of editing an entity, the user first looks at a screen of an existing entity, POSTs a form, and then is redirected to a new page where they see the results of their action. I want to mimic this sort of flow in my tests as well.

First, I need a way to execute something against a DbContext as part of a transaction. I’ve already exposed methods on my DbContext to make it easier to manage a transaction, so I just need a way to do this through my fixture. The other thing I need to worry about is that with the built-in DI container with ASP.NET Core, I need to create a scope for scoped dependencies. That’s why I captured out that scope factory earlier. With the scope factory, it’s trivial to create a nice method to execute a scoped action:

public async Task ExecuteScopeAsync(Func<IServiceProvider, Task> action)
{
    using (var scope = _scopeFactory.CreateScope())
    {
        var dbContext = scope.ServiceProvider.GetService<SchoolContext>();

        try
        {
            dbContext.BeginTransaction();

            await action(scope.ServiceProvider);

            await dbContext.CommitTransactionAsync();
        }
        catch (Exception)
        {
            dbContext.RollbackTransaction();
            throw;
        }
    }
}

public Task ExecuteDbContextAsync(Func<SchoolContext, Task> action)
{
    return ExecuteScopeAsync(sp => action(sp.GetService<SchoolContext>()));
}

The method takes a function that accepts an IServiceProvider and returns a Task (so that your action can be async). For convenience, if you just need a DbContext, I also provide an overload that works directly with that instance. With this in place, I can build out the setup portion of my test:

[Fact]
[ResetDatabase]
public async Task ShouldEditEmployee()
{
    var employee = new Employee
    {
        Email = "jane@jane.com",
        FirstName = "Jane",
        LastName = "Smith",
        Title = "Director",
        Office = Office.Austin,
        PhoneNumber = "512-555-4321",
        Username = "janesmith",
        HashedPassword = "1234567890"
    };

    await _fixture.ExecuteDbContextAsync(async dbContext =>
    {
        dbContext.Employees.Add(employee);
        await dbContext.SaveChangesAsync();
    });

I build out an entity, and in the context of a scope and transaction, save it out. I’m intentionally NOT reusing those scopes or DbContext objects across the different setup/execute/verify steps of my test, because that’s not what happens in my app! My actual application creates a distinct scope per operation, so I should do that too.

Next, for the execute step, this will involve sending a request to the Mediator. Again, as with my DbContext method, I’ll create a convenience method to make it easy to send a scoped request:

public async Task<TResponse> SendAsync<TResponse>(IAsyncRequest<TResponse> request)
{
    var response = default(TResponse);
    await ExecuteScopeAsync(async sp =>
    {
        var mediator = sp.GetService<IMediator>();

        response = await mediator.SendAsync(request);
    });
    return response;
}

Since my “mediator.SendAsync” executes inside of that scope, with a transaction, I can be confident that when the handler completes, it has pushed its results all the way down to the database. My test can now send a request fairly easily:

var command = new Edit.Command
{
    Id = employee.Id,
    Email = "jane@jane2.com",
    FirstName = "Jane2",
    LastName = "Smith2",
    Office = Office.Dallas,
    Title = "CEO",
    PhoneNumber = "512-555-9999"
};

await _fixture.SendAsync(command);

Finally, in my verify step, I can use the same scope-isolated ExecuteDbContextAsync method to start a new transaction to do my assertions against:

await _fixture.ExecuteDbContextAsync(async dbContext =>
{
    var found = await dbContext.Employees.FindAsync(employee.Id);

    found.Email.ShouldBe(command.Email);
    found.FirstName.ShouldBe(command.FirstName);
    found.LastName.ShouldBe(command.LastName);
    found.Office.ShouldBe(command.Office);
    found.Title.ShouldBe(command.Title);
    found.PhoneNumber.ShouldBe(command.PhoneNumber);
});

With the setup, execute, and verify steps each in their own isolated transaction and scope, I ensure that my vertical slice test matches as much as possible the flow of actual usage. And again, because I’m using MediatR, my test only knows how to send a request down and verify the result. There’s no coupling whatsoever of my test to the implementation details of the handler. It could use EF, NPoco, Dapper, sprocs, really anything.

Wrapping up

Most integration tests I see wrap the entire test in a transaction or scope of some sort. This can lead to pernicious false positives, since I haven’t made the full round trip that my application makes. With subcutaneous tests executing against vertical slices of my application, I’ve got the closest possible representation of how my application actually works, short of executing HTTP requests.

One thing to note – the built-in DI container doesn’t allow you to alter the container after it’s built. With a more robust container like StructureMap, I could also let each test register mock/stub dependencies that live only for that scope (using the magic of nested containers).
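As an illustration of that nested-container idea (a fragment sketched against StructureMap 4.x's API; IEmailSender and FakeEmailSender are hypothetical):

```csharp
// A nested container lets a single test override a dependency without
// touching the root container's registrations.
using (var nested = rootContainer.GetNestedContainer())
{
    // This override lives only inside 'nested':
    nested.Configure(cfg => cfg.For<IEmailSender>().Use(new FakeEmailSender()));

    var mediator = nested.GetInstance<IMediator>();
    // Handlers resolved through 'nested' now see FakeEmailSender for this test only.
}
```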

If you want to see a full working example using Fixie, check out Contoso University Core.


Contoso University updated to ASP.NET Core

Fri, 10/21/2016 - 17:33

I pushed out a new repository, Contoso University Core, that updates my “how we do MVC” sample app to ASP.NET Core. It’s still on the full .NET Framework, but I also plan to push out a .NET Core version as well.

It uses all of the latest packages I’ve built out for the OSS I use, developed for ASP.NET Core applications. Here’s the Startup, for example:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc(opt =>
        {
            opt.Conventions.Add(new FeatureConvention());
            opt.Filters.Add(typeof(DbContextTransactionFilter));
            opt.Filters.Add(typeof(ValidatorActionFilter));
            opt.ModelBinderProviders.Insert(0, new EntityModelBinderProvider());
        })
        .AddRazorOptions(options =>
        {
            // {0} - Action Name
            // {1} - Controller Name
            // {2} - Area Name
            // {3} - Feature Name
            // Replace normal view location entirely
            options.ViewLocationFormats.Clear();
            options.ViewLocationFormats.Add("/Features/{3}/{1}/{0}.cshtml");
            options.ViewLocationFormats.Add("/Features/{3}/{0}.cshtml");
            options.ViewLocationFormats.Add("/Features/Shared/{0}.cshtml");
            options.ViewLocationExpanders.Add(new FeatureViewLocationExpander());
        })
        .AddFluentValidation(cfg => { cfg.RegisterValidatorsFromAssemblyContaining<Startup>(); });

    services.AddAutoMapper(typeof(Startup));
    services.AddMediatR(typeof(Startup));
    services.AddScoped(_ => new SchoolContext(Configuration["Data:DefaultConnection:ConnectionString"]));
    services.AddHtmlTags(new TagConventions());
}

Still missing are unit/integration tests, that’s next. Enjoy!


MediatR Pipeline Examples

Thu, 10/13/2016 - 21:02

A while ago, I blogged about using MediatR to build a processing pipeline for requests in the form of commands and queries in your application. MediatR is a library I built (well, extracted from client projects) to help organize my applications into a CQRS architecture, with distinct messages and handlers for every request in your system.

So when processing requests gets more complicated, we often rely on a mediator pipeline to provide a means for these extra behaviors. It doesn’t always show up – I’ll start without one before deciding to add it. I’ve also not built it directly into MediatR – because frankly, it’s hard, and there are existing tools to do so with modern DI containers. First, let’s look at the simplest pipeline that could possibly work:

public class MediatorPipeline<TRequest, TResponse> 
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner)
    {
        _inner = inner;
    }

    public TResponse Handle(TRequest message)
    {
        return _inner.Handle(message);
    }
 }

Nothing exciting here, it just calls the inner handler, the real handler. But now we have a baseline on which we can layer additional behaviors.

Let’s get something more interesting going!

Contextual Logging and Metrics

Serilog has an interesting feature where it lets you define contexts for logging blocks. With a pipeline, this becomes trivial to add to our application:

public class MediatorPipeline<TRequest, TResponse> 
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner)
    {
        _inner = inner;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        {
            return _inner.Handle(message);
        }
    }
 }

In our logs, we’ll now see a logging block right before we enter our handler, and right after we exit. We can do a bit more, what about metrics? Also trivial to add:

using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
using (Metrics.Time(Timers.MediatRRequest))
{
    return _inner.Handle(message);
}

That Time class is just a simple wrapper around the .NET timer classes, with some configuration checking and so on. Those are the easy ones; what about something more interesting?
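`Metrics` and `Timers` here are app-specific names; a minimal sketch of that wrapper built on `Stopwatch` might look like this (the console write stands in for a real metrics backend):

```csharp
using System;
using System.Diagnostics;

public static class Metrics
{
    // Returns a disposable scope; elapsed time is recorded on Dispose,
    // which is what lets it sit inside a using block in the pipeline.
    public static IDisposable Time(string timerName)
    {
        return new TimerScope(timerName);
    }

    private class TimerScope : IDisposable
    {
        private readonly string _name;
        private readonly Stopwatch _stopwatch = Stopwatch.StartNew();

        public TimerScope(string name)
        {
            _name = name;
        }

        public void Dispose()
        {
            _stopwatch.Stop();
            // Stand-in for a real metrics backend (StatsD, App Insights, etc.)
            Console.WriteLine("{0}: {1}ms", _name, _stopwatch.ElapsedMilliseconds);
        }
    }
}
```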

Validation and Authorization

Oftentimes, we have to share handlers between different applications, so it’s important to have a framework-agnostic means of handling cross-cutting concerns. Rather than bury our concerns in framework- or application-specific extensions (like, say, an action filter), we can instead embed this behavior in our pipeline. First, for validation, we can use a tool like Fluent Validation with validator handlers for a specific type:

public interface IMessageValidator<in T>
{
    IEnumerable<ValidationFailure> Validate(T message);
}

What’s interesting here is that our message validator is contravariant, meaning I can have a validator of a base type work for messages of a derived type. That means we can declare common validators for base types or interfaces that your message inherits/implements. In practice this lets me share common validation amongst multiple messages simply by implementing an interface.
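To make the contravariance concrete (failures simplified to strings so the sketch stays self-contained; the post's version returns FluentValidation's ValidationFailure instead), a validator declared against an interface works for any message implementing it:

```csharp
using System.Collections.Generic;

// Failures simplified to strings for this sketch.
public interface IMessageValidator<in T>
{
    IEnumerable<string> Validate(T message);
}

public interface IHaveQuantity
{
    int Quantity { get; }
}

// Declared once against the interface...
public class QuantityValidator : IMessageValidator<IHaveQuantity>
{
    public IEnumerable<string> Validate(IHaveQuantity message)
    {
        if (message.Quantity <= 0)
            yield return "Quantity must be positive";
    }
}

public class PlaceOrder : IHaveQuantity
{
    public int Quantity { get; set; }
}

public static class Demo
{
    public static IMessageValidator<PlaceOrder> GetValidator()
    {
        // Legal only because of the 'in T' (contravariant) declaration:
        return new QuantityValidator();
    }
}
```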

Inside my pipeline, I can execute my validation by taking a dependency on the validators for my message:

public class MediatorPipeline<TRequest, TResponse> 
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IEnumerable<IMessageValidator<TRequest>> _validators;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner,
        IEnumerable<IMessageValidator<TRequest>> validators)
    {
        _inner = inner;
        _validators = validators;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        using (Metrics.Time(Timers.MediatRRequest))
        {
            var failures = _validators
                .SelectMany(v => v.Validate(message))
                .Where(f => f != null)
                .ToList();
            if (failures.Any())
                throw new ValidationException(failures);
            
            return _inner.Handle(message);
        }
    }
}

I bundle up all the errors into a single exception, thrown if any failures occur. The downside of this approach is that I’m using exceptions to provide control flow, so if that’s a problem, I can instead wrap my responses in some sort of Result object that contains potential validation failures. In practice it seems fine for the applications we build.
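A sketch of what that Result wrapper could look like (failures simplified to strings; the pipeline would return Result&lt;TResponse&gt; instead of throwing a ValidationException):

```csharp
using System.Collections.Generic;
using System.Linq;

// Wraps either a successful response or the validation failures.
public class Result<TResponse>
{
    public TResponse Value { get; private set; }
    public IReadOnlyList<string> Errors { get; private set; }
    public bool IsValid { get { return Errors.Count == 0; } }

    private Result(TResponse value, IReadOnlyList<string> errors)
    {
        Value = value;
        Errors = errors;
    }

    public static Result<TResponse> Success(TResponse value)
    {
        return new Result<TResponse>(value, new string[0]);
    }

    public static Result<TResponse> Failure(IEnumerable<string> errors)
    {
        return new Result<TResponse>(default(TResponse), errors.ToList());
    }
}
```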

Again, my calling code INTO my handler (the Mediator) has no knowledge of these new behaviors, nor does my handler. I go to one spot to augment and extend behaviors across my entire system. Keep in mind, however, I still place my validators beside my message, handler, view etc. using feature folders.

Authorization is similar, where I define an authorizer of a message:

public interface IMessageAuthorizer
{
    void Evaluate<TRequest>(TRequest request) where TRequest : class;
}

Then in my pipeline, check authorization:

public class MediatorPipeline<TRequest, TResponse> 
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IEnumerable<IMessageValidator<TRequest>> _validators;
    private readonly IMessageAuthorizer _authorizer;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner,
        IEnumerable<IMessageValidator<TRequest>> validators,
        IMessageAuthorizer authorizer
        )
    {
        _inner = inner;
        _validators = validators;
        _authorizer = authorizer;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        using (Metrics.Time(Timers.MediatRRequest))
        {
            _authorizer.Evaluate(message);
            
            var failures = _validators
                .SelectMany(v => v.Validate(message))
                .Where(f => f != null)
                .ToList();
            if (failures.Any())
                throw new ValidationException(failures);
            
            return _inner.Handle(message);
        }
    }
}

The actual implementation of the authorizer will go through a series of security rules, find matching rules, and evaluate them against my request. Some examples of security rules might be:

  • Do any of your roles have permission?
  • Are you part of the ownership team of this resource?
  • Are you assigned to a special group that this resource is associated with?
  • Do you have the correct training to perform this action?
  • Are you in the correct geographic location and/or citizenship?

Things can get pretty complicated, but again, all encapsulated for me inside my pipeline.
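One hypothetical shape for that authorizer, composed of rule objects that each decide whether they apply to a request and whether the caller passes (names are illustrative, not from the actual implementation):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A security rule knows whether it applies to a request and whether access is allowed.
public interface ISecurityRule
{
    bool Matches(object request);   // does this rule apply to this request?
    bool IsAllowed(object request); // does the current user pass it?
}

public class RuleBasedAuthorizer
{
    private readonly IEnumerable<ISecurityRule> _rules;

    public RuleBasedAuthorizer(IEnumerable<ISecurityRule> rules)
    {
        _rules = rules;
    }

    public void Evaluate<TRequest>(TRequest request) where TRequest : class
    {
        // Find matching rules and evaluate them against the request.
        var violated = _rules
            .Where(r => r.Matches(request) && !r.IsAllowed(request))
            .ToList();

        if (violated.Any())
            throw new UnauthorizedAccessException(
                "Request " + typeof(TRequest).Name + " failed authorization");
    }
}
```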

Finally, what about potential augmentations or reactions to a request?

Pre/post processing

In addition to some specific processing needs, like logging, metrics, authorization, and validation, there are things I can’t predict one message or group of messages might need. For those, I can build some generic extension points:

public interface IPreRequestProcessor<in TRequest>
{
    void Handle(TRequest request);
}
public interface IPostRequestProcessor<in TRequest, in TResponse>
{
    void Handle(TRequest request, TResponse response);
}
public interface IResponseProcessor<in TResponse>
{
    void Handle(TResponse response);
}

Next I update my pipeline to include calls to these extensions (if they exist):

public class MediatorPipeline<TRequest, TResponse> 
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IEnumerable<IMessageValidator<TRequest>> _validators;
    private readonly IMessageAuthorizer _authorizer;
    private readonly IEnumerable<IPreRequestProcessor<TRequest>> _preProcessors;
    private readonly IEnumerable<IPostRequestProcessor<TRequest, TResponse>> _postProcessors;
    private readonly IEnumerable<IResponseProcessor<TResponse>> _responseProcessors;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner,
        IEnumerable<IMessageValidator<TRequest>> validators,
        IMessageAuthorizer authorizer,
        IEnumerable<IPreRequestProcessor<TRequest>> preProcessors,
        IEnumerable<IPostRequestProcessor<TRequest, TResponse>> postProcessors,
        IEnumerable<IResponseProcessor<TResponse>> responseProcessors
        )
    {
        _inner = inner;
        _validators = validators;
        _authorizer = authorizer;
        _preProcessors = preProcessors;
        _postProcessors = postProcessors;
        _responseProcessors = responseProcessors;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        using (Metrics.Time(Timers.MediatRRequest))
        {
            _authorizer.Evaluate(message);
            
            foreach (var preProcessor in _preProcessors)
                preProcessor.Handle(message);
            
            var failures = _validators
                .SelectMany(v => v.Validate(message))
                .Where(f => f != null)
                .ToList();
            if (failures.Any())
                throw new ValidationException(failures);
            
            var response = _inner.Handle(message);
            
            foreach (var postProcessor in _postProcessors)
                postProcessor.Handle(message, response);
                
            foreach (var responseProcessor in _responseProcessors)
                responseProcessor.Handle(response);
                
            return response;
        }
    }
}

So what kinds of things might I accomplish here?

  • Supplementing my request with additional information not to be found in the original request (in one case, barcode sequences)
  • Data cleansing or fixing (for example, a scanned barcode needs padded zeroes)
  • Limiting results of paged result models via configuration
  • Notifications based on the response

All sorts of things that I could put inside the handlers themselves, but that can quite easily be pulled up into the pipeline when I want to apply a general policy across many handlers.
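As a sketch of the first two bullets above (IHaveBarcode, ScanItem, and the padding rule are hypothetical, not from a real project), a pre-processor declared against a marker interface can fix up scanned barcodes before any handler sees them:

```csharp
public interface IPreRequestProcessor<in TRequest>
{
    void Handle(TRequest request);
}

public interface IHaveBarcode
{
    string Barcode { get; set; }
}

// Registered once, this runs for every request that carries a barcode,
// before the handler (or validators) ever see it.
public class BarcodePaddingProcessor : IPreRequestProcessor<IHaveBarcode>
{
    public void Handle(IHaveBarcode request)
    {
        if (!string.IsNullOrEmpty(request.Barcode))
            request.Barcode = request.Barcode.PadLeft(12, '0');
    }
}

public class ScanItem : IHaveBarcode
{
    public string Barcode { get; set; }
}
```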

Whether you have specific or generic needs, a mediator pipeline can be a great place to apply domain-centric behaviors to all requests, or only to matching requests based on generic rules, across your entire application.
