Jimmy Bogard

Strong opinions, weakly held

End-to-End Hypermedia: Choosing a Media Type

Fri, 05/22/2015 - 17:12

So you’ve decided to make the leap and build a hypermedia-rich API. Hopefully, this decision came from necessity and not boredom, but that’s a post for another day.

At this point, you’re presented with a bit of a problem. You have 3 main options for choosing/designing a media type:

  • Pick a standard
  • Design your own
  • Extend a standard

As much as possible, I’d try to go with a standards-based approach. People with much more time on their hands and much more passion for this than you have thought about these problems for years, and they’ve probably considered more scenarios than you’re thinking of right now.

Instead of choosing media types in a vacuum, how would one compare the capabilities and intentions of one media type versus another? One way is simply to look at the design goals of a media type. Another is to objectively measure the level of hypermedia support and sophistication of a media type, with H Factor:

The H Factor of a media-type is a measurement of the level of hypermedia support and sophistication of a media-type. H Factor values can be used to compare and contrast media types in order to aid in selecting the proper media-type(s) for your implementation.

H-Factor looks at two types of support, links and control data, and different factors inside those.

For example, HTML supports:

  • Embedding links
  • Outbound links
  • Templated queries (a FORM with GET)
  • Non-idempotent updates (a FORM with POST)
  • Control data for update requests
  • Control data for interface methods (POST vs GET)
  • Control data for links (link relations – rel attribute)

But doesn’t support:

  • Control data for read requests – links can’t contain accept headers, for example
  • Idempotent updates – you have to use XHR for PUT/DELETE
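Those factors map directly onto familiar HTML constructs; an illustrative fragment (placeholder URLs):

```html
<!-- Embedded link: the image is pulled into the current representation -->
<img src="/logo.png" alt="logo" />

<!-- Outbound link, with control data for links (the rel attribute) -->
<a href="/courses" rel="collection">Courses</a>

<!-- Templated query: a FORM with GET serializes its inputs into the query string -->
<form action="/search" method="get">
  <input type="text" name="q" />
</form>

<!-- Non-idempotent update: a FORM with POST; method is control data for the
     interface method, enctype is control data for the update request -->
<form action="/students" method="post" enctype="application/x-www-form-urlencoded">
  <input type="text" name="lastName" />
  <button type="submit">Create</button>
</form>
```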

With the quantitative and qualitative aspects factored in with client needs, you’ll have what you need to pick a media type. Unless you’ve already decided that this is all way too complex and run back to POJSOs, which is still perfectly acceptable.

Making the choice

There are a *ton* of popular, widely used, hypermedia-rich media types out there – HAL and collection+json among them – and probably a dozen others. At this point, just be warned: you’ll probably spend upwards of a week deciding which variant you like best based on your client’s needs. You also don’t need to settle on a single media type – you can use collection+json for collections of things, and HAL for single entities if you like.

One other thing I found is no single media type had all the pieces I needed. In my real-world example, I chose collection+json because my client mainly displayed collections of things. Show a table, click an item, then display a single thing with a table of related things. It didn’t need PUT/DELETE support, or some of the other control data. I just needed control data for links and a way to distinguish queries versus forms.

But collection+json didn’t *quite* have all the things I needed, so I wound up extending it for my own purposes, which I’ll go into in the next post.
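For a feel of what this looks like on the wire, here is a minimal collection+json document (hypothetical URLs and data, following the Collection+JSON spec) – note the separate links, items, queries, and template sections that carry the hypermedia controls discussed above:

```json
{
  "collection": {
    "version": "1.0",
    "href": "http://example.org/instructors",
    "links": [
      { "rel": "home", "href": "http://example.org/" }
    ],
    "items": [
      {
        "href": "http://example.org/instructors/1",
        "data": [
          { "name": "last-name", "value": "Abercrombie", "prompt": "Last Name" }
        ],
        "links": [
          { "rel": "courses", "href": "http://example.org/instructors/1/courses" }
        ]
      }
    ],
    "queries": [
      {
        "rel": "search",
        "href": "http://example.org/instructors/search",
        "data": [ { "name": "q", "value": "" } ]
      }
    ],
    "template": {
      "data": [
        { "name": "last-name", "value": "", "prompt": "Last Name" }
      ]
    }
  }
}
```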

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

End-to-End Hypermedia: Making the Leap

Tue, 05/19/2015 - 17:26

REST, a term that few people understand and fewer know how to implement, has become a blanket term for any sort of Web API. That’s unfortunate, because the underlying foundation of REST has a lot of benefits. So much so that I’ve started talking about regular Web APIs not as “RESTful” but just as a “Web API”. The value of REST for me has come from the hypermedia aspect of REST.

REST and hypermedia aren’t free – they significantly complicate both the building of the server API and the clients. But they are useful in certain scenarios, as I laid out in talking about the value proposition of hypermedia:

  • Native mobile apps
  • Disparate client deployments talking to a single server
  • Clients talking to disparate server deployments

I’ve only put one hypermedia-driven API into production (which, to be frank, is one more than most folks who talk about REST). I’ve attempted to build many other hypermedia APIs, only to find hypermedia was complete overkill.

If your client is deployed at the same time as your server and lives in the same source control repository, hypermedia doesn’t provide much value at all.

Hypermedia is great at decoupling client from server, allowing the client to adjust according to the server. In most apps I build, I happily couple client to server, taking advantage of the metadata I find on the server to build highly intelligent clients:

@using (Html.BeginForm()) 
{
    @Html.AntiForgeryToken()
    
    <div class="form-horizontal">
        <h4>Instructor</h4>
        <hr />
        @Html.ValidationDiv()
        @Html.FormBlock(m => m.LastName)
        @Html.FormBlock(m => m.FirstMidName)
        @Html.FormBlock(m => m.HireDate)
        @Html.FormBlock(m => m.OfficeAssignmentLocation)
    </div>
}

In this case, my client is the browser, but my view is intelligently built up so that labels, text inputs, drop downs, checkboxes, date pickers and so on are created using metadata from a variety of sources. I can even employ this mechanism in SPAs, where my templates are pre-rendered using server metadata.

I don’t really build APIs for clients I can’t completely control, and those have completely different considerations. Building an API for public consumption means you want to enable as many clients as possible, balancing coupling with flexibility. In the APIs I’ve built for clients I don’t own, I’ve never used hypermedia – it put too much burden on my clients, so I just left it as plain old JSON objects (POJSOs).

So if you’ve found yourself in a situation where you’ve convinced yourself you do need hypermedia, primarily based on coupling decisions, you’ll need to do a few things to get a full hypermedia solution end-to-end:

  • Choose a hypermedia-rich media type
  • Build the server API
  • Build the client consumer

In the next few posts, I’ll walk through end-to-end hypermedia from my experiences of shipping a hypermedia API server and a client consumer.

 


CQRS with MediatR and AutoMapper

Tue, 05/05/2015 - 17:15

CQRS is a simple pattern – two objects for command/queries where once there was one. These days just about every system I build utilizes CQRS, as it’s a natural progression from refactoring your apps around the patterns arising from reads and writes. I’ve been refactoring a Microsoft sample app to techniques I use on my apps (mainly because I want something public to look at for new projects) at my github.

Remember, CQRS is not an architecture, it’s a pattern, which makes it very easy to introduce into your applications. You can use CQRS in some, most, or all of your application and it’s easy to move towards or away from.

Even in simple apps, I like to keep my read models separate from my write models, mainly because the demands for each are drastically different. Since CQRS is just a pattern, we can introduce it just by refactoring.

First, let’s look at refactoring a complex GET/read scenario.

Read model

My initial controller action for a complex read is…well, complex:

public ViewResult Index(string sortOrder, string currentFilter, string searchString, int? page)
{
    ViewBag.CurrentSort = sortOrder;
    ViewBag.NameSortParm = String.IsNullOrEmpty(sortOrder) ? "name_desc" : "";
    ViewBag.DateSortParm = sortOrder == "Date" ? "date_desc" : "Date";

    if (searchString != null)
    {
        page = 1;
    }
    else
    {
        searchString = currentFilter;
    }

    ViewBag.CurrentFilter = searchString;

    var students = from s in db.Students
                   select s;
    if (!String.IsNullOrEmpty(searchString))
    {
        students = students.Where(s => s.LastName.ToUpper().Contains(searchString.ToUpper())
                               || s.FirstMidName.ToUpper().Contains(searchString.ToUpper()));
    }
    switch (sortOrder)
    {
        case "name_desc":
            students = students.OrderByDescending(s => s.LastName);
            break;
        case "Date":
            students = students.OrderBy(s => s.EnrollmentDate);
            break;
        case "date_desc":
            students = students.OrderByDescending(s => s.EnrollmentDate);
            break;
        default:  // Name ascending 
            students = students.OrderBy(s => s.LastName);
            break;
    }

    int pageSize = 3;
    int pageNumber = (page ?? 1);
    return View(students.ToPagedList(pageNumber, pageSize));
}

To derive our read models, I center on building query objects and result objects. The Query model represents the “inputs” to the query and the Result model represents the “outputs” from the query. This also fits very well into the “one-model-in, one-model-out” concept I use in my apps these days.

Looking at our controller, the inputs are pretty obvious – it’s the parameters to the controller action!

public class Index
{
    public class Query : IRequest<Result>
    {
        public string SortOrder { get; set; }
        public string CurrentFilter { get; set; }
        public string SearchString { get; set; }
        public int? Page { get; set; }
    }

To make my life easier with this pattern, I’ll use MediatR as a simple means of providing a way to have “one model in goes to something to get one model out” without creating bloated service layers. Uniform interfaces are great!

The next piece I need is the output – the result. I take *all* the results, including those “ViewBag” pieces, as the Result object from my query:

public class Result
{
    public string CurrentSort { get; set; }
    public string NameSortParm { get; set; }
    public string DateSortParm { get; set; }
    public string CurrentFilter { get; set; }
    public string SearchString { get; set; }

    public IPagedList<Model> Results { get; set; }
}

public class Model
{
    public int ID { get; set; }
    [Display(Name = "First Name")]
    public string FirstMidName { get; set; }
    public string LastName { get; set; }
    public DateTime EnrollmentDate { get; set; }
}

Finally, I take the inside part of that controller action and place it in a handler that takes in a Query and returns a Result:

public class QueryHandler : IRequestHandler<Query, Result>
{
    private readonly SchoolContext _db;

    public QueryHandler(SchoolContext db)
    {
        _db = db;
    }

    public Result Handle(Query message)
    {
        var model = new Result
        {
            CurrentSort = message.SortOrder,
            NameSortParm = String.IsNullOrEmpty(message.SortOrder) ? "name_desc" : "",
            DateSortParm = message.SortOrder == "Date" ? "date_desc" : "Date",
        };

        if (message.SearchString != null)
        {
            message.Page = 1;
        }
        else
        {
            message.SearchString = message.CurrentFilter;
        }

        model.CurrentFilter = message.SearchString;
        model.SearchString = message.SearchString;

        var students = from s in _db.Students
                       select s;
        if (!String.IsNullOrEmpty(message.SearchString))
        {
            students = students.Where(s => s.LastName.Contains(message.SearchString)
                                           || s.FirstMidName.Contains(message.SearchString));
        }
        switch (message.SortOrder)
        {
            case "name_desc":
                students = students.OrderByDescending(s => s.LastName);
                break;
            case "Date":
                students = students.OrderBy(s => s.EnrollmentDate);
                break;
            case "date_desc":
                students = students.OrderByDescending(s => s.EnrollmentDate);
                break;
            default: // Name ascending 
                students = students.OrderBy(s => s.LastName);
                break;
        }

        int pageSize = 3;
        int pageNumber = (message.Page ?? 1);
        model.Results = students.ProjectToPagedList<Model>(pageNumber, pageSize);

        return model;
    }
}

My handler now completely encapsulates the work to take the input and build the output, making it very easy to test the logic of my system. I can refactor the contents of this handler as much as I want and the external interface remains input/output. In fact, if I wanted to make this a view or stored procedure, my input/output and tests don’t change at all!
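Because the handler is now just one model in, one model out, a test needs nothing beyond constructing a Query and asserting on the Result – a sketch, assuming an xUnit-style framework and a hypothetical SchoolContextFactory helper that seeds a test database:

```csharp
public class IndexQueryHandlerTests
{
    [Fact]
    public void Filters_and_sorts_students_by_last_name_descending()
    {
        // SchoolContextFactory is a hypothetical test helper, not part of
        // the sample app
        var db = SchoolContextFactory.CreateWithStudents("Alexander", "Barzdukas");
        var handler = new QueryHandler(db);

        var result = handler.Handle(new Query { SortOrder = "name_desc", SearchString = "a" });

        // Only the external contract is asserted; the handler internals can
        // be refactored (or swapped for a view/stored procedure) without
        // touching this test
        Assert.Equal("name_desc", result.CurrentSort);
        Assert.Equal("Barzdukas", result.Results.First().LastName);
    }
}
```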

One slight change was to switch to AutoMapper projection at the bottom with the ProjectToPagedList method:

public static class MapperExtensions
{
    public static async Task<List<TDestination>> ProjectToListAsync<TDestination>(this IQueryable queryable)
    {
        return await queryable.ProjectTo<TDestination>().DecompileAsync().ToListAsync();
    }

    public static IQueryable<TDestination> ProjectToQueryable<TDestination>(this IQueryable queryable)
    {
        return queryable.ProjectTo<TDestination>().Decompile();
    }

    public static IPagedList<TDestination> ProjectToPagedList<TDestination>(this IQueryable queryable, int pageNumber, int pageSize)
    {
        return queryable.ProjectTo<TDestination>().Decompile().ToPagedList(pageNumber, pageSize);
    }

    public static async Task<TDestination> ProjectToSingleOrDefaultAsync<TDestination>(this IQueryable queryable)
    {
        return await queryable.ProjectTo<TDestination>().DecompileAsync().SingleOrDefaultAsync();
    }
}

I build a few helper methods to project from a queryable to my read model. The AutoMapper projections completely bypass my write model and craft a query that only reads in the information I need for this screen:

exec sp_executesql N'SELECT 
    [Project1].[C1] AS [C1], 
    [Project1].[ID] AS [ID], 
    [Project1].[LastName] AS [LastName], 
    [Project1].[FirstName] AS [FirstName], 
    [Project1].[EnrollmentDate] AS [EnrollmentDate]
    FROM ( SELECT 
        [Extent1].[ID] AS [ID], 
        [Extent1].[LastName] AS [LastName], 
        [Extent1].[FirstName] AS [FirstName], 
        [Extent1].[EnrollmentDate] AS [EnrollmentDate], 
        ''0X0X'' AS [C1]
        FROM [dbo].[Person] AS [Extent1]
        WHERE ([Extent1].[Discriminator] = N''Student'') AND ((( CAST(CHARINDEX(UPPER(@p__linq__0), UPPER([Extent1].[LastName])) AS int)) > 0) OR (( CAST(CHARINDEX(UPPER(@p__linq__1), UPPER([Extent1].[FirstName])) AS int)) > 0))
    )  AS [Project1]
    ORDER BY [Project1].[LastName] ASC
    OFFSET 3 ROWS FETCH NEXT 3 ROWS ONLY ',N'@p__linq__0 nvarchar(4000),@p__linq__1 nvarchar(4000)',@p__linq__0=N'a',@p__linq__1=N'a'

Some folks prefer to create SQL views for their “read” models, but that seems like a lot of work. AutoMapper projections give you the same concept as a SQL view, except the view is defined as a projected C# class instead of a SQL statement with joins. The result is the same, except I now define the projection once (in my read model) instead of twice (in my model and in a SQL view).
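The projection itself is just an ordinary AutoMapper map from entity to read model – a sketch using the static API of that era (the wrapping class is illustrative; the member names match by convention, so no explicit configuration is needed):

```csharp
public static class MappingConfig
{
    public static void Configure()
    {
        // Defined once at startup. ID, FirstMidName, LastName and
        // EnrollmentDate all match by name; ProjectTo<Model>() translates
        // this map into the narrow SELECT shown in the captured SQL, so the
        // full Student entity is never materialized.
        Mapper.CreateMap<Student, Model>();
    }
}
```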

My controller action becomes quite a bit slimmed down as a result:

public ViewResult Index(Index.Query query)
{
    var model = _mediator.Send(query);

    return View(model);
}

Slimmed down to the point where my controller action is really just a placeholder for defining a route (though helpful when my actions do more interesting things like Toastr popups etc).

Now that we’ve handled reads, what about writes?

Write models

Write models tend to be a bit easier; however, many of my write models have a read component to them. The page with the form is still a GET, even if it’s followed by a POST. This means there’s some duality between the GET/POST actions, and they’re a bit intertwined. That’s OK – I can still handle that with MediatR. First, let’s look at what we’re trying to refactor:

public ActionResult Edit(int? id)
{
    if (id == null)
    {
        return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
    }
    Student student = db.Students.Find(id);
    if (student == null)
    {
        return HttpNotFound();
    }
    return View(student);
}

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Edit([Bind(Include = "ID, LastName, FirstMidName, EnrollmentDate")]Student student)
{
    try
    {
        if (ModelState.IsValid)
        {
            db.Entry(student).State = EntityState.Modified;
            db.SaveChanges();
            return RedirectToAction("Index");
        }
    }
    catch (RetryLimitExceededException /* dex */)
    {
        //Log the error (uncomment dex variable name and add a line here to write a log.
        ModelState.AddModelError("", "Unable to save changes. Try again, and if the problem persists see your system administrator.");
    }
    return View(student);
}

Ah, the notorious “Bind” attribute with magical request binding to entities. Let’s not do that. First, I need to build the models for the GET side, knowing that the result is going to be my command. I create an input for the GET action with the POST model being my result:

public class Query : IAsyncRequest<Command>
{
    public int? Id { get; set; }
}

public class QueryValidator : AbstractValidator<Query>
{
    public QueryValidator()
    {
        RuleFor(m => m.Id).NotNull();
    }
}

public class Command : IAsyncRequest
{
    public int ID { get; set; }
    public string LastName { get; set; }

    [Display(Name = "First Name")]
    public string FirstMidName { get; set; }

    public DateTime? EnrollmentDate { get; set; }
}
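Validation for the command layers on the same way as the QueryValidator above – a hedged sketch (these specific rules are illustrative, not from the original app):

```csharp
public class CommandValidator : AbstractValidator<Command>
{
    public CommandValidator()
    {
        // Illustrative rules only: require a last name and cap its length
        RuleFor(m => m.LastName).NotEmpty().Length(1, 50);
        RuleFor(m => m.EnrollmentDate).NotNull();
    }
}
```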

One nice side effect of building around queries/commands is it’s easy to layer on tools like FluentValidation. The command itself is based on exactly what information is needed to process the command, and nothing more. My views are built around this model, projected from the database as needed:

public class QueryHandler : IAsyncRequestHandler<Query, Command>
{
    private readonly SchoolContext _db;

    public QueryHandler(SchoolContext db)
    {
        _db = db;
    }

    public async Task<Command> Handle(Query message)
    {
        return await _db.Students
            .Where(s => s.ID == message.Id)
            .ProjectToSingleOrDefaultAsync<Command>();
    }
}

Again, I skip the write model and go straight to SQL to project to my write model’s read side.

Finally, for the POST, I just need to build out the handler for the command:

public class CommandHandler : AsyncRequestHandler<Command>
{
    private readonly SchoolContext _db;

    public CommandHandler(SchoolContext db)
    {
        _db = db;
    }

    protected override async Task HandleCore(Command message)
    {
        var student = await _db.Students.FindAsync(message.ID);

        Mapper.Map(message, student);
    }
}

Okay, so this command handler is very, very simple – simple enough that I can use AutoMapper to map values back in. Most of the time in my systems, they’re not so simple: approving invoices, notifying downstream systems, keeping invariants satisfied. Unfortunately, Contoso University is a simple application, but I could have something more complex, like updating course credits:

public class CommandHandler : IAsyncRequestHandler<Command, int>
{
    private readonly SchoolContext _db;
 
    public CommandHandler(SchoolContext db)
    {
        _db = db;
    }
 
    public async Task<int> Handle(Command message)
    {
        var rowsAffected = await _db.Database
            .ExecuteSqlCommandAsync("UPDATE Course SET Credits = Credits * {0}", message.Multiplier);
 
        return rowsAffected;
    }
}

I have no idea why I’d need to do this action, but you get the idea. However complex my write side becomes, it’s scoped to this. In fact, I can often refactor my domain model to handle its own command handling:

public class CommandHandler : AsyncRequestHandler<Command>
{
    private readonly SchoolContext _db;
 
    public CommandHandler(SchoolContext db)
    {
        _db = db;
    }
 
    protected override async Task HandleCore(Command message)
    {
        var student = await _db.Students.FindAsync(message.ID);
 
        student.Handle(message);
    }
}

All the logic in processing the command is inside my domain model, fully encapsulated and unit-testable. My handler just acts as a means to get the domain model out of the persistent store. The advantage to my command handler now is that I can refactor towards a fully encapsulated, behavioral domain model without changing anything else in my application. My controller is none the wiser:

public async Task<ActionResult> Edit(Edit.Query query)
{
    var model = await _mediator.SendAsync(query);

    return View(model);
}

[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Edit(Edit.Command command)
{
    await _mediator.SendAsync(command);

    return this.RedirectToActionJson(c => c.Index(null));
}

That’s why I don’t worry too much about behavioral models up front – it’s just a refactoring exercise when I see code smells pop up inside my command handlers. When command handlers get gnarly (NOT BEFORE), just push the behavior down to the domain objects as needed using decades-old refactoring techniques.

That’s it! MediatR and AutoMapper together to help refactor towards CQRS, encapsulating logic and behavior together into command/query objects and handlers. Our domain model on the read side merely becomes a means to derive projections, just as Views are means to build SQL projections. We have a common interface with one model in, one model out to center around and any cross-cutting concerns like validation can be defined around those models.


Saga Implementation Patterns: Singleton

Fri, 04/17/2015 - 17:38

NServiceBus sagas are great tools for managing asynchronous business processes. We use them all the time for dealing with long-running transactions, integration, and even places we just want to have a little more control over a process.

Occasionally we have a process where we really only need one instance running at a time. In our case, it was a process to manage periodic updates from an external system. In the past, I’ve used Quartz with NServiceBus to perform job scheduling, but for processes where I want to keep a little more information about what’s been processed, I can’t extend Quartz jobs as easily as NServiceBus saga data. NServiceBus also provides a scheduler for simple jobs, but those jobs don’t have persistent data, which you might want to keep for a periodic process.

Regardless of why you’d want only one saga entity around, with a singleton saga you run into the issue of a Start message arriving more than once. You have two options here:

  1. Create a correlation ID that is well known
  2. Force a creation of only one saga at a time

I didn’t really like the first option, since it requires whoever starts the saga to provide some bogus correlation ID and never, ever change that ID. I don’t like things that I could potentially screw up, so I prefer the second option. First, we create our saga and saga entity:

public class SingletonSaga : Saga<SingletonData>,
    IAmStartedByMessages<StartSingletonSaga>,
    IHandleTimeouts<SagaTimeout>
{
    protected override void ConfigureHowToFindSaga(
    	SagaPropertyMapper<SingletonData> mapper)
    {
    	// no-op
    }

    public void Handle(StartSingletonSaga message)
    {
        if (Data.HasStarted)
        {
            return;
        }

        Data.HasStarted = true;
        
        // Do work like request a timeout
        RequestTimeout(TimeSpan.FromSeconds(30), new SagaTimeout());
    }
    
    public void Timeout(SagaTimeout state)
    {
    	// Send message or whatever work
    }
}
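The saga data itself is trivial – assuming it derives from NServiceBus’s ContainSagaData (which supplies the required Id/Originator/OriginalMessageId members), it only adds our flag:

```csharp
public class SingletonData : ContainSagaData
{
    // Set the first time StartSingletonSaga is handled; duplicate start
    // messages then become no-ops
    public bool HasStarted { get; set; }
}
```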

Our saga entity has a property “HasStarted” that’s just used to track that we’ve already started. Our process in this case is a periodic timeout and we don’t want two sets of timeouts going. We leave the message/saga correlation piece empty, as we’re going to force NServiceBus to only ever create one saga:

public class SingletonSagaFinder
    : IFindSagas<SingletonData>.Using<StartSingletonSaga>
{
    public NHibernateStorageContext StorageContext { get; set; }

    public SingletonData FindBy(StartSingletonSaga message)
    {
        return StorageContext.Session
            .QueryOver<SingletonData>()
            .SingleOrDefault();
    }
}

With our custom saga finder, we only ever return the one saga entity from persistent storage, or nothing. This, combined with the guard in our StartSingletonSaga handler, ensures we only ever run the first-time logic once.

That’s it! NServiceBus sagas are handy because of their simplicity and flexibility, and implementing something like a singleton saga is just about as simple as it gets.


Clean Tests: Database Persistence

Tue, 04/07/2015 - 18:13

A couple of posts ago, I walked through my preferred solution of isolating database state using intelligent database wiping with Respawn. Inside a test, we still need to worry about persisting items.

This is where things can get a bit tricky. We have to worry about transactions, connections, ORMs (maybe), lazy loading, first-level caches and more. When it comes to figuring out which direction to go in setting up a test environment, I tend to default to matching production behavior. Too many times I’ve been burned by bizarre test behavior, only to find my test fixture/environment doesn’t match any plausible or possible production scenario. It’s one thing to simplify and isolate; it’s another to operate in a bizarro world.

In production environments, I deal with a single unit of work per request, whether that request is a command in a thick client app, a web API call, or a server-side MVC request. The world is built up and torn down on every request, creating a lovely stateless environment.

The kicker is that I often need to deal with ORMs, and barring that, some sort of unit of work mechanism even if it’s a PetaPoco DB object. When I set up state, I want nothing shared between the Setup part and the Execute step of my test.

Each of these steps – Setup, Execute and Verify – is isolated from the others. With my apps, the Execute step is easy to put inside an isolated unit of work since I’m using MediatR, so I’ll just need to worry about Setup and Verify.

I want something flexible that works with different styles of tests, not something implicit like a Before/After hook in my tests. It needs to be completely obvious that “these things are in a unit of work”. Luckily, I have a good hook for this: the Fixture object I use as the central point of my test setup.

Setup

At the setup portion of my tests, I’m generally only saving things. In that case, I can just create a helper method in my test fixture to build up a DbContext (in the case of Entity Framework) and save some things:

public void Txn(Action<MyContext> action)
{
    using (var dbContext = new MyContext())
    using (var txn = dbContext.Database.BeginTransaction())
    {
        try
        {
            action(dbContext);
            dbContext.SaveChanges();
            txn.Commit();
        }
        catch (Exception)
        {
            txn.Rollback();
            throw;
        }
    }
}

We create our context, open a transaction, perform whatever action and commit/rollback our transaction. With this method, we now have a simple way to perform any action in an isolated transaction without our test needing to worry about the semantics of transactions, change tracking and the like. We can create a convenience method to save a set of entities:

public void Save(params object[] entities)
{
    Txn(dbContext =>
    {
        foreach (var entity in entities)
        {
            var entry = dbContext.ChangeTracker
                .Entries()
                .FirstOrDefault(entityEntry => entityEntry.Entity == entity);

            if (entry == null)
            {
                dbContext.Set(entity.GetType()).Add(entity);
            }
        }
    });
}

And finally in our tests:

public InvoiceApprovalTests(Invoice invoice,
    [Fake] IApprovalService mockService,
    IInvoiceApprover invoiceApprover,
    SlowTestFixture fixture)
{
    fixture.Save(invoice);

We still have our entities to be used in our tests, but they’re now detached and isolated from any ORMs. When we get to Verify, we’ll look at reloading these entities. But first, let’s look at Execute.

Execute

As I mentioned earlier, for most of the apps I build today, requests are funneled through MediatR. This provides a nice uniform interface and a jumping-off point for any additional behaviors/extensions. A side benefit is that the Execute step in my tests is usually just a Send call (unless it’s a unit test against the domain model directly).

In production, there’s a context set up, a transaction started, and a request made and sent down to MediatR. Some of these steps, however, are embedded in extension points of the environment, and even if extracted out, they’re started from extension points. Take transactions, for example: I hook these up using filters/modules. To use that exact execution path, I would need to stand up a dummy server.

That’s a little much, but I can at least do the same things I was doing before. I like to treat the Fixture as the fixture for Execute, and isolate Setup and Verify. If I do this, then I just need a little helper method to send a request and get a response, all inside a transaction:

public TResult Send<TResult>(IRequest<TResult> message)
{
    var context = Container.GetInstance<MyContext>();
    var mediator = Container.GetInstance<IMediator>();
    DbContextTransaction txn = null;
    TResult result;

    try
    {
        txn = context.Database.BeginTransaction();
        result = mediator.Send(message);
        txn.Commit();
    }
    catch (Exception)
    {
        txn?.Rollback();
        throw;
    }

    return result;
}

It looks very similar to the “Txn” method I built earlier, except I’m treating the child container as part of my context and retrieving all items from it, including any ORM class. Sending a request like this ensures that when I’m done with Send in my test method, everything is completely done and persisted:

public InvoiceApprovalTests(Invoice invoice, 
    [Fake] IApprovalService mockService,
    SlowTestFixture fixture)
{
    fixture.Save(invoice);

    A.CallTo(() => mockService.CheckApproval(invoice.Id)).Returns(true);

    fixture.Send(new ApproveInvoiceCommand {InvoiceId = invoice.Id});

My class under test now routes through this handler:

public class ApproveInvoiceHandler : RequestHandler<ApproveInvoiceCommand>
{
    private readonly MyContext _context;
    private readonly IInvoiceApprover _invoiceApprover;

    public ApproveInvoiceHandler(MyContext context,
        IInvoiceApprover invoiceApprover)
    {
        _context = context;
        _invoiceApprover = invoiceApprover;
    }

    protected override void HandleCore(ApproveInvoiceCommand message)
    {
        var invoice = _context.Set<Invoice>().Find(message.InvoiceId);

        _invoiceApprover.Approve(invoice);
    }
}

With my Execute built around a uniform interface with reliable, repeatable results, all that’s left is the Verify step.

Verify

Failures around Verify typically arise because I’m verifying against in-memory objects that haven’t been rehydrated. A test might pass or fail because I’m asserting against the result from a method, but in actuality a user makes a POST, something mutates, and a subsequent GET retrieves the new information. I want to reliably recreate this flow in my tests, but not go through all the hoops of making requests. I need to make a fresh request to the database, bypassing any caches, in-memory objects and the like.

One way to do this is to reload an item:

public void Reload<TEntity, TIdentity>(
    ref TEntity entity,
    TIdentity id)
    where TEntity : class
{
    TEntity e = entity;

    Txn(ctx => e = ctx.Set<TEntity>().Find(id));

    entity = e;
}

I pass in the entity I want to reload along with its ID. Inside a transaction and a fresh DbContext, I reload the entity and set it as the ref parameter in my method. In my test, I can then use this reloaded entity as the thing I assert against:

public InvoiceApprovalTests(Invoice invoice, 
    [Fake] IApprovalService mockService,
    SlowTestFixture fixture)
{
    fixture.Save(invoice);

    var invoiceId = invoice.Id;

    A.CallTo(() => mockService.CheckApproval(invoiceId)).Returns(true);

    fixture.Send(new ApproveInvoiceCommand {InvoiceId = invoiceId});

    fixture.Reload(ref invoice, invoiceId);

    _invoice = invoice;
}

In this case, I tend to prefer the “ref” argument rather than something like “foo = fixture.Reload(foo, foo.Id)”, but I might be in the minority here.

With these patterns in place, I can rest assured that my Setup, Execute and Verify are appropriately isolated and match production usage as much as possible. When my tests match reality, I’m far less likely to get myself in trouble with false positives/negatives and I can have much greater confidence that my tests actually reduce bugs.
