
Jimmy Bogard
Strong opinions, weakly held

Integrating AutoMapper with ASP.NET Core DI

Wed, 07/20/2016 - 18:30

Part of the release of ASP.NET Core is a new DI framework that’s completely integrated with the ASP.NET pipeline. Previous ASP.NET frameworks either had no DI or used service location in various forms to resolve dependencies. One of the nice things about a completely integrated container (not just a means to resolve dependencies, but to register them as well) is that it’s much easier to develop plugins for the framework that bridge your OSS project and the ASP.NET Core app. I already did this with MediatR and HtmlTags, but wanted to walk through how I did this with AutoMapper.

Before I got started, I wanted to understand what the pain points of integrating AutoMapper with an application are. The biggest one seems to be the Initialize call; most systems I work with use AutoMapper Profiles to define configuration (instead of one ginormous Initialize block). If you have a lot of these, you don’t want a bunch of AddProfile calls in your Initialize method; you want the Profiles to be discovered. So first off: solving the Profile discovery problem.

Next is deciding between the static versus instance way of using AutoMapper. It turns out that most everyone really wants to use the static way of AutoMapper, but this can pose a problem in certain scenarios. If you’re building a resolver, you’re often building one with dependencies on things like a DbContext or ISession, an ORM/data access thingy:

public class LatestMemberResolver : IValueResolver<object, object, User> {
  private readonly AppContext _dbContext;
  public LatestMemberResolver(AppContext dbContext) {
    _dbContext = dbContext;
  }
  
  public User Resolve(object source, object destination, User destMember, ResolutionContext context) {
    return _dbContext.Users.OrderByDescending(u => u.SignUpDate).FirstOrDefault();
  }
}

With the new DI framework, the DbContext would be a scoped dependency, meaning you’d get one of those per request. But how would AutoMapper know how to resolve the value resolver correctly?

The easiest way is to also scope an IMapper to a request, as its constructor takes a function to build value resolvers, type converters, and member value resolvers:

IMapper mapper 
  = new Mapper(Mapper.Configuration, t => ServiceLocator.Resolve(t));

The caveat is you have to use an IMapper instance, not the static Mapper.Map methods. There’s a way to pass the constructor function in to a Mapper.Map call, but you have to pass it in *every single time*, which makes it not very useful:

Mapper.Map<User, UserModel>(user, 
  opt => opt.ConstructServicesUsing(t => ServiceLocator.Resolve(t)));

Finally, if you’re using AutoMapper projections, you’d like to stick with the static initialization. Since the projection piece is an extension method, there’s no way to resolve dependencies other than passing them in, or service location. With static initialization, I know exactly where to go to look for AutoMapper configuration. Instance-based, you have to pass in your configuration to every single ProjectTo call.

In short, I want static initialization for configuration, but instance-based usage of mapping. Call Mapper.Initialize, but create mapper instances from the static configuration.

Initializing the container and AutoMapper

Before I worry about configuring the container (the IServiceCollection object), I need to initialize AutoMapper. I’ll assume that you’re using Profiles, and I’ll simply scan through a list of assemblies for anything that is a Profile:

private static void AddAutoMapperClasses(IServiceCollection services, IEnumerable<Assembly> assembliesToScan)
{
    assembliesToScan = assembliesToScan as Assembly[] ?? assembliesToScan.ToArray();

    var allTypes = assembliesToScan.SelectMany(a => a.ExportedTypes).ToArray();

    var profiles =
    allTypes
        .Where(t => typeof(Profile).GetTypeInfo().IsAssignableFrom(t.GetTypeInfo()))
        .Where(t => !t.GetTypeInfo().IsAbstract);

    Mapper.Initialize(cfg =>
    {
        foreach (var profile in profiles)
        {
            cfg.AddProfile(profile);
        }
    });

The assembly list can come from a list of assemblies or types passed in to mark assemblies, or I can just look at what assemblies are loaded in the current DependencyContext (the thing ASP.NET Core populates with discovered assemblies):

public static void AddAutoMapper(this IServiceCollection services)
{
    services.AddAutoMapper(DependencyContext.Default);
}

public static void AddAutoMapper(this IServiceCollection services, DependencyContext dependencyContext)
{
    services.AddAutoMapper(dependencyContext.RuntimeLibraries
        .SelectMany(lib => lib.GetDefaultAssemblyNames(dependencyContext).Select(Assembly.Load)));
}
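
The overloads that take explicit assemblies or marker types can simply funnel into the same AddAutoMapperClasses method shown above. A minimal sketch of what those might look like (the exact signatures in the published package may differ slightly):

public static void AddAutoMapper(this IServiceCollection services, params Assembly[] assemblies)
{
    AddAutoMapperClasses(services, assemblies);
}

public static void AddAutoMapper(this IServiceCollection services, params Type[] profileAssemblyMarkerTypes)
{
    AddAutoMapperClasses(services,
        profileAssemblyMarkerTypes.Select(t => t.GetTypeInfo().Assembly));
}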

Next, I need to add all value resolvers, type converters, and member value resolvers to the container. Not every value resolver etc. needs to be built by the container, and if you don’t pass in a constructor function AutoMapper won’t use the container at all, but this is a safeguard in case something needs to resolve these AutoMapper service classes:

var openTypes = new[]
{
    typeof(IValueResolver<,,>),
    typeof(IMemberValueResolver<,,,>),
    typeof(ITypeConverter<,>)
};
foreach (var openType in openTypes)
{
    foreach (var type in allTypes
        .Where(t => t.GetTypeInfo().IsClass)
        .Where(t => !t.GetTypeInfo().IsAbstract)
        .Where(t => t.ImplementsGenericInterface(openType)))
    {
        services.AddTransient(type);
    }
}

I loop through every class, check whether it implements one of the open generic interfaces I’m interested in, and if so, register it as transient in the container. The “ImplementsGenericInterface” method doesn’t exist in the BCL, but it probably should :)
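
Here’s one way that helper could look, as a small reflection extension method. This is my own sketch (name and placement are up to you), not a BCL or AutoMapper API:

public static bool ImplementsGenericInterface(this Type type, Type openGenericInterface)
{
    // True if any interface implemented by the type closes the given open generic interface
    return type.GetTypeInfo().ImplementedInterfaces
        .Any(i => i.GetTypeInfo().IsGenericType
               && i.GetGenericTypeDefinition() == openGenericInterface);
}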

Finally, I register the mapper configuration and mapper instances in the container:

services.AddSingleton(Mapper.Configuration);
services.AddScoped<IMapper>(sp => 
  new Mapper(sp.GetRequiredService<IConfigurationProvider>(), sp.GetService));

While the configuration is static, every IMapper instance is scoped to a request, passing in the constructor function from the service provider. This means that AutoMapper will get the correct scoped instances to build its value resolvers, type converters etc.

With that in place, it’s now trivial to add AutoMapper to an ASP.NET Core application. After I create the Profiles that contain my AutoMapper configuration, I instruct the container to add AutoMapper (now released as the AutoMapper.Extensions.Microsoft.DependencyInjection NuGet package):

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();

    services.AddAutoMapper();
}

And as long as I make sure to add this after the MVC services are registered, it correctly loads up all the discovered assemblies and initializes AutoMapper. If not, I can always instruct the initialization to look in specific types/assemblies for Profiles. I can then use AutoMapper statically or instance-based in a controller:

public class UserController {
  private readonly IMapper _mapper;
  private readonly AppContext _dbContext;
  public UserController(IMapper mapper, AppContext dbContext) {
    _mapper = mapper;
    _dbContext = dbContext;
  }
  
  public IActionResult Index() {
    var users = _dbContext.Users
      .ProjectTo<UserIndexModel>()
      .ToList();
      
    return View(users);
  }
  
  public IActionResult Show(int id) {
    var user = _dbContext.Users.Where(u => u.Id == id).Single();
    var model = _mapper.Map<User, UserIndexModel>(user);
    
    return View(model);
  }
}

The projections use the static configuration, while the instance-based uses any potential injected services. Just about as simple as it can get!

Other containers

While the new AutoMapper extensions package is specific to ASP.NET Core DI, it’s also how I would initialize and register AutoMapper with any container. Previously, I would lean on DI containers for assembly scanning purposes, finding all Profile classes, but this had the unfortunate side effect that Profiles could themselves have dependencies – a very bad idea! With the pattern above, it should be easy to extend to any other DI container.
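
As an example of carrying the pattern over, here’s roughly how the equivalent registration could look in Autofac. This is a sketch under the assumption that Mapper.Initialize has already run and that resolver/converter types are registered separately; it’s not part of the extensions package:

var builder = new ContainerBuilder();

// The static configuration stays the single source of truth
builder.RegisterInstance(Mapper.Configuration).As<IConfigurationProvider>();

// A lifetime-scoped IMapper that resolves value resolvers/type converters from Autofac
builder.Register(c =>
{
    var context = c.Resolve<IComponentContext>();
    return new Mapper(Mapper.Configuration, t => context.Resolve(t));
})
.As<IMapper>()
.InstancePerLifetimeScope();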


MediatR Extensions for Microsoft Dependency Injection Released

Tue, 07/19/2016 - 21:07

To help those building applications using the new Microsoft DI libraries (used in Orleans, ASP.NET Core, etc.), I pushed out a helper package to register all of your MediatR handlers into the container.

MediatR.Extensions.Microsoft.DependencyInjection

To use it, just call the AddMediatR method wherever you have your service configuration at startup:

public void ConfigureServices(IServiceCollection services)
{
  services.AddMvc();

  services.AddMediatR(typeof(Startup));
}

You can either pass in the assemblies where your handlers are, or you can pass in Type objects from assemblies where those handlers reside. The extension will add the IMediator interface to your services, all handlers, and the correct delegate factories to load up handlers. Then in your controller, you can just use an IMediator dependency:

public class HomeController : Controller
{
  private readonly IMediator _mediator;

  public HomeController(IMediator mediator)
  {
    _mediator = mediator;
  }
  public IActionResult Index()
  {
    var pong = _mediator.Send(new Ping {Value = "Ping"});
    return View(pong);
  }
}
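
The Ping request and its handler aren’t shown above; a minimal sketch of what they might look like (types and names are mine), using the synchronous handler interface MediatR exposed at the time:

public class Ping : IRequest<Pong>
{
  public string Value { get; set; }
}

public class Pong
{
  public string Value { get; set; }
}

public class PingHandler : IRequestHandler<Ping, Pong>
{
  public Pong Handle(Ping message)
  {
    // Echo the value back so the controller has something to render
    return new Pong { Value = message.Value + " Pong" };
  }
}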

And you’re good to go. Enjoy!


HtmlTags 4.1 Released for ASP.NET 4 and ASP.NET Core

Mon, 07/18/2016 - 20:20

One of the libraries that I use on most projects (but probably don’t talk about much) is now updated for the latest ASP.NET Core MVC. In order to do so, I broke out the classic ASP.NET and ASP.NET Core pieces into separate NuGet packages.

Since ASP.NET Core supports DI from the start, it’s quite a bit easier to integrate HtmlTags into your ASP.NET Core application. To enable HtmlTags, you can call AddHtmlTags in the method used to configure services in your startup (typically where you’d have the AddMvc method):

services.AddHtmlTags(reg =>
{
    reg.Labels.IfPropertyIs<bool>()
       .ModifyWith(er => er.CurrentTag.Text(er.CurrentTag.Text() + "?"));
});

The AddHtmlTags method takes a configuration callback, a params array of HtmlConventionRegistry objects, or an entire HtmlConventionLibrary. The one with the configuration callback includes some sensible defaults, meaning you can pretty much immediately use it in your views.

The HtmlTags.AspNetCore package includes extensions directly for IHtmlHelper, so you can use it in your Razor views quite easily:

@Html.Label(m => m.FirstName)
@Html.Input(m => m.FirstName)
@Html.Tag(m => m.FirstName, "Validator")

@Html.Display(m => m.Title)

Since I’m hooked into the DI pipeline, you can make tag builders that pull in a DbContext and populate a list of radio buttons or drop-down items from a table (for example). And since it’s all object-based, your tag conventions are easily testable, unlike the tag helpers, which are solely string-based.

Enjoy!


AutoMapper 5.0 Released

Thu, 07/07/2016 - 17:42

Release notes:

Today I pushed out AutoMapper 5.0.1, the culmination of about 9 months of work from myself and many others to build a better, faster AutoMapper. Technically I pushed out a 5.0.0 package last week, but it turns out that almost nobody really pulls down beta packages to submit bugs so this package fixes the bugs reported from the 5.0.0 drop :)

The last 4.x release introduced an instance-based configuration model for AutoMapper, and with 5.0, we’re able to take advantage of that model to focus on speed. So how much faster? In our benchmarks, 20-50x faster. Compared to hand-rolled mappings, we’re still around 8-10x slower, mostly because we’re taking care of null references, providing diagnostics, good exception messages and more.

To get there, we’ve converted the runtime mappings to a single compiled expression, making it as blazing fast as we can. There’s still some micro-optimizations possible, which we’ll look at for the next dot release, but the gains so far have been substantial. Since compiled expressions give you zero stack trace if there’s a problem, we made sure to preserve all of the great diagnostic/error information to figure out how things went awry.
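
To make the idea concrete, here’s a tiny illustration of the general technique (not AutoMapper’s actual internals): the mapping plan is built once as an expression tree and compiled to a delegate, so the per-map cost is just invoking compiled code. The Source/Dest types are hypothetical:

public class Source { public string Name { get; set; } }
public class Dest   { public string Name { get; set; } }

// Build the mapping plan once as an expression tree...
var src = Expression.Parameter(typeof(Source), "src");
var plan = Expression.Lambda<Func<Source, Dest>>(
    Expression.MemberInit(
        Expression.New(typeof(Dest)),
        Expression.Bind(typeof(Dest).GetProperty("Name"),
            Expression.Property(src, "Name"))),
    src);

// ...compile it to a delegate and reuse it for every map
Func<Source, Dest> map = plan.Compile();
var dest = map(new Source { Name = "AutoMapper" });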

We’ve also expanded many of the configuration options, and tightened the focus. Originally, AutoMapper would do things like keep track of every single mapped object during mapping, which made mapping insanely slow. Instead, we’re putting the controls back into the developer’s hands of exactly when to use what feature, and our expression builder builds the exact mapping plan based on how you’ve configured your mappings.

This did mean some breaking changes to the API, so to help ease the transition, I’ve included a 5.0 upgrade guide in the wiki.

Enjoy!


AutoMapper 5.0 speed increases

Fri, 06/24/2016 - 23:43

Just an update on the work we’ve been doing to speed up AutoMapper. I’ve captured times to map some common scenarios (1M mappings). Time is in seconds:

Version     Flattening   Ctor     Complex    Deep
Native      0.0148       0.0060   0.9615     0.2070
5.0         0.2203       0.1791   2.5272     1.4054
4.2.1       4.3989       1.5608   134.39     29.023
3.3.1       4.7785       1.3384   72.812     34.485
2.2.1       5.1175       1.7855   122.0081   35.863
1.1.0.118   6.7143       n/a      29.222     38.852

The complex mappings had the biggest variation, but across the board AutoMapper is *much* faster than previous versions. Sometimes 20x faster, 50x in others. It’s been a ton of work to get here, mainly from the change in having a single configuration step that let us build execution plans that exactly target your configuration. We now build up an expression tree for the mapping plan based on the configuration, instead of evaluating the same rules over and over again.

We *could* get marginally faster than this, but that would require us sacrificing diagnostic information or not handling nulls etc. Still, not too shabby, and in the same ballpark as the other mappers (faster than some, marginally slower than others) out there. With this release, I think we can officially stop labeling AutoMapper as “slow” ;)

Look for the 5.0 release to drop with the release of .NET Core next week!


10 Lessons from a Long Running DDD Project – Part 2

Mon, 06/20/2016 - 21:04

In Part 1 of this 2-part series, I walked through some lessons learned from the first incarnation of our project. The original project I’d still qualify as a success, in that it was delivered on time, within budget, and is still under active development today. But we learned a lot of lessons from that project, and we were lucky enough to have another crack at it, so to speak, when we started a new project in almost exactly the same domain, but this time with quite different constraints.

In the first project, we targeted everyone that could possibly be involved with the overall process. This wound up being a dozen state agencies and countless other groups and sub-groups. That meant quite a lot of contention in the model (also a great reason why you can never have a single master data model for an entire enterprise). We felt good about the software itself – it was modular and easy to extend, but the domain model itself just couldn’t satisfy all the users involved, only really a subset.

The second project targeted only a single aspect of the original overall legal process – the prosecution agency. Targeting just a single group, actually a single agency, brought tremendous benefits for us.

Lesson 6: Cohesiveness brings greater clarity and deeper insight

Our initial conversations in the second project were somewhat colored by our first project. We started with an assumption that the core focus, the core domain would be at least the same as the monolith, but maybe a different view of it. We were wrong.

In the new version of the app, the entire focus of the system revolves around “cases”. I know, crazy that an app built for the day-to-day functions of a prosecution agency focuses centrally on a case:

[image: the case-centric domain model]

Once we settled on the core domain, the possibilities then greatly opened up for modeling around that concept. Because the first app only tangentially dealt with cases (there wasn’t even a “Case” in the original model), it was more or less an impedance mismatch for its users in the prosecution agency. It was a bit humbling to hear the feedback from the prosecutors about the first project.

But in the second project, because our core domain was focused, we could spend much more time modeling workflows and behaviors that fit what the prosecution agency actually needed.

Lesson 7: Be flexible where you need to, rigid in others

Although we were able to come to a consensus amongst prosecution agencies about what a case was, what the key things you could DO with a case were and the like, we couldn’t get any consensus about how a case should be managed.

This makes a lot of sense – the state has legal reporting requirements and the courts have a ton of procedural rules, but internal to an agency, they’re free to manage the work any way they wanted to.

In the first system, roles were baked in to the system, causing a lot of confusion for counties where one person wore many different hats. In the new system, permissions were hard-coded against tasks, but not roles:

[image: code tying application tasks to hard-coded permissions]

The Permission here is an enum, and we tied permissions to tasks like “Approve Case” and “Add Evidence” and “Submit Disposition” etc. Those were directly tied to actions in our application, and you couldn’t add new permissions without modifying the code.

Roles (or groups, whatever you want to call them) were not hardcoded; it was left completely up to each agency how they organized their work and decided who could do what.
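
A rough sketch of the shape this takes; the type names here are illustrative, not the actual system’s:

public enum Permission
{
    ApproveCase,
    AddEvidence,
    SubmitDisposition
    // ...one value per concrete action in the application
}

public class Role
{
    public string Name { get; set; }                          // defined per agency
    public ICollection<Permission> Permissions { get; set; }  // drawn from the fixed set above
}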

With DDD it’s important to model both the rigid and the flexible; they’re equally important in the overall model you build.

Lesson 8: Sometimes you need to invent a model

While we were able to model quite well the actions one can perform with an individual case, it was immediately apparent when visiting different county agencies that their workflows varied significantly inside their departments.

This meant we couldn’t do things like implement a workflow internal to a case itself – everyone’s workflow was different. The only things we could really embed were the procedural/legal rules in our behaviors; everything else was up for grabs. But we still wanted to manage workflows for everyone.

In this case, we needed to build consensus for a model that didn’t really exist in each county in isolation. If we focused on a single county, we could have baked the rules about how a case is managed into their individual system. But since we were building a system across counties, we needed to build a model that satisfied all agencies:

[image: the configurable workflow meta-model]

In this model, we explicitly built a configurable workflow, with states and transitions and security roles around who could perform those transitions. While no individual county had this model, it was the meta-model we found while looking across all counties.
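
A sketch of the kind of meta-model this describes (again, illustrative names only):

public class WorkflowState
{
    public string Name { get; set; }
    public ICollection<WorkflowTransition> Transitions { get; set; }
}

public class WorkflowTransition
{
    public string Name { get; set; }
    public WorkflowState To { get; set; }
    public ICollection<Role> AllowedRoles { get; set; }  // who may perform this transition
}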

Lesson 9: Don’t blindly follow pattern advice

In the new app, I performed an experiment. I would only add tools, patterns, and libraries when the need presented itself but no sooner. This meant I didn’t add a repository, unit of work, services, really anything until an actual pain surfaced. Most of the DDD books these days have prescriptive guidance about what your domain model should look like, how you should do repositories and so on, but I wanted to see if I could simply arrive at these patterns by code smells and refactoring.

The funny thing is, I never did. We left out those patterns, and we never found a need to put them back in. Instead, we drove our usage around CQRS and the mediator pattern (something I’ve used for years, and finally extracted our internal usage into MediatR). Our controllers were pretty uniform in their appearance:

[image: a typical controller action delegating to MediatR]
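
The original screenshot isn’t recoverable here, but the shape was roughly this (hypothetical Case types, with MediatR doing the work):

public class CaseController : Controller
{
    private readonly IMediator _mediator;

    public CaseController(IMediator mediator)
    {
        _mediator = mediator;
    }

    public IActionResult Index(CaseIndexQuery query)
    {
        var model = _mediator.Send(query);
        return View(model);
    }
}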

And the handlers themselves (as I’ve blogged about many times) were tightly focused on a single action, with no need to abstract anything:

[image: a request handler focused on a single action]
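
And a matching handler, again sketched with hypothetical types rather than the actual code:

public class CaseIndexQueryHandler : IRequestHandler<CaseIndexQuery, CaseIndexModel>
{
    private readonly AppContext _dbContext;

    public CaseIndexQueryHandler(AppContext dbContext)
    {
        _dbContext = dbContext;
    }

    public CaseIndexModel Handle(CaseIndexQuery message)
    {
        // One query, one projection, no extra abstraction layers
        var cases = _dbContext.Cases.ProjectTo<CaseSummaryModel>().ToList();

        return new CaseIndexModel { Cases = cases };
    }
}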

I’ve extended this to other areas of development too, like front-end development. It’s actually kinda crazy how far you can get without jQuery these days, if you just use lodash and the DOM.

Lesson 10: Microservices and anti-corruption layers are your friend

There is a downside to going to bounded contexts and away from the “majestic monolith”, and that’s integration. Now that we have an application solely dealing with one agency, we have to communicate between different applications.

This turned out to be a bit easier than we thought, however. This domain existed well before computers, so the interfaces between the prosecution agency and external parties/agencies/systems were very well established.

This was also the section of the book skipped the most, around anti-corruption layers and bounded contexts. We had to crack open that section of the book, dust it off, smell the smell of pages never before read, and figure out how we should tackle integration.

We have quite a bit of experience in this area, it turns out, so it was really just a matter of deciding for each 3rd party what kind of integration would work best.

[image: integration styles chosen per third party]

For some 3rd parties, we could create an entirely separate app with no integration. Some needed a special app that performed the translation and anti-corruption layer, and some needed an entirely separately deployed app that communicated to our system via hypermedia-rich REST APIs.

Regardless, we never felt we had to build a single solution for all involved. We instead picked the right integration for the job, with an eye of not reinventing things as we went.

Conclusion

In both cases, I’d say both our systems were successful, since they shipped and are both being used and extended to this day. With the more tightly focused domain in the second system we were able to achieve that “greater insight” that the DDD book talks about.

In case anyone wonders, I intentionally did not talk about actors or event sourcing in this series – both things we’ve done and shipped, but found the applicability to be limited to inside a bounded context (or even more typically, a corner of a bounded context). Another post for another day!


10 Lessons from a Long Running DDD Project – Part 1

Mon, 06/13/2016 - 18:14

Round about 7 years ago, I was part of a very large project which rooted its design and architecture around domain-driven design concepts. I’ve blogged a lot about that experience (and others), but one interesting aspect of the experience is we were afforded more or less a do-over, with a new system in a very similar domain. I presented this topic at NDC Oslo (recorded, I’ll post when available).

I had a lot of lessons learned from the code perspective, where things like AutoMapper, MediatR, Respawn and more came out of it. Feature folders, CQRS, conventional HTML with HtmlTags were used as well. But beyond just the code pieces were the broader architectural patterns that we more or less ignored in the first DDD system. We had a number of lessons learned, and quite a few were from decisions made very early in the project.

Lesson 1: Bounded contexts are a thing

Very early on in the first project, we laid out the personas for our application. This was also when Agile and Scrum were really starting to be used in the large, so we were all about using user stories, personas and the like.

We put all the personas on giant post-it notes on the wall. There was a problem. They didn’t fit. There were so many personas, we couldn’t look at all of them at once.

So we color coded them and divided them up based on lines of communication, reporting, agency, whatever made sense:

[image: color-coded persona post-it notes grouped on the wall]

Well, it turned out that those colors (just faked above) were perfect borders for bounded contexts. Also, it turns out that 72 personas for a single application is way, way too many.

Lesson 2: Ubiquitous language should be…ubiquitous

One of the side effects of cramming too many personas into one application is that we got to the point where some of the core domain objects had very generic names in order to have a name that everyone agreed upon.

We had a “Person” object, and everyone agreed what “person” meant. Unfortunately, this was only a name that the product owners agreed upon; no one else that would ever use the system would understand what that term meant. It was the lowest common denominator between all the different contexts, and in order to mean something to everyone, it could not contain behavior that applied to anyone.

When you have very generic names for core models that aren’t actually used by any domain expert, you have something worse than an anemic domain model – a generic domain model.

Lesson 3: Core domain needs consensus

We talked to various domain experts in many groups, and all had a very different perspective on what the core domain of the system was. Not what it should be, but what it was. For one group, it was the part that replaced a paper form, another it was the kids the system was intending to help, another it was bringing those kids to trial and another the outcome of those cases. Each has wildly different motivations and workflows, and even different metrics on which they are measured.

Beyond that, we had directly opposed motivations. While one group was focused on keeping kids out of jail, another was managing cases to put them in jail! With such different views, it was quite difficult to build a system that met the needs of both. Even to the point where the conduits to use were completely out of touch with the basic workflow of each group. Unsurprisingly, one group had to win, so the focus of the application was seen mostly through the lens of a single group.

Lesson 4: Ubiquitous language needs consensus

A slight variation on lesson 2, we had a core entity on our model where at least the name meant something to everyone in the working group. However, that something again varied wildly from group to group.

For one group, the term was in reference to a paper form filed. For another, something that was part of a case. For another, an event with a specific legal outcome. And for another, it was just something a kid had done wrong that we needed to move past. I’m simplifying and paraphrasing of course, but even in this system, a legal one, there were very explicit legal definitions about what things meant at certain times, along with reporting requirements. Effectively we had created one master document that everyone had to go to in order to make changes. It wouldn’t work in the real world, and it was very difficult to make it work in ours.

Lesson 5: Structural patterns are the least important part of DDD

Early on we spent a *ton* of time getting the design of the DDD building blocks right: entities, aggregates, value objects, repositories, services, and more. But of all the things that would lead to the success or failure of the project, or even just slow us down or speed us up, these patterns were by far the least important.

That’s not to say that they weren’t valuable, they just didn’t have a large contribution to the success of the project. For the vast majority of the domain, it only needed very dumb CRUD objects. For a dozen or so very particular cases, we needed highly behavioral, encapsulated domain objects. Optimizing your entire system for the complexity of 10% really doesn’t make much sense, which is why in subsequent systems we’ve moved towards a more CQRS model, where each command or query has complete control of how to model the work.

With commands and queries, we can use pretty much whatever system we want – from straight up SQL to event sourcing. In this system, because we focused on the patterns and layers, we pigeonholed ourselves into a singular pattern, system-wide.

Next up – lessons learned from the new system that offered us a do-over!


Launching ASP.NET Core 1.0 course

Wed, 06/08/2016 - 01:02

This is a bit of a different post for me. I obviously blog and speak a lot about how I build apps at Headspring, and one question I get quite often is “can you make some courses on Pluralsight about these topics?” Years ago I co-wrote the MVC in Action books, all on my own time. I set out to do the same and create some videos, but life more or less got in the way, and I never was able to publish anything.

So rather than go through a 3rd-party learning platform, which would have to go through an approval process, I’m building out courseware for Headspring. It’s focused on how we build MVC applications, but on the new ASP.NET Core 1.0 platform. And rather than trying to do it on my own time, which means it’ll never happen, it’ll be through Headspring, which means that it will happen :)

The idea behind the course is that I’ll walk through how we build applications with ASP.NET Core 1.0, using our toolbelt of AutoMapper, MediatR, Fixie, HtmlTags and more, providing a complete end-to-end guide on both features on the new platform, and how to use them effectively.

I’ll post some more here at http://hdspr.ng/Project11, just to get things started. Or if you want to go ahead and sign up for the course directly, we’ll have the full series here http://11xengineering.com/courses/11x-asp-net-core-web-app-development/.

Either way, I hope you enjoy!
