
Jimmy Bogard
Strong opinions, weakly held

AutoMapper 5.0 speed increases

Fri, 06/24/2016 - 23:43

Just an update on the work we’ve been doing to speed up AutoMapper. I’ve captured times to map some common scenarios (1M mappings). Time is in seconds:

  Version      Flattening   Ctor     Complex    Deep
  Native       0.0148       0.0060   0.9615     0.2070
  5.0          0.2203       0.1791   2.5272     1.4054
  4.2.1        4.3989       1.5608   134.39     29.023
  3.3.1        4.7785       1.3384   72.812     34.485
  2.2.1        5.1175       1.7855   122.0081   35.863
  1.1.0.118    6.7143       n/a      29.222     38.852

The complex mappings had the biggest variation, but across the board AutoMapper is *much* faster than previous versions – sometimes 20x faster, 50x in other cases. It’s been a ton of work to get here, driven mainly by the change to a single configuration step, which lets us build execution plans that target exactly your configuration. We now build up an expression tree for the mapping plan based on the configuration, instead of evaluating the same rules over and over again.
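
To illustrate the underlying technique (a minimal sketch, not AutoMapper’s actual implementation): the configuration is translated once into an expression tree, compiled into a delegate, and that delegate is reused for every map.

using System;
using System.Linq.Expressions;

public class Source { public string Name { get; set; } }
public class Dest   { public string Name { get; set; } }

public static class MapPlan
{
    // The "plan" is built once from configuration and compiled to a delegate...
    private static readonly Func<Source, Dest> Plan = BuildPlan();

    private static Func<Source, Dest> BuildPlan()
    {
        var src = Expression.Parameter(typeof(Source), "src");

        // ...equivalent to: src => new Dest { Name = src.Name }
        var body = Expression.MemberInit(
            Expression.New(typeof(Dest)),
            Expression.Bind(
                typeof(Dest).GetProperty(nameof(Dest.Name)),
                Expression.Property(src, nameof(Source.Name))));

        return Expression.Lambda<Func<Source, Dest>>(body, src).Compile();
    }

    // Mapping at runtime is then just a direct delegate call, with no per-map rule evaluation.
    public static Dest Map(Source source) => Plan(source);
}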

We *could* get marginally faster than this, but that would require sacrificing diagnostic information or not handling nulls, etc. Still, not too shabby, and in the same ballpark as the other mappers out there (faster than some, marginally slower than others). With this release, I think we can officially stop labeling AutoMapper as “slow” ;)

Look for the 5.0 release to drop with the release of .NET Core next week!


10 Lessons from a Long Running DDD Project – Part 2

Mon, 06/20/2016 - 21:04

In Part 1 of this 2-part series, I walked through some lessons learned from the first incarnation of our project. I’d still qualify the original project as a success, in that it was delivered on time, within budget, and is still under active development today. But we learned a lot of lessons from that project, and we were lucky enough to get another crack at it, so to speak, when we started a new project in almost exactly the same domain – though this time the constraints were quite a bit different.

In the first project, we targeted everyone that could possibly be involved with the overall process. That wound up being a dozen state agencies and countless other groups and sub-groups, which meant quite a lot of contention in the model (and is also a great reason why you can never have a single master data model for an entire enterprise). We felt good about the software itself – it was modular and easy to extend – but the domain model just couldn’t satisfy all of the users involved, only really a subset.

The second project targeted only a single aspect of the original overall legal process – the prosecution agency. Targeting just a single group, actually a single agency, brought tremendous benefits for us.

Lesson 6: Cohesiveness brings greater clarity and deeper insight

Our initial conversations in the second project were somewhat colored by our first project. We started with an assumption that the core focus, the core domain would be at least the same as the monolith, but maybe a different view of it. We were wrong.

In the new version of the app, the entire focus of the system revolves around “cases”. I know, crazy that an app built for the day-to-day functions of a prosecution agency focuses centrally on a case:

[image: domain model diagram centered on a Case]

Once we settled on the core domain, the possibilities then greatly opened up for modeling around that concept. Because the first app only tangentially dealt with cases (there wasn’t even a “Case” in the original model), it was more or less an impedance mismatch for its users in the prosecution agency. It was a bit humbling to hear the feedback from the prosecutors about the first project.

But in the second project, because our core domain was focused, we could spend much more time modeling workflows and behaviors that fit what the prosecution agency actually needed.

Lesson 7: Be flexible where you need to, rigid in others

Although we were able to come to a consensus amongst prosecution agencies about what a case was, what the key things you could DO with a case were and the like, we couldn’t get any consensus about how a case should be managed.

This makes a lot of sense – the state has legal reporting requirements and the courts have a ton of procedural rules, but internal to an agency, they’re free to manage the work any way they want to.

In the first system, roles were baked into the system, causing a lot of confusion for counties where one person wore many different hats. In the new system, permissions were hard-coded against tasks, but not roles:

[image: code tying application tasks to a Permission enum]

The Permission here is an enum, and we tied permissions to tasks like “Approve Case” and “Add Evidence” and “Submit Disposition” etc. Those were directly tied to actions in our application, and you couldn’t add new permissions without modifying the code.

Roles (or groups, whatever) were not hardcoded; it was left completely up to each agency how to organize their work and decide who could do what.
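
A rough sketch of that shape – the type and member names here are hypothetical, not the project’s actual code:

using System.Collections.Generic;
using System.Linq;

// Permissions are fixed in code and tied to concrete actions in the application.
public enum Permission
{
    ApproveCase,
    AddEvidence,
    SubmitDisposition
}

// Roles are just data – each agency defines its own and assigns whichever permissions it wants.
public class Role
{
    public string Name { get; set; }    // e.g. "Intake Clerk" – entirely up to the agency
    public HashSet<Permission> Permissions { get; } = new HashSet<Permission>();
}

public class User
{
    public List<Role> Roles { get; } = new List<Role>();

    public bool Can(Permission permission) =>
        Roles.Any(role => role.Permissions.Contains(permission));
}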

With DDD it’s important to model both the rigid and the flexible; they’re equally important in the overall model you build.

Lesson 8: Sometimes you need to invent a model

While we were able to model quite well the actions one can perform with an individual case, it was immediately apparent when visiting different county agencies that their workflows varied significantly inside their departments.

This meant we couldn’t do things like implement a workflow internal to a case itself – everyone’s workflow was different. The only thing we could really embed were procedural/legal rules in our behaviors, but everything else was up for grabs. But we still wanted to manage workflows for everyone.

In this case, we needed to build consensus for a model that didn’t really exist in each county in isolation. If we focused on a single county, we could have baked the rules about how a case is managed into their individual system. But since we were building a system across counties, we needed to build a model that satisfied all agencies:

[image: configurable workflow model of states, transitions, and security roles]

In this model, we explicitly built a configurable workflow, with states and transitions and security roles around who could perform those transitions. While no individual county had this model, it was the meta-model we found while looking across all counties.
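
Something along these lines – a simplified sketch of such a meta-model, with made-up names:

using System.Collections.Generic;

// Each county configures its own workflow as data: named states, allowed
// transitions, and the roles permitted to perform each transition.
public class WorkflowDefinition
{
    public string Name { get; set; }
    public List<State> States { get; } = new List<State>();
}

public class State
{
    public string Name { get; set; }                 // e.g. "Screening", "Filed"
    public List<Transition> Transitions { get; } = new List<Transition>();
}

public class Transition
{
    public string Name { get; set; }                 // e.g. "Send to Attorney"
    public State To { get; set; }
    public List<string> AllowedRoles { get; } = new List<string>();
}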

Lesson 9: Don’t blindly follow pattern advice

In the new app, I performed an experiment. I would only add tools, patterns, and libraries when the need presented itself but no sooner. This meant I didn’t add a repository, unit of work, services, really anything until an actual pain surfaced. Most of the DDD books these days have prescriptive guidance about what your domain model should look like, how you should do repositories and so on, but I wanted to see if I could simply arrive at these patterns by code smells and refactoring.

The funny thing is, I never did. We left out those patterns, and we never found a need to put them back in. Instead, we drove our usage around CQRS and the mediator pattern (something I’d used for years and finally extracted from our internal usage into MediatR). Our controllers were pretty uniform in their appearance:

[image: a typical controller action delegating to MediatR]

And the handlers themselves (as I’ve blogged about many times) were tightly focused on a single action, with no need to abstract anything:

[image: a handler focused on a single action]
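
Roughly what those looked like, sketched in the MediatR style (ASP.NET Core flavored here; exact handler signatures vary across MediatR versions, and AppDbContext plus the Case type stand in for the real persistence and domain types):

using System.Threading;
using System.Threading.Tasks;
using MediatR;
using Microsoft.AspNetCore.Mvc;

public class CasesController : Controller
{
    private readonly IMediator _mediator;
    public CasesController(IMediator mediator) => _mediator = mediator;

    // Controllers stay thin and uniform: build a request, send it, return a result.
    [HttpPost]
    public async Task<IActionResult> Approve(ApproveCase.Command command)
    {
        await _mediator.Send(command);
        return RedirectToAction("Details", new { id = command.CaseId });
    }
}

public static class ApproveCase
{
    public class Command : IRequest
    {
        public int CaseId { get; set; }
    }

    // Each handler is focused on a single action – no repositories or services in between.
    public class Handler : IRequestHandler<Command, Unit>
    {
        private readonly AppDbContext _db;   // stand-in for the app's actual persistence type
        public Handler(AppDbContext db) => _db = db;

        public async Task<Unit> Handle(Command request, CancellationToken cancellationToken)
        {
            var legalCase = await _db.Cases.FindAsync(request.CaseId);
            legalCase.Approve();
            await _db.SaveChangesAsync(cancellationToken);
            return Unit.Value;
        }
    }
}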

I’ve extended this to other areas of development too, like front-end development. It’s actually kinda crazy how far you can get without jQuery these days, if you just use lodash and the DOM.

Lesson 10: Microservices and anti-corruption layers are your friend

There is a downside to going to bounded contexts and away from the “majestic monolith”, and that’s integration. Now that we have an application solely dealing with one agency, we have to communicate between different applications.

This turned out to be a bit easier than we thought, however. This domain existed well before computers, so the interfaces between the prosecution and external parties/agencies/systems were very well established.

This was also the section of the DDD book we had skipped the most – the part on anti-corruption layers and bounded contexts. We had to crack open that section of the book, dust it off, smell the smell of pages never before read, and figure out how we should tackle integration.

It turns out we have quite a bit of experience in this area, so it was really just a matter of deciding, for each 3rd party, what kind of integration would work best.

[image: integration approaches chosen per third party]

For some 3rd parties, we could create an entirely separate app with no integration. Some needed a special app that performed the translation and anti-corruption layer, and some needed an entirely separately deployed app that communicated to our system via hypermedia-rich REST APIs.

Regardless, we never felt we had to build a single solution for all involved. We instead picked the right integration for the job, with an eye toward not reinventing things as we went.
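
For the anti-corruption-layer flavor of integration, the shape is roughly this – a sketch with hypothetical external and internal types:

// The external agency's document, in their vocabulary and shape.
public class ExternalFilingDto
{
    public string DocketNo { get; set; }
    public string DefendantFullName { get; set; }
}

// Our internal model, in our ubiquitous language.
public class CourtFiling
{
    public string CaseNumber { get; set; }
    public string DefendantName { get; set; }
}

// The anti-corruption layer: the one place that knows both vocabularies.
public class CourtFilingTranslator
{
    public CourtFiling Translate(ExternalFilingDto external) =>
        new CourtFiling
        {
            CaseNumber = external.DocketNo,
            DefendantName = external.DefendantFullName
        };
}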

Conclusion

In both cases, I’d say our systems were successful, since they shipped and are both being used and extended to this day. With the more tightly focused domain in the second system, we were able to achieve that “greater insight” the DDD book talks about.

In case anyone wonders, I intentionally did not talk about actors or event sourcing in this series – both things we’ve done and shipped, but found the applicability to be limited to inside a bounded context (or even more typically, a corner of a bounded context). Another post for another day!


10 Lessons from a Long Running DDD Project – Part 1

Mon, 06/13/2016 - 18:14

Round about 7 years ago, I was part of a very large project which rooted its design and architecture around domain-driven design concepts. I’ve blogged a lot about that experience (and others), but one interesting aspect of the experience is we were afforded more or less a do-over, with a new system in a very similar domain. I presented this topic at NDC Oslo (recorded, I’ll post when available).

I had a lot of lessons learned from the code perspective – things like AutoMapper, MediatR, Respawn and more came out of it. Feature folders, CQRS, and conventional HTML with HtmlTags were used as well. But beyond just the code pieces were the broader architectural patterns that we more or less ignored in the first DDD system. We had a number of lessons learned, and quite a few were from decisions made very early in the project.

Lesson 1: Bounded contexts are a thing

Very early on in the first project, we laid out the personas for our application. This was also when Agile and Scrum were really starting to be used in the large, so we were all about using user stories, personas and the like.

We put all the personas on giant post-it notes on the wall. There was a problem. They didn’t fit. There were so many personas, we couldn’t look at all of them at once.

So we color-coded them and divided them up based on lines of communication, reporting, agency – whatever made sense:

[image: wall of color-coded persona post-it notes]

Well, it turned out that those colors (just faked above) were perfect borders for bounded contexts. Also, it turns out that 72 personas for a single application is way, way too many.

Lesson 2: Ubiquitous language should be…ubiquitous

One of the side effects of cramming too many personas into one application is that we got to the point where some of the core domain objects had very generic names in order to have a name that everyone agreed upon.

We had a “Person” object, and everyone agreed what “person” meant. Unfortunately, this was a name that only the product owners agreed upon; no one else who would ever use the system would understand what that term meant. It was the lowest common denominator between all the different contexts, and in order to mean something to everyone, it could not contain behavior that applied to anyone.

When you have very generic names for core models that aren’t actually used by any domain expert, you have something worse than an anemic domain model – a generic domain model.

Lesson 3: Core domain needs consensus

We talked to various domain experts in many groups, and each had a very different perspective on what the core domain of the system was. Not what it should be, but what it was. For one group, it was the part that replaced a paper form; for another, the kids the system was intended to help; for another, bringing those kids to trial; and for another, the outcome of those cases. Each had wildly different motivations and workflows, and even different metrics on which they were measured.

Beyond that, we had directly opposed motivations. While one group was focused on keeping kids out of jail, another was managing cases to put them in jail! With such different views, it was quite difficult to build a system that met the needs of both. Even to the point where the conduits to use were completely out of touch with the basic workflow of each group. Unsurprisingly, one group had to win, so the focus of the application was seen mostly through the lens of a single group.

Lesson 4: Ubiquitous language needs consensus

A slight variation on lesson 2, we had a core entity on our model where at least the name meant something to everyone in the working group. However, that something again varied wildly from group to group.

For one group, the term was in reference to a paper form filed. For another, something that was part of a case. For another, an event with a specific legal outcome. And for another, it was just something a kid had done wrong that we needed to move past. I’m simplifying and paraphrasing of course, but even in this system, a legal one, there were very explicit legal definitions about what things meant at certain times, along with reporting requirements. Effectively we had created one master document that everyone went to in order to make changes. It wouldn’t work in the real world, and it was very difficult to work with in ours.

Lesson 5: Structural patterns are the least important part of DDD

Early on we spent a *ton* of time getting the design of the DDD building blocks right: entities, aggregates, value objects, repositories, services, and more. But of all the things that would lead to the success or failure of the project, or even just slow us down or speed us up, these patterns were by far the least important.

That’s not to say that they weren’t valuable – they just didn’t contribute much to the success of the project. For the vast majority of the domain, we only needed very dumb CRUD objects. For a dozen or so very particular cases, we needed highly behavioral, encapsulated domain objects. Optimizing your entire system for the complexity of 10% of it really doesn’t make much sense, which is why in subsequent systems we’ve moved towards a more CQRS model, where each command or query has complete control of how to model the work.

With commands and queries, we can use pretty much whatever system we want – from straight up SQL to event sourcing. In this system, because we focused on the patterns and layers, we pigeonholed ourselves into a singular pattern, system-wide.
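
As a sketch of that flexibility (not code from this project, and the Invoice types are made up): one query handler can drop straight to SQL with something like Dapper, while a command handler next to it works against a fully encapsulated domain object.

using System.Collections.Generic;
using System.Data;
using Dapper;

public class InvoiceSummary
{
    public int Id { get; set; }
    public string InvoiceNumber { get; set; }
    public decimal OrderTotal { get; set; }
}

// A read-side handler: no domain model at all, just a projection straight from SQL.
public class InvoiceSummaryQueryHandler
{
    private readonly IDbConnection _connection;
    public InvoiceSummaryQueryHandler(IDbConnection connection) => _connection = connection;

    public IEnumerable<InvoiceSummary> Handle() =>
        _connection.Query<InvoiceSummary>(
            "select Id, InvoiceNumber, OrderTotal from Invoices");
}

// A write-side handler for one of the genuinely complex cases: a behavioral,
// encapsulated domain object (Invoice here is a stand-in for such a type).
public class ApproveInvoiceHandler
{
    public void Handle(Invoice invoice, string approver) =>
        invoice.Approve(approver);   // the invariants live inside Invoice itself
}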

Next up – lessons learned from the new system that offered us a do-over!


Launching ASP.NET Core 1.0 course

Wed, 06/08/2016 - 01:02

This is a bit of a different post for me. I obviously blog and speak a lot about how I build apps at Headspring, and one question I get quite often is “can you make some courses on Pluralsight about these topics?” Years ago I co-wrote the MVC in Action books, all on my own time. I set out to do the same and create some videos, but life more or less got in the way, and I never was able to publish anything.

So rather than go through a 3rd-party learning platform, which would have to go through an approval process, I’m building out courseware for Headspring. It’s focused on how we build MVC applications, but on the new ASP.NET Core 1.0 platform. And rather than trying to do it on my own time, which means it’ll never happen, it’ll be through Headspring, which means that it will happen :)

The idea behind the course is that I’ll walk through how we build applications with ASP.NET Core 1.0, using our toolbelt of AutoMapper, MediatR, Fixie, HtmlTags and more, providing a complete end-to-end guide to both the features of the new platform and how to use them effectively.

I’ll post some more here at http://hdspr.ng/Project11, just to get things started. Or if you want to go ahead and sign up for the course directly, we’ll have the full series here http://11xengineering.com/courses/11x-asp-net-core-web-app-development/.

Either way, I hope you enjoy!


What Microservices Is Not

Fri, 06/03/2016 - 16:52

From “What is a service (2016 edition)”, a list of what the term “Service” does not imply:

  • “Cloud”
  • “Server”
  • “ESB”
  • “API”
  • XML
  • JSON
  • REST
  • HTTP
  • SOAP
  • WSDL
  • Swagger
  • Docker
  • Mesos
  • Svc Fabric
  • Zookeeper
  • Kubernetes
  • SQL
  • NoSQL
  • MQTT
  • AMQP
  • Scale
  • Reliability
  • “Stateless”
  • “Stateful”
  • OAuth2
  • OpenID
  • X509
  • Java
  • Node
  • C#
  • OOP
  • DDD
  • etc. pp.

We can apply a similar list to Microservices, where the term does not imply any technology. That’s difficult these days because so much marketecture conflates “Microservices” with some specific tool or product: “Simplify microservice-based application development and lifecycle management with Azure Service Fabric.” Well, you certainly don’t need a PaaS to do microservices. And “small” just means not too big to manage, and no more – not pizza metrics or lines of code.

So microservices does not imply:

  • Docker/containers
  • Azure/AWS
  • Serverless
  • Feature flags
  • Gitflow
  • NoSQL
  • Node.js
  • No more than 20 lines of code in deployed service
  • Service Fabric
  • AWS Lambda

Instead, focus more on the characteristics of a microservice:

  • Focused around a business domain
  • Technology agnostic API
  • Small
  • Autonomous
  • Autonomous
  • Autonomous

Most of the other descriptions or prescriptions around microservices are really just a side-effect of autonomy, but those technologies prescribed certainly aren’t a requirement to build a robust, scalable service.

My suggestion – go back to the DDD book, read the Building Microservices book. Just like DDD wasn’t about entities and repositories, microservices isn’t about Docker. And once you do get the concepts, then come back to the practitioners to see how they’re building applications with microservices, and see if those tools might be a great fit. Just don’t cargo-cult microservices like so many did before with DDD and SOA.


CQRS and REST: the perfect match

Wed, 06/01/2016 - 22:02

In many of my applications, the UI and API gravitate towards task-oriented UIs. Instead of “editing an invoice”, I “approve an invoice”, with specialized models, behaviors and screens just for accomplishing that task. But what happens when we move from a server-side application to one more distributed, to be accessed via an API?

In a previous post, I talked about the difference between entities, resources, and representations. It turns out that by removing the constraint around entities and resources, it opens the door to REST APIs that more closely match how we’d build the UI if it were a completely server-side application.

With a server side application, taking the example of invoices, I’d likely have a page to view invoices:

GET /invoices

This page would return the table of invoices, with links to view invoice details (or perhaps buttons to approve them). To see more, I’d click a link to view the invoice details page:

GET /invoices/684

Because I prefer task-based UIs, this page would include links to specific activities you could request to perform. You might have an Approve link, a Deny link, comments, modifications etc. All of these are different actions one could take with an invoice. To approve an invoice, I’d click the link to see a page or modal:

GET /invoices/684/approve

The URLs aren’t important here – I could be on some crazy CMS that makes my URLs “GET /fizzbuzzcms/action.aspx?actionName=approve&entityId=684” – the important thing is that it’s a distinct URL, and therefore a distinct resource with a specific representation.

To actually approve the invoice, I fill in some information (perhaps some comments or something) and click “Approve” to submit the form:

POST /invoices/684/approve

The server will examine my form post, validate it, authorize the action, and if successful, will return a 3xx response:

HTTP/1.1 303 See Other
Location: /invoices/684

The POST, instead of creating a new resource, returned back with a response of “yeah I got it, see this other resource over here”. This is called the “Post-Redirect-Get” pattern. And it’s REST.
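
Server-side, that flow is just a couple of ordinary controller actions. Here’s a rough ASP.NET MVC-style sketch (the redirect helpers return 302 by default rather than 303, but the shape is the same):

using Microsoft.AspNetCore.Mvc;

public class ApproveInvoiceForm
{
    public int InvoiceId { get; set; }
    public string Comments { get; set; }
}

public class InvoiceApprovalController : Controller
{
    // GET /invoices/684/approve – the form (or modal) for the task
    [HttpGet]
    public IActionResult Approve(int id) =>
        View(new ApproveInvoiceForm { InvoiceId = id });

    // POST /invoices/684/approve – validate, perform the task, then redirect back to the invoice
    [HttpPost]
    public IActionResult Approve(ApproveInvoiceForm form)
    {
        if (!ModelState.IsValid)
            return View(form);

        // ...authorize and apply the approval here...

        return RedirectToAction("Details", "Invoices", new { id = form.InvoiceId });
    }
}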

CQRS and REST

Not surprisingly, we can model our REST API exactly as we did our HTML-based web app. Though technically, our web app was already RESTful, it just served HTML as its representation.

Back to our API, let’s design a CQRS-centric set of resources. First, the collection resource:

GET /invoices

HTTP/1.1 200 OK
[
  {
    "id": 684,
    "invoiceNumber": "38042-L-275-684",
    "customerName": "Jon Smith",
    "orderTotal": 58.85,
    "href": "/invoices/684"
  },
  {
    "id": 688,
    "invoiceNumber": "33453-L-275-688",
    "customerName": "Maggie Smith",
    "orderTotal": 863.88,
    "href": "/invoices/688"
  }
]

I’m intentionally not using any established media type, just to illustrate the basics. No HAL or Siren or JSON-API etc.

Just like the HTML page, my collection resource could join in 20 tables to build out this representation, since we’ve already established there’s no connection between entities/tables and resources.

In my client, I can then follow the link to see more details about the invoice (or, alternatively, included links directly to actions). Following the details link:

GET /invoices/684

HTTP/1.1 200 OK
{
  "id": 684,
  "invoiceNumber": "38042-L-275-684",
  "customerName": "Jon Smith",
  "orderTotal": 58.85,
  "shippingAddress": "123 Anywhere"
  "lineItems": [ ]
  "href": "/invoices/684",
  "links": [
    { "rel": "approve", "prompt": "Approve", "href": "invoices/684/approve" },
    { "rel": "reject", "prompt": "Reject", "href": "invoices/684/reject" }
  ]
}

I now include links to additional resources, which in the CQRS world, those additional resources are commands. And just like our HTML version of things, these resources can return hypermedia controls, or, in the case of a modal dialog, I could have embedded the hypermedia controls inside the original response. Let’s go with the non-modal example:

GET /invoices/684/approve

HTTP/1.1 200 OK
{
  "invoiceNumber": "38042-L-275-684",
  "customerName": "Jon Smith",
  "orderTotal": 58.85,
  "href": "/invoices/684/approve",
  "fields": [
    { "type": "textarea", "optional": true, "name": "comments" }
  ],
  "prompt": "Approve"
}

In my command resource, I include enough information to instruct clients how to build a response (given they have SOME knowledge of our protocol). I even include some display information, as I would have in my HTML version. I have an array of fields, only one in my case, with enough information to instruct something to render it if necessary. I could then POST information up, perhaps with my JSON structure or form encoded if I liked, then get a response:

POST /invoices/684/approve
comments=I love lamp

HTTP/1.1 303 See Other
Location: /invoices/684

Or, I could have my command return an immediate response and have its own data, because maybe approving an invoice kicks off its own workflow:

POST /invoices/684/approve
comments=I love lamp

HTTP/1.1 201 Created
Location: /invoices/684/approve/3506
{
  "id": 3506,
  "href": "/invoices/684/approve/3506",
  "status": "pending"
}

In that example I could follow the location or the body to the approve resource. Or maybe this is an asynchronous command, and approval acceptance doesn’t happen immediately and I want to model that explicitly:

POST /invoices/684/approve
comments=I love lamp

HTTP/1.1 202 Accepted
Location: /invoices/684/approve/3506
Retry-After: 120

I’ve received your approval request, and I’ve accepted it, but it’s not created yet so try this URL after 2 minutes. Or maybe approval is its own dedicated resource under an invoice, therefore I can only have one approval at a time, and my operation is idempotent. Then I can use PUT:

PUT /invoices/684/approve
comments=I love lamp

HTTP/1.1 201 Created
Location: /invoices/684/approve

If I do this, my resource is stored in that URL so I can then do a GET on that URL to see the status of the approval, and an invoice only gets one approval. Remember, PUT is idempotent and I’m operating under the resource identified by the URL. So PUT is only reserved for when the client can apply the request to that resource, not to some other one.

In a nutshell, because I can create a CQRS application with plain HTML, it’s trivial to create a CQRS-based REST API. All I need to do is follow the same design guidelines on responses, pay attention to the HTTP protocol semantics, and I’ve created an API that’s both RESTful and CQRSful.


My OSS CI/CD Pipeline

Tue, 05/24/2016 - 23:39

As far back as I’ve been doing open source, I’ve borrowed other project’s build scripts. Because build scripts are almost always committed with source control, you get to see not only other projects’ code, but how they build, test and package their code as well.

And with any long-lived project, I’ve changed the build process for my projects more times than I care to count. AutoMapper, as far as I can remember, started off on NAnt (yes, it’s that old).

These days, I try to make the pipeline as simple as possible, and AppVeyor has been a big help with that goal.

The CI Pipeline

For my OSS projects, all work, including my own, goes through a branch and pull request. Some source control hosts allow you to enforce this behavior, including GitHub. I tend to leave this off on OSS, since it’s usually only me that has commit rights to the main project.

All of my OSS projects are now on the soon-to-be-defunct project.json, either entirely or as a mix, with the main project on project.json and the others on regular .csproj. Taking MediatR as the example, it’s entirely project.json, while AutoMapper has a mix for testing purposes.

Regardless, I still rely on a build script to execute a build that happens both on the local dev machine and the server. For MediatR, I opted for just a plain PowerShell script that I borrowed from projects online.  The build script really represents my build pipeline in its entirety, and it’s important for me that this build script actually live as part of my source code and not tied up in a build server. Its steps are:

  • Clean
  • Initialize
  • Build
  • Test
  • Package

Not very exciting, and similar to many other pipelines I’ve seen (in fact, I borrow a lot of ideas from Maven, which has a predefined pipeline).

The script for me then looks pretty straightforward:

# Clean
if(Test-Path .\artifacts) { Remove-Item .\artifacts -Force -Recurse }

# Initialize
EnsurePsbuildInstalled

exec { & dotnet restore }

# Build
Invoke-MSBuild

# Test
exec { & dotnet test .\test\MediatR.Tests -c Release }

# Package
$revision = @{ $true = $env:APPVEYOR_BUILD_NUMBER; $false = 1 }[$env:APPVEYOR_BUILD_NUMBER -ne $NULL];
$revision = "{0:D4}" -f [convert]::ToInt32($revision, 10)

exec { & dotnet pack .\src\MediatR -c Release -o .\artifacts --version-suffix=$revision }

Cleaning is just removing an artifacts folder, where I put completed packages. Initialization is installing required PowerShell modules and running a “dotnet restore” on the root solution.

Building is just MSBuild on the solution, executed through a PowerShell module, and MSBuild defers to the dotnet CLI as needed. Testing is the “dotnet test” command against xUnit. Finally, packaging is “dotnet pack”, passing in a special version number I get from AppVeyor.

As part of my builds, I include the incremental build number in my packages. Because pre-release suffixes like “beta-0057” are compared as strings, I need to make sure the build number sorts correctly, so I pad it with leading zeroes – “57” becomes “0057”.

In my project.json file, I’ve set the version up so that the version from the build gets substituted at package time. But my project.json file determines the major/minor/revision:

{
  "version": "4.0.0-beta-*",
  "authors": [ "Jeremy D. Miller", "Joshua Flanagan", "Josh Arnold" ],
  "packOptions": {
    "owners": [ "Jeremy D. Miller", "Jimmy Bogard" ],
    "licenseUrl": "https://github.com/HtmlTags/htmltags/raw/master/license.txt",
    "projectUrl": "https://github.com/HtmlTags/htmltags",
    "iconUrl": "https://raw.githubusercontent.com/HtmlTags/htmltags/master/logo/FubuHtml_256.png",
    "tags": [ "html", "ASP.NET MVC" ]
  }
}

This build script is used not just locally, but on the server as well. That way I can ensure I’m running the exact same build process reproducibly in both places.

The CD Pipeline

The next part is interesting, as I’m using AppVeyor to be my CI/CD pipeline, depending on how it detects changes. My goals for the CI/CD pipeline are:

  • Each pull request gets built, and I can see its status inside GitHub
  • Pull requests do not push a package (but can create a package)
  • Merges to master push packages to MyGet
  • Tags to master push packages to NuGet

I used to push *all* packages to NuGet, but what wound up happening was that I had to move slower with changes, because things would just “show up” for people before I had a chance to fully think through what I was exposing to the public.

I still have pre-release packages, but these are a bit more thought out than they have been in the past.

Finally, because I’m using AppVeyor, my entire build configuration lives in an “appveyor.yml” file that lives with my source control. Here’s MediatR’s:

version: '{build}'
pull_requests:
  do_not_increment_build_number: true
branches:
  only:
  - master
nuget:
  disable_publish_on_pr: true
build_script:
- ps: .\Build.ps1
test: off
artifacts:
- path: .\artifacts\**\*.nupkg
  name: NuGet
deploy:
- provider: NuGet
  server: https://www.myget.org/F/mediatr-ci/api/v2/package
  api_key:
    secure: zKeQZmIv9PaozHQRJmrlRHN+jMCI64Uvzmb/vwePdXFR5CUNEHalnZdOCg0vrh8t
  skip_symbols: true
  on:
    branch: master
- provider: NuGet
  name: production
  api_key:
    secure: t3blEIQiDIYjjWhOSTTtrcAnzJkmSi+0zYPxC1v4RDzm6oI/gIpD6ZtrOGsYu2jE
  on:
    branch: master
    appveyor_repo_tag: true

First, I set the build version to be just the build number. Because my project.json file drives the package/assembly version, I don’t need anything more complicated here. I also don’t want any other branches built, just master and pull requests. This makes sure that I can still create branches/PRs inside the same repository without being forced to use a second one.

The build/test/artifacts should be self-explanatory, I want everything flowing through the build script so I don’t want AppVeyor discovering and trying to figure things out itself. Explicit is better.

Finally, the deployments. I want every package to go to MyGet, but only tagged commits to go to NuGet. The first deploy configuration is the MyGet configuration, that deploys only on master to my MyGet configuration (with user-specific encrypted API key). The second is the NuGet configuration, to only deploy if AppVeyor sees a tag.

For public releases, I:

  • Update the project.json as necessary, potentially removing the asterisk for the version
  • Commit, tag the commit, and push both the commit and the tag.

With this setup, the MyGet feed contains *all* packages, not just the CI ones. The NuGet feed is then just a “curated” feed of the more official packages.

The last part of a release, publicizing it, is a little bit more work. I still like GitHub releases but haven’t yet found a great way of automating the “tag the commit and create a release” process. Instead, I use the GitHubReleaseNotes tool to create the markdown of a release based on the tags I apply to my issues for a release. Finally, I’ll make sure that I update any documentation in the wiki for a release.

I like where I’ve ended up so far, and there’s always room for improvement, but it’s a far cry from when I used to have to manually package and push to NuGet.


AutoMapper 5.0 Beta released

Wed, 05/18/2016 - 20:21

This week marks a huge milestone in AutoMapper-land, the beta release of the 5.0 work we’ve been doing over the last many, many months.

In the previous release, 4.2.1, I obsoleted much of the dynamic configuration API in favor of an explicit configuration step. That means you only get to use “Mapper.Initialize” or “new MapperConfiguration”. You can still use a static Mapper.Map call, or create a new Mapper object with “new Mapper(configuration)”. The last 4.x release really paved the way for a static API and an instance API that are in lockstep with each other.
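
Both shapes look roughly like this (Order and OrderDto are placeholder types):

using AutoMapper;

public class Order    { public string Number { get; set; } }
public class OrderDto { public string Number { get; set; } }

public static class MappingDemo
{
    public static void Run()
    {
        // Static API: configure once at startup, then map through the static Mapper
        Mapper.Initialize(cfg => cfg.CreateMap<Order, OrderDto>());
        var dto = Mapper.Map<OrderDto>(new Order { Number = "684" });

        // Instance API: the same configuration, held in instances you control
        var configuration = new MapperConfiguration(cfg => cfg.CreateMap<Order, OrderDto>());
        var mapper = new Mapper(configuration);
        var dto2 = mapper.Map<OrderDto>(new Order { Number = "688" });
    }
}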

In previous versions of AutoMapper, you could call “Mapper.CreateMap” anywhere in your code. This made two things difficult: performance optimization and dependent configuration. You could get all sorts of weird bugs if you called the configuration in the “wrong” order.

But that’s gone. In AutoMapper 5.0, the configuration is broken into two steps:

  1. Gather configuration
  2. Apply configuration in the correct order

By applying the configuration in the correct order, we can ensure that there’s no order dependency in your configuration – we handle all of that for you. It seems silly in hindsight, but at this point the API inside of AutoMapper is strictly segregated between “DSL API”, “Configuration” and “Execution”. By separating all of these into individual steps, we were able to do something rather interesting.

With AutoMapper 5.0, we are able to build execution plans from your type map configuration that map exactly according to what you’ve configured. In previous versions, we had to re-assess decisions every single time we mapped – things like “do you have a condition configured?” and so on – resulting in huge performance hits.

A strict separation meant we could overhaul the entire execution engine, so that each map is a precisely built expression tree only containing the mapping logic you’ve configured. The end result is a 10X performance boost in speed, but without sacrificing all of the runtime exception logic that makes AutoMapper so useful.

One problem with raw expression trees is that when an exception is thrown, you’re left with very little context in the stack trace. When we built up our execution plan in an expression tree, we made sure to keep the good parts of capturing context when there’s a problem, so that you know exactly which property, at exactly which point in the mapping, had a problem.

Along with the performance work, we tightened up quite a bit of the API, making configuration consistent throughout. Additionally, a couple of added benefits came from moving to expressions:

  • ITypeConverter and IValueResolver are both generic, making it very straightforward to build custom resolvers (see the sketch below)
  • Supporting open generic type converters with one or two parameters
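
For example, a custom value resolver is now strongly typed end to end. A sketch, with made-up Order types and member names approximating the 5.0 API:

using System.Linq;
using AutoMapper;

public class OrderLine { public int Quantity { get; set; } public decimal UnitPrice { get; set; } }
public class Order     { public OrderLine[] LineItems { get; set; } }
public class OrderDto  { public decimal Total { get; set; } }

public class TotalResolver : IValueResolver<Order, OrderDto, decimal>
{
    // Source, destination, and destination member are all strongly typed.
    public decimal Resolve(Order source, OrderDto destination, decimal destMember, ResolutionContext context) =>
        source.LineItems.Sum(line => line.Quantity * line.UnitPrice);
}

// Wired up inside the configuration step, roughly:
// cfg.CreateMap<Order, OrderDto>()
//    .ForMember(d => d.Total, opt => opt.ResolveUsing<TotalResolver>());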

Overall, it’s been a release full of things I’ve wanted to tackle for years but never quite got the design right for. Special thanks to TylerCarlson1 and lbargaoanu, both of whom have passed the 100-commit mark on AutoMapper.


Entities aren’t resources, resources aren’t representations

Thu, 05/12/2016 - 18:29

One of the easy mistakes in building a REST API is trying to take your rows out of the database and expose them directly as JSON. Such technology exists – you can directly expose stored procedures as SOAP web services, or use protocols like OData. You can also just expose your entities directly as JSON through a web service, but you’re missing out on some big distinctions between resources and representations.

So first, what is a resource? That’s actually quite easy – a resource is anything with a URL. The converse is not true – a URL is not a resource – mainly because the URL is the Uniform Resource Locator. If you can locate a resource, then it exists. And Fielding was pleased.

The representation is a bit different – it describes the current state of the resource (when requested).

So what does this have to do with entities? Well, nothing. There is no relationship between entities and resources. And that’s a good thing, it’s exactly how the web works.

When you navigate to a web page, an online retailer, and look at a list of products, what is the resource? From the client’s perspective, we don’t know. We have no idea of the implementation details of the resource, other than a way to locate it and request a representation. Again, Fielding was pleased, as this decoupled us from the implementation details of the resource.

So where does this leave us when building APIs?

A decoupled API

In APIs that serve the client according to their needs, the resource often includes details from many different data sources, combined to build a representation for the end user. A REST API pulls all of this together:

[image: a REST API composing resource state from multiple data sources]

In hypermedia APIs I build, the resource state combines multiple entities from one or more data sources into the state of the resource. This, plus links, forms, and queries, makes a “resource”, though it’s still a bit more abstract than that. Finally I build out the representation, perhaps JSON API, HAL, Siren, etc.
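
In code, that means the representation is its own model, assembled from several places rather than serialized straight from an entity – a sketch with made-up types:

using System.Collections.Generic;

// The representation is composed from several entities and queries,
// plus the links the client can follow – not a database row.
public class InvoiceRepresentation
{
    public int Id { get; set; }
    public string InvoiceNumber { get; set; }
    public string CustomerName { get; set; }     // pulled from the Customer entity
    public decimal OrderTotal { get; set; }      // computed from line items
    public List<Link> Links { get; } = new List<Link>();
}

public class Link
{
    public string Rel { get; set; }
    public string Prompt { get; set; }
    public string Href { get; set; }
}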

If I made my entities equivalent to resources, I’m likely forcing clients to make many round trips to all the entities they need, but even more, I’m directly exposing the implementation details of my service to the world. Perhaps I’m removing some fields here and there, but in reality this is only one step above handing clients a connection string to my database. Sure, it’s easy and convenient to expose entities directly as resources and representations, but there’s some serious coupling that comes with that choice that you must accept.

In my experience, if I approach the API design strictly from the client perspective, focusing on what they’re trying to achieve and why, it’s actually quite rare that I arrive at an API design that is simply database CRUD exposed as HTTP. And Fielding was pleased, as this design most closely matches the everyday use of the web. Makes sense. That’s what REST is about – a set of architectural constraints describing, basically, the web.
