

AutoMapper 3.2.0 released

Jimmy Bogard - 17 hours 3 min ago

Full release notes on the GitHub site

Big features/improvements:

  • LINQ queryable extensions greatly improved
    • ICollection supported
    • MaxDepth supported
    • Custom MapFrom expressions supported (including aggregations)
    • Inherited mapping configuration applied
  • Windows Universal Apps supported
  • Fixed NuGet package to not have DLL in project
  • iOS confirmed to work
  • ReverseMap ignores both directions (only one Ignore() or IgnoreMap attribute needed)
  • Preconditions on member mappings (called before resolving anything)
  • Exposing ResolutionContext everywhere, including current mapping engine instance

A lot of small improvements, too. I’ve ensured that every new extension to the public API includes code documentation. The toughest part of this release was coming up with a good solution to the multi-platform support and MSBuild’s refusal to copy indirect references to all projects.

As always, if you find any issues with this release, please report over on GitHub.


Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

Testing on the Toilet: Test Behaviors, Not Methods

Google Testing Blog - Mon, 04/14/2014 - 23:25
by Erik Kuefler

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

After writing a method, it's easy to write just one test that verifies everything the method does. But it can be harmful to think that tests and public methods should have a 1:1 relationship. What we really want to test are behaviors, where a single method can exhibit many behaviors, and a single behavior sometimes spans across multiple methods.

Let's take a look at a bad test that verifies an entire method:

@Test public void testProcessTransaction() {
  User user = newUserWithBalance(LOW_BALANCE_THRESHOLD.plus(dollars(2)));
  transactionProcessor.processTransaction(
      user,
      new Transaction("Pile of Beanie Babies", dollars(3)));
  assertContains("You bought a Pile of Beanie Babies", ui.getText());
  assertEquals(1, user.getEmails().size());
  assertEquals("Your balance is low", user.getEmails().get(0).getSubject());
}

Displaying the name of the purchased item and sending an email about the balance being low are two separate behaviors, but this test looks at both of those behaviors together just because they happen to be triggered by the same method. Tests like this very often become massive and difficult to maintain over time as additional behaviors keep getting added in—eventually it will be very hard to tell which parts of the input are responsible for which assertions. The fact that the test's name is a direct mirror of the method's name is a bad sign.

It's a much better idea to use separate tests to verify separate behaviors:

@Test public void testProcessTransaction_displaysNotification() {
  transactionProcessor.processTransaction(
      new User(), new Transaction("Pile of Beanie Babies"));
  assertContains("You bought a Pile of Beanie Babies", ui.getText());
}

@Test public void testProcessTransaction_sendsEmailWhenBalanceIsLow() {
  User user = newUserWithBalance(LOW_BALANCE_THRESHOLD.plus(dollars(2)));
  transactionProcessor.processTransaction(user,
      new Transaction(dollars(3)));
  assertEquals(1, user.getEmails().size());
  assertEquals("Your balance is low", user.getEmails().get(0).getSubject());
}

Now, when someone adds a new behavior, they will write a new test for that behavior. Each test will remain focused and easy to understand, no matter how many behaviors are added. This will make your tests more resilient since adding new behaviors is unlikely to break the existing tests, and clearer since each test contains code to exercise only one behavior.
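The tests above lean on helpers from the original post (transactionProcessor, newUserWithBalance, ui). The same one-behavior-per-test shape can be shown self-contained; every name below (Notifier, Account, purchase, the low-balance threshold) is hypothetical and stands in for the article's transaction processor:

```java
// Self-contained sketch of one-behavior-per-test. All names here are
// hypothetical stand-ins for the article's transaction processor, user, and UI.
import java.util.ArrayList;
import java.util.List;

class Notifier {
    final List<String> messages = new ArrayList<>();
    void send(String message) { messages.add(message); }
}

class Account {
    private int balance;
    private final Notifier notifier;

    Account(int balance, Notifier notifier) {
        this.balance = balance;
        this.notifier = notifier;
    }

    // One method, two behaviors: it announces the purchase AND warns on low balance.
    void purchase(String item, int price) {
        balance -= price;
        notifier.send("You bought a " + item);
        if (balance < 10) {
            notifier.send("Your balance is low");
        }
    }
}

class BehaviorTests {
    // Behavior 1: a purchase announces what was bought.
    static void purchase_displaysNotification() {
        Notifier notifier = new Notifier();
        new Account(100, notifier).purchase("Book", 5);
        if (!notifier.messages.contains("You bought a Book"))
            throw new AssertionError("missing purchase notification");
    }

    // Behavior 2: a purchase that drains the balance triggers the warning.
    static void purchase_warnsWhenBalanceIsLow() {
        Notifier notifier = new Notifier();
        new Account(12, notifier).purchase("Book", 5);
        if (!notifier.messages.contains("Your balance is low"))
            throw new AssertionError("missing low-balance warning");
    }

    public static void main(String[] args) {
        purchase_displaysNotification();
        purchase_warnsWhenBalanceIsLow();
        System.out.println("ok");
    }
}
```

Each test exercises exactly one behavior, so a change to the low-balance rule can only break the second test.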

Categories: Blogs

Selenium For Pythonistas

Testing TV - Mon, 04/14/2014 - 22:01
If you’re a full-stack Python developer who wants to incorporate functional tests into your toolkit, this presentation is aimed at helping you understand the why and when of Selenium. Learn about writing robust and maintainable tests for your web apps that can be run in any continuous integration setup. Video producer:
Categories: Blogs

Agile Lagging to Leading Metric Path

Even in an Agile environment there is a benefit to applying measures to understand progress. It can be tempting to apply the same iron-triangle metrics (based on cost, schedule, and scope) that may have been used in a more traditional mindset to Agile projects and initiatives. Instead, I suggest removing all of those metrics and starting with a clean slate. On the clean slate, consider your outcomes.

An Agile mindset asks that you consider outcome instead of output as a measure of success. This means we should first start by understanding our desired outcomes for an initiative or project. Within a business context of building products, one measure of success is an increase in revenue. Having a customer revenue metric helps you understand whether the products being built are increasing revenue upon release. While capturing revenue is a good starting point, it is a “lagging” indicator, meaning you don’t see evidence of revenue movement until after the release is in production and has been in the marketplace for a period of time.
To supplement lagging measures, you need corresponding leading measures and indicators that provide you with visibility during development to gauge if you are moving the product into a position of increased revenue. I call this framework the Lagging to Leading Metric Path.  This visibility is important because it provides input for making decisions as you move forward. Making the right decision leads to improved results. As you consider measures, think about how they help you gain visibility and information for decisions in building a product that helps you lead toward an increase in revenue.
Hoping for an increase in customer revenue, what leading metrics can we put in place to ensure we are moving in the right direction? Let’s say in this case that increased revenue is the hoped-for lagging metric, based on expected customer sales. Examples of leading measures or indicators that support this lagging metric of increased customer revenue include:
  • Customers attending Sprint Review: a leading metric that captures how many customers actually attend the sprint review and how much feedback they give. This indicates engagement and interest.
  • Customer satisfaction from Sprint Review: a leading metric that captures customer satisfaction with the functionality they viewed in the sprint review. This indicates levels of satisfaction as the product is being built.
  • Customer satisfaction of product usage: an indicator from the most recent release highlighting satisfaction with usage of the current product, including commentary.

When applying Agile to product development, the outcomes that matter most are often represented by lagging metrics. Therefore you will need leading indicators to ensure you are moving in the right direction, to provide visibility, and to help you with decision-making. Within your own context, consider constructing a lagging to leading metric path so that you know you are moving in the right direction during your Agile journey.

Note: the lagging to leading metric path really isn't specific to Agile and I would suggest applying this to an initiative or project aligning with any mindset, process, method, or practice of delivering value.

To read more about establishing an Agile Lagging to Leading Metric Path and Agile Measures of Success, consider reading Chapter 14 of Being Agile.
Categories: Blogs

Variable Testers

James Bach's Blog - Sun, 04/13/2014 - 23:55

I once heard a vice president of software engineering tell his people that they needed to formalize their work. That day, I was an unpaid consultant in the building to give a free seminar, so I had even less restraint than normal about arguing with the guy. I raised my hand, “I don’t think you can mean that, sir. Formality is about sameness. Are you really concerned that your people are working in different ways? It seems to me that what you ought to be concerned about is effectiveness. In other words, get the job done. If the work is done a different way every time, but each time done well, would you really have a problem with that? For that matter, do you actually know how your folks work?”

This was years ago. I’m wracking my brain, but I can’t remember specifically how the executive responded. All I remember is that he didn’t reply with anything very specific and did not seem pleased to be corrected by some stranger who came to give a talk.

Oh well, it had to be done.

I have occasionally heard the concern from managers that testers are variable in their work; that some testers are better than others; and that this variability is a problem. But variability is not a problem in and of itself. When you drive a car, there are different cars on the road each day, and you have to make different patterns of turning the wheel and pushing the brake. So what?

The weird thing is how utterly obvious this is. Think about managers, designers, programmers, product owners… think about ANYONE in engineering. We are all variable. Complaining about testers being variable, as if that were a special case, seems bizarre to me… unless…

I suppose there are two things that come to mind which might explain it:

1) Maybe they mean “testers vary between satisfying me and not satisfying me, unlike other people, who always satisfy me.” To examine this we would discover what their expectations are. Maybe they are reasonable or maybe they are not. Maybe a better system for training and leading testers is needed.

2) Maybe they mean “testing is a strictly formal process that by its nature should not vary.” This is a typical belief by people who know nothing about testing. What they need is to have testing explained or demonstrated to them by someone who knows what he’s doing.






Categories: Blogs

Exploratory Tumbling

Hiccupps - James Thomas - Sat, 04/12/2014 - 07:43
A short questionnaire:

1. Do you ever find yourself navigating unfamiliar territory in search of areas that return some value?
2. Are you a bacterium?

If your answers were (no, no) or (yes, yes) feel free to stop reading now.

I was listening to a podcast, The Biology of Freedom, from the BBC's Discovery programme this week. Towards the end they talk about how cells move around seeking food using a kind of targeted random walk.

It's called chemotaxis: "[a bacterium's] movement will look like ... relatively straight swims interrupted by random tumbles that reorient [it] ... By repeatedly evaluating their course ... bacteria can direct their motion to find favorable locations with high concentrations of attractants"

A short questionnaire:

1. Would you be interested in a heuristic that might help guide your exploration?
2. Are you a tester?

If your answers are (yes, yes) there might be the germ of an idea for you here.
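For the curious, the run-and-tumble search can be sketched as a toy simulation; the starting point, the signal function, and the seed are all invented for illustration:

```java
// Toy sketch of a chemotaxis-style "run and tumble" search (invented for
// illustration; not from the post). The walker keeps its heading while the
// signal improves and reorients at random ("tumbles") when it stops improving.
import java.util.Random;

class RunAndTumble {
    // Signal strength grows as the walker approaches the source at (0, 0).
    static double signal(double x, double y) {
        return -Math.hypot(x, y);
    }

    // Runs the walk and returns the final distance from the source.
    static double explore(long seed, int steps) {
        Random rnd = new Random(seed);
        double x = 50, y = 50;                 // start well away from the source
        double angle = rnd.nextDouble() * 2 * Math.PI;
        double last = signal(x, y);
        for (int i = 0; i < steps; i++) {
            x += Math.cos(angle);              // "run": one unit step along the heading
            y += Math.sin(angle);
            double now = signal(x, y);
            if (now <= last) {                 // no improvement: "tumble" to a new heading
                angle = rnd.nextDouble() * 2 * Math.PI;
            }
            last = now;
        }
        return Math.hypot(x, y);
    }

    public static void main(String[] args) {
        System.out.println("start distance: " + Math.hypot(50, 50));
        System.out.println("end distance:   " + explore(42, 2000));
    }
}
```

With no memory beyond "did the last step help?", the walker homes in on the source. That is the appeal of the heuristic: repeatedly evaluate your course, and reorient cheaply when the current direction stops paying off.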
Categories: Blogs

Community Update 2014-04-11–#webdev, #aspnet, #dotnet, #python for #visualstudio, #azure #webjobs

Decaying Code - Maxime Rouiller - Fri, 04/11/2014 - 23:08


Must Read/Watch

Roslyn (.NET Compiler Platform) As Open Source - Leaning Into Windows

Banking Example Again | Greg Young's Blog

The story behind the wallpaper we'll never forget – The history behind the Windows XP wallpaper.

Web Development

How to make an Object inherit from a Class in JavaScript – Max Schmitt

How to follow the Google webmaster guidelines

Coherent Labs » Announcing Unreal Engine 4 and CRYENGINE integration – Unreal engine with HTML5 and JavaScript

Implementing Private and Protected Members in JavaScript — Philip Walton

ColorBrewer: Color Advice for Maps

Debugging Asynchronous JavaScript with Chrome DevTools - HTML5 Rocks

jQuery Conf Video: Understanding Scope in JavaScript - Quick Left Boulder Colorado

Improve your JavaScript with Web Essentials and JSHint

Offline.js – Handle your users losing their internet connection like a pro

Python Tools for Visual Studio 2.1 Beta

Prototype Members vs Static Members vs Instance Members (and Dependency Injection)

Micro-JSON - a JSON parser for the .Net Micro Framework

Async Processing in EF6 and the Microsoft .NET Framework 4.5 -- Visual Studio Magazine

A new search experience on the Gallery

Intellisense for JSON Schema in the JSON Editor

Windows Azure

Azure WebJobs 104 - Hosting and testing WebJobs in .NET with the WebJobs SDK with Pranav Rastogi

Categories: Blogs

Community Update 2014-04-09 – #thinktecture new preview, #angularjs, #heartbleed, #javascript, #nancyfx and more

Decaying Code - Maxime Rouiller - Thu, 04/10/2014 - 04:00

So for those who are wondering, there’s this big thing called “Heartbleed”. The “too long; didn’t read” version is that it only impacts OpenSSL, and therefore IIS is not affected.

The bad thing is, it puts a dent in the trust we had in open source code. What about the rest of the projects we rely on? Questions, questions…

Enjoy the reading!

Must Read

Heartbleed Hotel: The biggest Internet fuckup of all time | Brad's Blog

Bedrock | Infrequently Noted – Very nicely written article about JavaScript.

Thinktecture Special

Announcing Thinktecture IdentityServer v3 – Preview 1

Introducing Thinktecture IdentityManager | brockallen

EF Code First Migrations Deployment to an Azure Cloud Service

NuGet Package of the Week: Humanizer makes .NET data types more human - Scott Hanselman

Image Resizer for Windows Explorer Shell Extension

xUnit – dynamically skipping tests for different test-environments | danielwertheim

Visual Studio extensions for web developers

Nadeem Afana's blog · ASP.NET MVC 5 Internationalization · Date and Time

Opt in and opt out from ASP.NET Web API Help Page - StrathWeb

Bootstrapping AngularJS Applications with Server-Side Data from ASP.NET MVC & Razor – Marius Schulz

Token Authentication · NancyFx/Nancy Wiki · GitHub

Windows Azure

Introducing the Microsoft Azure Management Libraries - Jeff Wilcox

Categories: Blogs

Announcing IdentityServer v3 and IdentityManager v1 – From @BrockLAllen

Decaying Code - Maxime Rouiller - Wed, 04/09/2014 - 17:35

So Brock Allen just announced that the preview of his famous identity servers is hitting another milestone. Of course it’s only Preview 1, but it’s worth it.

IdentityServer is the best free OAuth/OpenID server there is right now. v3 brings us full OWIN support with smaller deployable endpoints, a default login screen for any login, and more. This also means it is deployable outside of IIS.

As for IdentityManager, it’s a reboot of the ASP.NET Configuration tool that has been in Visual Studio forever: a better interface, and 100% compatible with the latest ASP.NET Identity bits.

So if you are looking for an OAuth server, check out IdentityServer v2 (while waiting for v3 to go full release) and secure your endpoints.

Here are some links that were given by Brock:

IdentityServer v3

Blog post:

Introduction video:

Samples video:

Extending IdentityServer v3 video:

IdentityManager v1 – Identity/User Management for MembershipReboot and ASP.NET Identity

Blog post:

Introduction video:

Categories: Blogs

Using AutoMapper to perform LINQ aggregations

Jimmy Bogard - Tue, 04/08/2014 - 18:41

In the last post I showed how AutoMapper and its LINQ projection can prevent SELECT N+1 problems and other lazy loading problems. That was pretty cool, but wait, there’s more! What about complex aggregation? LINQ can support all sorts of interesting queries that, when executed in memory, could result in really inefficient code.

Let’s start small: what if, in our model of courses and instructors, we wanted to display the number of courses an instructor teaches and the number of students in a class? This is easy to do in the view:

@foreach (var item in Model.Instructors)
<!-- later down -->
@foreach (var item in Model.Courses)
    <tr class="@selectedRow">

But at runtime this will result in another SELECT for each row to count the items:


We could eager fetch those rows ahead of time, but this is also less efficient than just performing a SQL correlated subquery to calculate that SUM. With AutoMapper, we can just declare this property on our ViewModel class:

public class CourseModel
{
    public int CourseID { get; set; }
    public string Title { get; set; }
    public string DepartmentName { get; set; }
    public int EnrollmentsCount { get; set; }
}

AutoMapper can recognize extension methods, and automatically looks for System.Linq extension methods. The underlying expression created looks something like this:

courses =
    from i in db.Instructors
    from c in i.Courses
    where i.ID == id
    select new InstructorIndexData.CourseModel
    {
        CourseID = c.CourseID,
        DepartmentName = c.Department.Name,
        Title = c.Title,
        EnrollmentsCount = c.Enrollments.Count()
    };

LINQ providers can recognize that aggregation and use it to alter the underlying query. Here’s what that looks like in SQL Profiler:

SELECT
    [Project1].[CourseID] AS [CourseID], 
    [Project1].[Title] AS [Title], 
    [Project1].[Name] AS [Name], 
    (SELECT
        COUNT(1) AS [A1]
        FROM [dbo].[Enrollment] AS [Extent5]
        WHERE [Project1].[CourseID] = [Extent5].[CourseID]) AS [C1]
FROM --etc etc etc

That’s pretty cool, just create the property with the right name on your view model and you’ll get an optimized query built for doing an aggregation.

But wait, there’s more! What about more complex operations? It turns out that we can do whatever we like in MapFrom as long as the query provider supports it.

Complex aggregations

Let’s do something more complex. How about counting the number of students whose name starts with the letter “A”? First, let’s create a property on our view model to hold this information:

public class CourseModel
{
    public int CourseID { get; set; }
    public string Title { get; set; }
    public string DepartmentName { get; set; }
    public int EnrollmentsCount { get; set; }
    public int EnrollmentsStartingWithA { get; set; }
}

Because AutoMapper can’t infer what the heck this property means, since there’s no match on the source type even including extension methods, we’ll need to create a custom mapping projection using MapFrom:

cfg.CreateMap<Course, InstructorIndexData.CourseModel>()
    .ForMember(m => m.EnrollmentsStartingWithA, opt => opt.MapFrom(
        c => c.Enrollments.Where(e => e.Student.LastName.StartsWith("A")).Count()));

At this point, I need to make sure I select the overloads for the aggregation methods that are supported by my LINQ query provider. There’s another overload of Count() that takes a predicate to filter items, but it’s not supported. Instead, I need to chain a Where then Count. The SQL generated is now efficient:

SELECT
    [Project2].[CourseID] AS [CourseID], 
    [Project2].[Title] AS [Title], 
    [Project2].[Name] AS [Name], 
    [Project2].[C1] AS [C1], 
    (SELECT
        COUNT(1) AS [A1]
        FROM [dbo].[Enrollment] AS [Extent6]
        INNER JOIN [dbo].[Person] AS [Extent7]
            ON ([Extent7].[Discriminator] = N'Student')
            AND ([Extent6].[StudentID] = [Extent7].[ID])
        WHERE ([Project2].[CourseID] = [Extent6].[CourseID])
            AND ([Extent7].[LastName] LIKE N'A%')) AS [C2]

This is a lot easier than me pulling back all students and looping through them in memory. I can go pretty crazy here, but as long as those query operators are supported by your LINQ provider, AutoMapper will pass through your MapFrom expression to the final outputted Select expression. Here’s the equivalent Select LINQ projection for the above example:

courses =
    from i in db.Instructors
    from c in i.Courses
    where i.ID == id
    select new InstructorIndexData.CourseModel
    {
        CourseID = c.CourseID,
        DepartmentName = c.Department.Name,
        Title = c.Title,
        EnrollmentsCount = c.Enrollments.Count(),
        EnrollmentsStartingWithA = c.Enrollments
            .Where(e => e.Student.LastName.StartsWith("A")).Count()
    };

As long as you can LINQ it, AutoMapper can build it. This combined with preventing lazy loading problems is a compelling reason to go the view model/AutoMapper route, since we can rely on the power of our underlying LINQ provider to build out the correct, efficient SQL query better than we can. That, I think, is wicked awesome.


Categories: Blogs

Testers should learn to code?

Dorothy Graham Blog - Tue, 04/08/2014 - 17:10
It seems to be the "perceived wisdom" these days that if testers want to have a job in the future, they should learn to write code. Organisations are recruiting "developers in test" rather than testers. Using test automation tools (directly) requires programming skills, so the testers should acquire them, right?

I don't agree, and I think this is a dangerous attitude for testing in general.

Here's a story of two testers:

  • Les has a degree in Computer Science, started out in a traditional test team, and now works in a multi-disciplinary agile team. Les is a person who likes to turn a hand to whatever needs doing, and enjoys a technical challenge. Les is very happy to write code, and has recently started coding for a recently acquired test automation tool, making sure that good programming practices are applied to the testware and test code. Les is very happy as a developer-tester.
  • Fran came into testing through the business. Started out being a user who was more interested in any new release from IT than the other users, so became the “first user”. Got drawn into the user acceptance test group and enjoyed testing – found things that the technical people missed, due to a good business background. With training in testing techniques, Fran became a really good tester, providing great value to the organization. Probably saved them hundreds of thousands of pounds a year by advising on new development and testing from a user perspective. Fran never wanted anything to do with code.

What will happen when the CEO hears: “Testers should learn to code”? Les’s job is secure, but what about Fran? I suspect that Fran is already feeling less valued by the organisation and is worried about job security, in spite of having provided a great service for years as an excellent software tester.

So what’s wrong with testers who write code?
  • absolutely nothing
  • for testers who want to code, who enjoy it, who are good at it
  • for testers in agile teams

Why is this a dangerous attitude for testing in general?
  • it reads as “all testers should write code” and is taken as that by managers who are looking to get rid of people
  • not all testers will be good at it or want to become developers (maybe that's why they went into testing)
  • it implies that “the only good tester is one who can write code”
  • it devalues testing skills (now we want coders, not [good] testers; in fact, if coders can test, why do we need specialist testers anyway?)
  • tester-developers may "go native" and be pushed into development, so we lose more testing skills
  • it's not right to force good testers out of our industry
So I say, let's stand up for testing skills, and for non-developer testers!

Categories: Blogs

Practical Unit Tests for Video Games

Testing TV - Tue, 04/08/2014 - 16:36
Learn how unit testing can be successfully applied to video game development, and the benefits and drawbacks of this technique. You will see real-life examples of common unit test anti-patterns, understand the problems caused by these anti-patterns, and how to avoid them. Unit testing is common in the wider software world, but it has not […]
Categories: Blogs

Community Update 2014-04-04– #bldwin Day 3, #aspconf on async with #aspnet, #nodejs and #WindowsAzure

Decaying Code - Maxime Rouiller - Sat, 04/05/2014 - 04:45
Categories: Blogs

Community Update 2014-04-03 – #bldwin Day 2 Special– #roslyn going #oss, #windowsazure, #nodejs and #dotnet going native

Decaying Code - Maxime Rouiller - Fri, 04/04/2014 - 04:00

As always, BUILD 2014 videos might not be there when you click on them today. They should be there as soon as possible.

As a side note, Roslyn is open source. My jaw is on the floor. Wow. Impressive.


BUILD 2014

BUILD: Day 1 Keynote Summary – If you still haven’t watched it, I would. However, if you’d rather have the summary, here it is.

Building Enterprise and SaaS Web Apps and Web APIs using Azure Active Directory for Sign In

Visual Studio and .NET Overview

The Present and Future of .NET in a World of Devices and Services

Thinking for Programmers

Building Web APIs for Mobile Apps Using ASP.NET Web API 2.1

Puppet and Azure: Bringing DevOps to the Enterprise

Powerful Mobile Apps with Mobile Services and ASP.NET Web API

.NET Community & Open Source

Web Development

Querying An In-Memory Array Of JavaScript Objects In NodeJS

Announcing new web features in Visual Studio 2013 Update 2 RC

Announcing .NET Native Preview – Compiling .NET to native? HERESY (dipped in chocolate)!

.NET Compiler Platform ("Roslyn") - Home – Roslyn. C# compiler? Open-source? Get me a chicken with teeth!

Microsoft “Roslyn” CTP

Visual Studio 2013 Update 2 RC Downloads - Release Candidates (RC), Betas, and Previews

Using AutoMapper to prevent SELECT N+1 problems | Jimmy Bogard's Blog

Denis Huvelle: Tips for ASP.NET MVC 4: lowercase URLs

State of Microsoft Security: ASP.NET Identity 2.0

I’m throwing in the towel in FubuMVC | The Shade Tree Developer

Windows Azure

Available Now: Preview of Project “Orleans” – Cloud Services at Scale

Adapting The Azure Queue API For Node.js

Packages, tools and more

Mexedge Stylesheet Extension – Allows you to visualize your CSS files as a tree view

Voice Commands extension – Voice commands for Visual Studio

Architecture and Methodology

The end of ORM - Gumtree Dev Team

Must have books

Designing Evolvable Web APIs with ASP.NET: Glenn Block, Pablo Cibraro, Pedro Felix, Howard Dierking, Darrel Miller: 9781449337711: Books

Categories: Blogs

Roslyn End-User Preview – What is it and what is possible? – #build2014 #bldwin version

Decaying Code - Maxime Rouiller - Thu, 04/03/2014 - 23:40
What is Roslyn?

Roslyn is a completely new compiler for .NET. However, it’s more than just a simple compiler: we used to call it a “compiler as a service”; now they call it the .NET Compiler Platform.

What’s new?

Well, it ain’t your run-of-the-mill compiler. It doesn’t just take code and output machine code (or IL, in .NET’s case).

This compiler allows you to participate in the compilation of your software and tell the compiler what to do with it. Scenarios like aspect-oriented programming become relatively trivial and don’t require plugins or post-build events.

You have a cool refactoring that you want to implement in a very specific way? Want to convert properties with certain attributes to code blocks? Just code it. Roslyn allows you to integrate your refactoring within Visual Studio directly and share it with everyone. One specific scenario would be to encode company coding guidelines directly within a VSIX that you deploy on every developer’s machine. That way all developers follow the same rules where the company is concerned, which could definitely give an edge to a company that wants to standardise code quality directly at the source.


Basically, it comes with three types of APIs: features, workspaces (solutions, projects, files), and the compiler APIs.

Features are based around refactoring and fixing code. Those are high-level pieces of functionality tightly linked to Visual Studio. The workspace APIs relate to code formatting, finding references, etc.; they are also linked to Visual Studio. The compiler APIs relate to syntax trees, emitting code, analysing the flow of code… they are the lowest-level APIs, closest to the compiler, and also the most interesting.

What’s coming?

Well… technically you now have access to the C# compiler with an Apache 2 licence.

Here’s what is now possible…

Simple scenario #1 – Creating a new refactoring

Using the Roslyn SDK, I create a new Visual C# > Roslyn > Code Refactoring.

The default template reverses a type’s name. So I press F5, create a new project, create a class, and hit ALT + . on that class.

I now have an additional refactoring option which turns my class “ThisTest” into “tseTsihT”, with live preview. This is nothing fancy, but it’s a refactoring you are 100% in control of, and it doesn’t require external tools.

I know. This refactoring is useless. If something is valuable for you, you can simply implement it or wait for someone in the community to develop it.

Simple Scenario #2 – Flagging improperly named fields

So let’s say we want to flag any field that uses the old “m_something” convention. Doing this is as simple as the following code:

/// <summary>
/// This is used to identify where problems are.
/// </summary>
[ExportDiagnosticAnalyzer("NoMUnderscore", LanguageNames.CSharp)]
internal class FieldsDoNotStartWithMUnderscore : ISyntaxNodeAnalyzer<SyntaxKind>
{
    public const string RemoveMDiagnosticId = "NoMUnderscore";
    public static readonly DiagnosticDescriptor RemoveMUnderscoreRule =
        new DiagnosticDescriptor(RemoveMDiagnosticId,
                                 "Remove m_",
                                 "Invalid name. Field name must not start with m_",
                                 "Naming",                   // category
                                 DiagnosticSeverity.Warning, // severity
                                 isEnabledByDefault: true);

    public ImmutableArray<SyntaxKind> SyntaxKindsOfInterest
    {
        get { return ImmutableArray.Create(SyntaxKind.FieldDeclaration); }
    }

    public ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
    {
        get { return ImmutableArray.Create(RemoveMUnderscoreRule); }
    }

    private static bool CanHaveTheMRemoved(FieldDeclarationSyntax fieldDeclaration, SemanticModel semanticModel)
    {
        if (!fieldDeclaration.Modifiers.Any(SyntaxKind.PrivateKeyword))
            return false;

        var token = fieldDeclaration.Declaration.GetLastToken();
        return token.Text.StartsWith("m_");
    }

    public void AnalyzeNode(SyntaxNode node, SemanticModel semanticModel, Action<Diagnostic> addDiagnostic, CancellationToken cancellationToken)
    {
        if (CanHaveTheMRemoved((FieldDeclarationSyntax)node, semanticModel))
            addDiagnostic(Diagnostic.Create(RemoveMUnderscoreRule, node.GetLocation()));
    }
}

/// <summary>
/// This is used to integrate with Visual Studio's refactoring capabilities.
/// </summary>
[ExportCodeFixProvider("NoMUnderscore", LanguageNames.CSharp)]
internal class CodeFixProvider : ICodeFixProvider
{
    public IEnumerable<string> GetFixableDiagnosticIds()
    {
        return new[] { FieldsDoNotStartWithMUnderscore.RemoveMDiagnosticId };
    }

    public async Task<IEnumerable<CodeAction>> GetFixesAsync(Document document, TextSpan span, IEnumerable<Diagnostic> diagnostics, CancellationToken cancellationToken)
    {
        var root = await document.GetSyntaxRootAsync(cancellationToken);
        var diagnosticSpan = diagnostics.First().Location.SourceSpan;
        var declaration = root.FindToken(diagnosticSpan.Start).Parent.AncestorsAndSelf().OfType<FieldDeclarationSyntax>().First();
        return new[] { CodeAction.Create(FieldsDoNotStartWithMUnderscore.RemoveMUnderscoreRule.Description, c => RemoveMAsync(document, declaration, c)) };
    }

    private async Task<Document> RemoveMAsync(Document document, FieldDeclarationSyntax fieldDeclaration, CancellationToken cancellationToken)
    {
        var nameToken = fieldDeclaration.Declaration.GetLastToken();
        var newNameToken = SyntaxFactory.Identifier(nameToken.Text.Replace("m_", ""));

        var variableDeclarationSyntax = fieldDeclaration.Declaration.ReplaceToken(nameToken, newNameToken);

        var newLocal = fieldDeclaration.WithDeclaration(variableDeclarationSyntax);

        var formattedLocal = newLocal.WithAdditionalAnnotations(Formatter.Annotation);

        var originalRoot = await document.GetSyntaxRootAsync(cancellationToken);
        var newSyntaxRoot = originalRoot.ReplaceNode(fieldDeclaration, formattedLocal);

        return document.WithSyntaxRoot(newSyntaxRoot);
    }
}

This requires quite a bit of code. However, it is possible to regroup and refactor common operations on certain elements (fields, constructors, etc.) to reduce the amount of code.

Of course, this code is not production ready and not unit tested. Do not take it as is. It is full of bugs and is not ready for any type of environment. This is only to show what is possible.


This of course is just the beginning; I’ve just shown you what is possible. It took me less than an hour to prepare these two examples. With more time, it would be possible to create some very complex scenarios of very high quality.

We’re living in a crazy world right now. We’re getting more and more control over the code that we write. The possibilities that are opening up when we can interact to something as low-level as the compiler are just breathtaking.

For me, these are tools built for developers, for our needs… and, of course, fun to use.


Get Roslyn Now

Roslyn Roadmap

Language Features implementation status

Roslyn Sample and Walkthrough

Categories: Blogs