
Blogs

AutoMapper support for Xamarin Unified (64 bit)

Jimmy Bogard - Fri, 01/30/2015 - 16:00

I pushed out a prerelease package of AutoMapper for Xamarin Unified, including 64-bit support for iOS.

http://www.nuget.org/packages/AutoMapper/

If you’ve had issues with Xamarin on 64-bit iOS, removing and re-adding the AutoMapper NuGet package reference should do the trick.

And yes, I verified this on a 64-bit device, not the simulator that is full of false hope and broken dreams.

Enjoy!


Categories: Blogs

A1QA Interview

The Social Tester - Fri, 01/30/2015 - 12:07

This week I was interviewed by A1QA for their blog. You can read the interview here. I spoke about whether testers need to learn to code, why the product you work on influences how you feel about testing, why recruiting...


Categories: Blogs

Two new Online Courses: Effective Total Command and FinalBuilder Training Videos

ISerializable - Roy Osherove's Blog - Fri, 01/30/2015 - 08:50

I’m happy to report that I just finished recording and publishing two new video courses: one on Total Commander and one on FinalBuilder.

Both of these are tools that I rave about in almost any new Windows-based dev shop I go to. Most people have at least heard of Total Commander, but never bothered to find out why people like it, even though it looks ugly. Almost nobody has heard of FinalBuilder, but those who have always get a small glitter in their eye, because they know what I mean when I say you can automate pretty much 99% of your problems away with this stuff.

Both of these together (coupled with the Everything search engine and the Sysinternals tools) are a godsend to anyone who hates repeating themselves.

I hope you find these courses helpful. 

Categories: Blogs

Time to jump - a test jump

Stefan Thelenius about Software Testing - Thu, 01/29/2015 - 22:16

For almost a year now I have been a half-time member in two agile teams. We did a split when the original team became too big (>10).

Working on sprints in parallel has been a challenge, but it suits me well since I usually like to have a lot of (test) threads ongoing. When needed, I do one-hour test sessions for better focus and test depth.

The test focus in my teams has become very strong during this time, so I have felt for a while that my skills would probably be of more use in other teams within the company.

James Bach has a great post about a testing role called Test jumper. When I read it now I realize it is very close to what I do nowadays.

It is time to jump...

Reminder: Don't miss the early bird offer for Let's Test 2015

My session is about testability and I will present some of my SUT's awesome built-in testability features - don't miss it!




Categories: Blogs

Clean Tests: A Primer

Jimmy Bogard - Thu, 01/29/2015 - 16:25

Over the course of my career, I’ve had the opportunity to work with a number of long-lived codebases. Ones that I’ve been a part of since commit one and that continue on for six or seven years. Over that time, I’ve seen how my opinions on writing tests have changed. They’ve gone from mid-2000s mock-heavy TDD, to story-driven BDD (I even wrote an ill-advised framework, NBehave), to context/spec BDD. I looked at more exotic testing frameworks, such as MSpec and NSpec.

One advantage I see in working with codebases for many years is that certain truths start to arise that normally you wouldn’t catch if you only work with a codebase for a few months. And one of the biggest truths to arise is that simple beats clever. Looking at my tests, especially in long-lived codebases, the ability for me to understand behavior in a test quickly and easily is the most important aspect of my tests.

Unfortunately, this has meant that for most of the projects I’ve worked with, I’ve had to fight against testing frameworks more than work with them. Convoluted test hierarchies, insufficient extensibility, breaking changes and pipelines are some of the problems I’ve had to deal with over the years.

That is, until an enterprising coworker, Patrick Lioi, started authoring a testing framework that (inadvertently) addressed all of my concerns and frustrations with testing frameworks.

In short, I wanted a testing framework that:

  • Was low, low ceremony
  • Allowed me to work with different styles of tests
  • Favored composition over inheritance
  • Actually looked like code I was writing in production
  • Allowed me to control lifecycle, soup to nuts

Testing frameworks are opinionated, but normally not in a good way. I wanted to work with a testing framework whose opinions were that it should be up to you to decide what good tests are. Because what I’ve found is that testing frameworks don’t keep up with my opinions, nor are they flexible in the vectors in which my opinions change.

That’s why for every project I’ve been on in the last 18 months or so, I’ve used Fixie as my test framework of choice. I want tests as clean as this:

using Should;

public class CalculatorTests
{
    public void ShouldAdd()
    {
        var calculator = new Calculator();
        calculator.Add(2, 3).ShouldEqual(5);
    }

    public void ShouldSubtract()
    {
        var calculator = new Calculator();
        calculator.Subtract(5, 3).ShouldEqual(2);
    }
}

I don’t want gimmicks, I don’t want clever, I want code that actually matches what I do. I don’t want inheritance, I don’t want restrictions on fixtures, I want to code my test how I code everything else. I want to build different rules based on different test organization patterns:

public class ApproveInvoiceTests {
    private Invoice _invoice;
    private CommandResult _result;
    
    public ApproveInvoiceTests(TestContext context) {
        var invoice = new Invoice("John Doe", 30m);
        
        context.Save(invoice);
        
        var message = new ApproveInvoice(invoice.Id);
        
        _result = context.Send(message);
        
        _invoice = context.Reload(invoice);
    }
    
    public void ShouldApproveInvoice() {
        _invoice.Status.ShouldEqual(InvoiceStatus.Approved);
    }
    
    public void ShouldRaiseApprovedEvent() {
        _result.Events.OfType<InvoiceApproved>().Count().ShouldEqual(1);
    }
}

Fixie gives me this, and no other framework I’ve tried matches its flexibility. Fixie’s philosophy is that assertions shouldn’t be a part of your test framework. Executing tests is what a test framework should provide out of the box, but test discovery, pipelines and customization should be completely up to you.
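
To give a flavour of what that customization looks like, here is a rough sketch of a custom Fixie convention. This is a minimal sketch assuming the Convention base class API of that era; the exact member names have shifted between Fixie versions, so treat them as illustrative:

using Fixie;

// Sketch of a custom convention (member names are illustrative): classes ending
// in "Tests" are test classes, and every public void method on them is a test.
// No attributes and no required base classes.
public class CustomConvention : Convention
{
    public CustomConvention()
    {
        Classes
            .NameEndsWith("Tests");

        Methods
            .Where(method => method.IsPublic && method.ReturnType == typeof(void));
    }
}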

In the next few posts, I’ll detail how I like to use Fixie to build clean tests, where I’ve stopped fighting the framework and I take control of my tests.


Categories: Blogs

Very Short Blog Posts (23) – No Certification? No Problem!

DevelopSense Blog - Wed, 01/28/2015 - 10:14
Another testing meetup, and another remark from a tester that hiring managers and recruiters won’t call her for an interview unless she has an ISEB or ISTQB certification. “They filter résumés based on whether you have the certification!” Actually, people probably go to even less effort than that; they more likely get a machine to […]
Categories: Blogs

Testing on the Toilet: Change-Detector Tests Considered Harmful

Google Testing Blog - Wed, 01/28/2015 - 02:43
by Alex Eagle

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.


You have just finished refactoring some code without modifying its behavior. Then you run the tests before committing and… a bunch of unit tests are failing. While fixing the tests, you get a sense that you are wasting time by mechanically applying the same transformation to many tests. Maybe you introduced a parameter in a method, and now must update 100 callers of that method in tests to pass an empty string.

What does it look like to write tests mechanically? Here is an absurd but obvious way:
// Production code:
def abs(i: Int)
  return (i < 0) ? i * -1 : i

// Test code:
for (line: String in File(prod_source).read_lines())
  switch (line.number)
    1: assert line.content equals "def abs(i: Int)"
    2: assert line.content equals "return (i < 0) ? i * -1 : i"

That test is clearly not useful: it contains an exact copy of the code under test and acts like a checksum. A correct or incorrect program is equally likely to pass a test that is a derivative of the code under test. No one is really writing tests like that, but how different is it from this next example?
// Production code:
def process(w: Work)
  firstPart.process(w)
  secondPart.process(w)

// Test code:
part1 = mock(FirstPart)
part2 = mock(SecondPart)
w = Work()
Processor(part1, part2).process(w)
verify_in_order
  was_called part1.process(w)
  was_called part2.process(w)

It is tempting to write a test like this because it requires little thought and will run quickly. This is a change-detector test—it is a transformation of the same information in the code under test—and it breaks in response to any change to the production code, without verifying correct behavior of either the original or modified production code.

Change detectors provide negative value, since the tests do not catch any defects, and the added maintenance cost slows down development. These tests should be re-written or deleted.
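
For contrast, here is what a behaviour-focused version of the second example could look like. The types below (Work, FirstPart, SecondPart, Processor) are hypothetical, written out only to keep the sketch self-contained; the point is that the test asserts on the observable result of processing, so it survives refactoring of how the parts collaborate as long as the behaviour is preserved.

using System;

// Hypothetical production types, for illustration only.
public class Work { public bool IsCompleted { get; set; } }
public class FirstPart  { public void Process(Work w) { /* prepare the work */ } }
public class SecondPart { public void Process(Work w) { w.IsCompleted = true; } }

public class Processor
{
    private readonly FirstPart _first;
    private readonly SecondPart _second;

    public Processor(FirstPart first, SecondPart second)
    {
        _first = first;
        _second = second;
    }

    public void Process(Work w)
    {
        _first.Process(w);
        _second.Process(w);
    }
}

// The test checks observable behaviour (the state of the Work item),
// not the order or identity of internal calls.
public class ProcessorTests
{
    public void ShouldCompleteTheWork()
    {
        var work = new Work();

        new Processor(new FirstPart(), new SecondPart()).Process(work);

        if (!work.IsCompleted)
            throw new Exception("Expected the work to be completed after processing.");
    }
}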

Categories: Blogs

Exploratory Automated Tests

Testing TV - Tue, 01/27/2015 - 21:15
When most managers think of automated tests they picture automating what the manual testers do in running the tests. Sometimes this is what we desire, but it isn’t the most powerful way to use test automation. This tutorial is about extending your reach to do testing that cannot be done manually. Few organizations are working […]
Categories: Blogs

New Relic vs. AppDynamics

My Load Test - Mon, 01/26/2015 - 10:36
In recent years, IT projects seem to have stopped asking “which APM solution should we buy”, and have started asking “should we buy New Relic or AppDynamics?” Given the speed at which these two companies are innovating, the many product comparisons available on the web quickly become outdated. This comparison is a little more “high […]
Categories: Blogs

Agile Success Measures for your Agile Transformation

I often get asked, “How do I know when my company is Agile?” While I have various answers, the question led me to construct an Agile measurement framework that helps you guide your Agile transformation toward success. I start by asking, “What outcomes would an organization like to see when they go Agile?” Agile asks that you consider your outcome, instead of your output, as a measure of success. I would suggest that going Agile only makes sense if your outcome is some form of better business results. In other words, being Agile shouldn't itself be the outcome of going Agile. The good news is that organizations are looking for better business results. This could be in the form of shorter lead times, reduced WIP, or an increase in revenue. Sometimes it can be all three. It is important to understand that outcomes are lagging metrics. Now that we have highlighted the importance of outcomes, let’s add two ingredients to give us perspective and help us build the framework.

For the first ingredient, I will take a page from Chapter 2 of the book Being Agile, “Crossing the Agile Chasm”. When we discuss Agile adoption, we are talking about a change to the organizational culture. This is because adopting Agile is more than learning skills or understanding a procedure. It is about adopting a set of values and principles that require a change in people’s behavior and in the culture of an organization. This mindset and culture change takes the most time for an organization to adjust to. According to Paul S. Adler and Aaron Shenhar in “Adapting your Technological Base: The Organizational Challenge”, a culture change is measured in years.
For the second ingredient, I will take a page out of the article Agile Lagging to Leading Metric Path. That article recommends that for every outcome (aka lagging indicator), you supplement it with corresponding leading indicators that provide you with visibility during an Agile transformation. Capturing the leading indicators helps you steer toward a successful Agile transformation. The leading indicators are effectively feedback loops that help you understand whether you are heading toward your outcome.
Now that we have the two key ingredients, the goal is to construct an Agile lagging to leading metric path that recognizes that change takes time and provides us with feedback to adapt toward a more successful Agile transformation. Let’s start with the outcome. For my Agile transformation, the key outcome is that we are seeing better business results for our products, translated into increased revenue for our business. From this, I need to consider what leading indicators help guide me toward better business results. From my Agile transformation experience, I suggest that the two broad leading indicators are adopting the Agile mechanics and embracing the Agile mindset. Here is an illustration of the suggested framework.
This illustrates several conventions. The first is that, from an Agile perspective, in order to get to better business results we must educate folks on the Agile mechanics and the Agile mindset. As we do this, we gain feedback so that we can adapt the Agile journey to ensure a successful Agile transformation and achieve the better business results we are looking for. The second is that applying the Agile mechanics tends to be easier and takes less time, since it only involves learning skills and understanding procedures. Adopting an Agile mindset takes more time, since it requires changes in people’s behavior and an adaptation of the organizational culture. The end result (outcome, or lagging metric) is that we hope to see better business results by first implementing the Agile mechanics and then adapting to an Agile mindset.
The last task at hand is to create measures within each indicator to gauge progress. For the Agile mechanics, capturing a training metric is helpful. In order for people to mechanically adopt Agile, they need some form of education in their role (e.g., Scrum Master, Product Owner, etc.) and education in the procedure (e.g., Scrum, Kanban, etc.). Then you can assess whether the mechanics are being applied. If education doesn’t occur or the mechanics aren’t being followed, how do you expect to do Agile?

As for the Agile mindset indicator, you can assess whether the Scrum Master is exemplifying servant leadership, gauge whether management is allowing for self-organization, assess whether the team believes in the Agile values and principles, and determine whether the product owner and organization are adapting to delivering early and often. If the behaviors behind the Agile mindset are not occurring, how do you expect to be Agile? This is why they are all leading indicators toward getting better business results.
I hope this article helps you establish a framework to more effectively answer “How do we know when we are Agile?”. Many end their journey with adopting the Agile mechanics without adapting their culture toward an Agile mindset. Stopping at the mechanics is why many organizations fail at Agile. This article also highlights that if you are looking for the business benefits that Agile can bring, then establishing an Agile measurement framework based on lagging to leading indicators can help guide you toward a more successful Agile transformation.
Categories: Blogs

Very Short Blog Posts (22): “That wouldn’t be practical”

DevelopSense Blog - Sun, 01/25/2015 - 01:46
I have this conversation rather often. A test manager asks, “We’ve got a development project coming up that is expected to take six months. How do I provide an estimate for how long it will take to test it?” My answer would be “Six months.” Testing begins as soon as someone has an idea for […]
Categories: Blogs

A new level of testing?

Yesterday I saw this awesome video of Lars Andersen: a new level of archery. It is going viral on the web, having been watched over 11 million times within 48 hours. Now watch this movie carefully…

The first time I watched this movie, I was impressed. Having tried archery several times, I know how hard it is to do. Remember Legolas from the Lord of the Rings movies? I thought that was “only” a movie and his shooting speed was greatly exaggerated. But it turns out Lars Andersen is faster than Legolas. My colleague Sander sent me an email about the movie I had just watched, saying it was an excellent example of craftsmanship, something we had been discussing earlier that week. So I watched the movie again…

Also read what Lars has to say in the comments on YouTube and make sure you read his press release.

This movie exemplifies the importance of practice and skills! It explains archery the way a context-driven tester would explain his testing…

0:06 These skills have long since been forgotten. But master archer Lars Andersen is trying to reinvent what has been lost…

Skills are the backbone of everything being done well. So in testing, skills are essential too. I’ll come back to that later on. And the word reinvent triggers me as well. Every tester should reinvent his own testing. Only by going very deep, understanding every single bit, and practicing more and more, will you truly know how to be an excellent tester.

0:32 This is the best type of shooting and there is nothing beyond it in power or accuracy. Using this technique Lars Andersen set several speed shooting records and he shoots more than twice as fast as his closest competitors…

Excellent testers are faster and better. Last week I heard professor Chris Verhoef speak about skills in IT and he mentioned that he has seen a factor of 200 difference in productivity between excellent programmers and bad programmers (he called the latter “Timber Smurf”, or “Knutselsmurf” in Dutch).

0:42 … being able to shoot fast is only one of the benefits of the method

Faster testing! Isn’t that what we are after?

0:55 Surprisingly the quiver turned out to be useless when it comes to moving fast. The back quiver was a Hollywood Myth…

The back quiver is a Hollywood myth. It looks cool and may seem handy at first sight, since you can put a lot of arrows in it. Doesn’t this sound like certificates and document-heavy test approaches? The certificates look good on your resume and the artifacts look convenient for helping you structure your testing… but they turn out to be worthless when it comes to testing fast.

1:03 Why? Because modern archers do not move. They stand still firing at a target board.

I see a parallel here with old school testing: testers had a lot of time to prepare in waterfall projects. The basic assumption was that the target wasn’t moving, so it was like shooting at a target board. Although the target always proved to be moving, the testing methods were designed for target boards.

1:27 Placing the arrow left around the bow is not good while you are in motion. By placing your hand on the left side, your hand is on the wrong side of the string. So you need several movements before you can actually shoot…

Making a ton of documentation before starting to test is like several movements before you can actually test.

1:35 From studying old pictures of archers, Lars discovered that some historical archers held their arrow on the right side of the bow. This means that the arrow can be fired in one single motion. Both faster and better!

For many testers, research and study are what is lacking. There is much we can learn from the past, but also from social science, measurement, designing experiments, etc.

1:56 If he wanted to learn to shoot like the master archers of old, he had to unlearn what he had learned…

Learning new stuff, learning how to use heuristics and training real skills, requires testers to unlearn APPLYING techniques.

2:07 When archery was simpler and more natural, it was exactly like throwing a ball. In essence, making archery as simple as possible. It’s harder to learn to shoot this way, but it gives more options and ultimately it is also more fun.

It is hard to learn and it takes a lot of practice to learn to do stuff in the most efficient and effective way. Context-driven testing sounds difficult, but in essence it makes testing as simple as possible. That also means it becomes harder to learn, because it removes all the methodical stuff that slows us down. Those instrumental approaches, which try to put everything in a recipe so it can be applied by people who do not want to practice, make testing slow and ineffective.

2:21 A war archer must have total control over his bow in all situations! He must be able to handle his bow and arrows in a controlled way, under the most varied of circumstances.

Lesson 272 in the book Lessons Learned in Software Testing: “If you can get a black belt in only two weeks, avoid fights”. You have to learn and practice a lot to have total control! That is what we mean by excellent testing: being able to do testing in a controlled way, under the most varied of circumstances. Doesn’t that sound like Rapid Software Testing? RST is the skill of testing any software, any time, under any conditions, such that your work stands up to scrutiny. This is how RST differs from normal software testing.

2:36 … master archers can shoot the bow with both hands. And still hit the target. So he began practicing…

Being able to do the same thing in different ways is a big advantage. Also in testing we should learn to test in as many different ways as possible.

3:15 perhaps more importantly: modern slow archery has led people to believe that war archers only shot at long distances. However, Lars found that they can shoot at any distance. Even up close. This does require the ability to fire fast though.

Modern slow testing has led people to believe that professional testers always need test cases. However, some testers found that they could work without heavyweight test documentation and test cases. Even on very complex or critical systems, even in regulated environments. This does require the ability to test fast though.

3:34 In the beginning archers probably drew arrows from quivers or belts. But since then they started holding the arrows in the bow hand. And later in the draw hand. Taking it to this third level, that of holding the arrows in the draw hand, requires immense practice and skill, and only professional archers, hunters and so on would have had the time for it. … and the only reason Lars is able to do it is that he has spent years practicing intensely.

Practice, practice, practice. And this really makes the difference. I hear people say that context-driven testing is not for everybody, that we have to accept that some testing professionals only want to work 9 to 5. This makes me mad!

I think professional excellence can and should be for everyone! And sure, you need to put a lot of work into it! Compare it to football (or anything else you want to be good at, like solving crossword puzzles, drawing, chess or… archery). It takes a lot of practice to play football in the Premiership or the Champions League. I am convinced that anyone can be a professional football player. But it doesn’t come easily. It demands a lot of effort in learning, drive (intrinsic motivation, passion), the right mindset and choosing the right mentors/teachers. Talent maybe helps, and perhaps you need some talent to be the very best, like Lionel Messi… But dedication, learning and practice will take you a long way. We are professionals! As for that subset of testers who do not want to practice and work hard: in football they would soon end up on the bench, not get a new contract, and disappear to the amateurs.

4:06 The hard part is not how to hold the arrows, but learning how to handle them properly. And to draw and fire in one single motion, no matter what method is used.

Diversity has been key in context-driven testing for many years. As testers we need to learn how to properly use many different skills, approaches, techniques, heuristics…

4:12 It works in all positions and while in motion…

… so we can use them in all situations, even when we are under great pressure and have to deal with huge complexity, confusion, changes, new insights and half answers.

5:17 While speed is important, hitting the target is essential.

Fast testing is great, but doing the right thing, hitting the target, is essential. Context-driven testers know how to analyze and model their context to determine what the problem is that needs to be solved. Knowing the context is essential to doing the right things: discovering the status of the product and any threats to its value effectively, so that ultimately our clients can make informed decisions about it. Context analysis and modelling are some of the essential skills for testers!

There are probably more parallels to testing. Please let me know if you see any more.

 

”Many people have accused me of being fake or have theories on how there’s cheating involved. I’ve always found it fascinating how human it is, to want to disbelieve anything that goes against our world view – even when it’s about something as relatively neutral as archery.” (Lars Andersen)
Categories: Blogs

The Wrought Idea

Hiccupps - James Thomas - Sat, 01/24/2015 - 11:18

So the other day I bleeted about how I like to write to help me collect my thoughts and how that feels like a dialogue through the page.

Somewhat ironically, you might think, I hadn't intended that action to be more than jotting down the realisation I'd just had.  But, of course, as soon as it was out there I began to challenge it, and by proxy myself.

Here's a sample:
  • "When I need to think through an issue, I write." Really? Always?
  • Does getting the ideas down free mental resource for inspection of the ideas? 
  • Does making it concrete mean that it's easier to spot inconsistency? I know people who are adept at maintaining multiple views of a thing. When a different angle of attack is used a different kind of defence is made. The defences are not compatible, but because they are never seen together, this can be overlooked.
  • Why didn't I talk about pictures? I draw a lot too.
  • I recalled that James Lyndsay mentioned the other day that he makes a point of writing down his hypotheses during exploratory testing. If he fails to do that he feels he does a worse job.
  • What about giving some examples - could I make a draft, list the challenges, show the new draft and repeat?
  • I just read a great piece on George Carlin where he says "So I’m drawn to something and start writing about it ... and that’s when the real ideas pounce out, and new ideas, and new thoughts and images, and then bing, ba-bam ba-boom, that’s the creative part."
  • Haven't I been in this area before?
And so I write and right until my thought is wrought.
Image: https://flic.kr/p/aNMhL4
Categories: Blogs

Lies, Damned Lies, and Code Coverage

Sustainable Test-Driven Development - Wed, 01/21/2015 - 20:58
Download the Podcast

As unit testing has gained a strong foothold in many development organizations, many teams are now laboring under a code coverage requirement. 75% - 80% of the code, typically, must be covered by unit tests. Most popular Integrated Development Environments (IDEs) include tools for measuring this percentage, often as part of their testing framework. Let’s ask a
Categories: Blogs

Free Software Tests Are Better Than Free Bananas

Testing TV - Wed, 01/21/2015 - 18:39
There is growing interest in leveraging data mining and machine learning techniques in the analysis, maintenance and testing of software systems. This talk discusses how Google uses such techniques to automatically mine system invariants, uses those invariants in monitoring our systems in real-time and alerts engineers of any potential production problems within minutes. The talk […]
Categories: Blogs

Writing up a storm

Agile Testing with Lisa Crispin - Wed, 01/21/2015 - 16:08

Since publishing More Agile Testing with Janet Gregory, I’ve enjoyed time for writing new articles and participating in interviews. Please see my Articles page for links to these. I’d love to hear your feedback on any of these. Have you tried any of the practices or ideas discussed in the articles or interviews?


Categories: Blogs

TDD and Defects

Sustainable Test-Driven Development - Tue, 01/20/2015 - 23:56
We've said all along that TDD is not really about "testing" but rather about creating an executable form of specification that drives development forward.  This is true, and important, but it does not mean that TDD does not have a relationship to testing.  One interesting issue where there is significant synergy is in our relationship to defects. Two important issues we'll focus on are: when/how
Categories: Blogs

Integrating MediatR with Web API

Jimmy Bogard - Tue, 01/20/2015 - 19:25

One of the design goals I had in mind with MediatR was to limit the 3rd party dependencies (and work) needed to integrate MediatR. To do so, I only take a dependency on CommonServiceLocator. In MediatR, I need to resolve instances of request/notification handlers. Rather than build my own factory class that others would need to implement, I lean on CSL to define this interface:

public interface IServiceLocator : IServiceProvider
{
    object GetInstance(Type serviceType);
    object GetInstance(Type serviceType, string key);
    IEnumerable<object> GetAllInstances(Type serviceType);
    TService GetInstance<TService>();
    TService GetInstance<TService>(string key);
    IEnumerable<TService> GetAllInstances<TService>();
}

But that wasn’t quite enough. I also wanted to support child/nested containers, which meant I didn’t want a single instance of the IServiceLocator. Typically, when you want a component’s lifetime decided by a consumer, you depend on Func<Foo>. It turns out though that CSL already defines a delegate to provide a service locator, aptly named ServiceLocatorProvider:

public delegate IServiceLocator ServiceLocatorProvider();

In resolving handlers, I execute the delegate to get an instance of an IServiceLocator and off we go. I much prefer this approach to defining my own yet-another-factory-interface for people to implement. Just not worth it. As a consumer, you will need to supply this delegate to the mediator.
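
As a rough sketch (not MediatR’s actual source; the HandlerResolver type here is invented for illustration), the resolution pattern being described looks something like this: execute the ServiceLocatorProvider delegate at resolution time, so each resolution goes through whatever IServiceLocator, and therefore whatever child/nested container, is current.

using System;
using System.Collections.Generic;
using Microsoft.Practices.ServiceLocation;

// Illustrative only: executing the ServiceLocatorProvider delegate per resolution
// lets handler lifetimes follow the consumer's (possibly nested) container.
public class HandlerResolver
{
    private readonly ServiceLocatorProvider _serviceLocatorProvider;

    public HandlerResolver(ServiceLocatorProvider serviceLocatorProvider)
    {
        _serviceLocatorProvider = serviceLocatorProvider;
    }

    public object ResolveSingle(Type handlerType)
    {
        // A fresh IServiceLocator for every call.
        return _serviceLocatorProvider().GetInstance(handlerType);
    }

    public IEnumerable<object> ResolveMany(Type handlerType)
    {
        return _serviceLocatorProvider().GetAllInstances(handlerType);
    }
}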

I’ll show an example using StructureMap. The first thing I do is add a NuGet dependency to the Web API IoC shim for StructureMap:

Install-Package StructureMap.WebApi2

This will also bring in the CommonServiceLocator dependency and some files to shim with Web API.


I have the basic building blocks for what I need in order to have a Web API project using StructureMap. The next piece is to configure the DefaultRegistry to include handlers in scanning:

public DefaultRegistry() {
    Scan(
        scan => {
            scan.TheCallingAssembly();
            scan.AssemblyContainingType<PingHandler>();
            scan.WithDefaultConventions();
            scan.With(new ControllerConvention());
            scan.AddAllTypesOf(typeof(IRequestHandler<,>));
            scan.AddAllTypesOf(typeof(IAsyncRequestHandler<,>));
            scan.AddAllTypesOf(typeof(INotificationHandler<>));
            scan.AddAllTypesOf(typeof(IAsyncNotificationHandler<>));
        });
    For<IMediator>().Use<Mediator>();
}

This is pretty much the same code you’d find in any of the samples in the MediatR project. The final piece is to hook up the dependency resolver delegate, ServiceLocatorProvider. Since most/all containers have implementations of the IServiceLocator, it’s really about finding the place where the underlying code creates one of these IServiceLocator implementations and supplies it to the infrastructure. In my case, there’s the Web API IDependencyResolver implementation:

public IDependencyScope BeginScope()
{
    IContainer child = this.Container.GetNestedContainer();
    return new StructureMapWebApiDependencyResolver(child);
}

I modify this to use the current nested container and attach the resolver to this:

public IDependencyScope BeginScope()
{
    var resolver = new StructureMapWebApiDependencyResolver(CurrentNestedContainer);

    ServiceLocatorProvider provider = () => resolver;

    CurrentNestedContainer.Configure(cfg => cfg.For<ServiceLocatorProvider>().Use(provider));
    
    return resolver;
}

This is also the location where I’ll attach per-request dependencies (NHibernate, EF etc.). Finally, I can use a mediator in a controller:

public class ValuesController : ApiController
{
    private readonly IMediator _mediator;

    public ValuesController(IMediator mediator)
    {
        _mediator = mediator;
    }

    // GET api/values
    public IEnumerable<string> Get()
    {
        var result = _mediator.Send(new Ping());

        return new string[] { result.Message };
    }
}
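
For completeness, here is roughly what the Ping request and PingHandler referenced earlier might look like. This is a sketch against the early MediatR request/handler interfaces; the Pong type and the message text are invented for illustration.

using MediatR;

// Illustrative request/response/handler trio matching the controller above.
public class Ping : IRequest<Pong> { }

public class Pong
{
    public string Message { get; set; }
}

public class PingHandler : IRequestHandler<Ping, Pong>
{
    public Pong Handle(Ping message)
    {
        return new Pong { Message = "Pong" };
    }
}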

That’s pretty much it. How you need to configure the mediator in your application might be different, but the gist is to configure the ServiceLocatorProvider delegate dependency to return the “thing that the framework uses for IServiceLocator”. What that is depends on your context, and unfortunately changes with every framework out there.

In my example above, I’m preferring to configure the IServiceLocator instance to be the same instance as the IDependencyScope instance, so that any handler instantiated is from the same composition root/nested container as whatever instantiated my controller.

See, containers are easy, right?

(crickets)


Categories: Blogs

Welcome Max Guernsey

Sustainable Test-Driven Development - Tue, 01/20/2015 - 00:56
Max has joined Net Objectives, as some of you may know, as a trainer, coach, and mentor.  We've been friends with Max for a long while, and he has been a contributor to this blog and to the progress of our thinking in general. So, we're adding him to the official authorship here and when (if ever :)) we get this thing written, he will be co-author with Amir and me. I know this has been terribly
Categories: Blogs

State of the Art

Hiccupps - James Thomas - Fri, 01/16/2015 - 08:17
A trend is better than a snapshot, right?

That's Joel Montvelisky, introducing the State of Testing Survey 2015.

I'm certainly in favour of data and I'd agree that a trend can be better than a snapshot. But if you want to know the state of some system right now for the investigation you're performing right now and you've no reason to think that right now is related to back then, then perhaps right now you'll take the snapshot, right?

Openness, and openness to challenge, was one of the things I liked most about the previous, inaugural survey. In the discussion between Jerry Weinberg and Fiona Charles about the results (transcript here), Weinberg's opening remarks include:
We need to be careful on how we interpret this data [...] One way to look at the survey is that it’s giving information about what information we should be getting. I'm looking forward to seeing what was learned.
Image: https://flic.kr/p/oq5E3x
Categories: Blogs