
Feed aggregator

JUnit Testing: Getting Started and Getting the Most out of It

Sauce Labs - Tue, 07/26/2016 - 16:30

If you’re a Java developer, you probably know and love JUnit. It’s the go-to tool of choice for unit testing (and, as we will see below, other types of testing as well) for Java apps.

In fact, JUnit is so popular that it’s the most commonly included external library on Java projects on GitHub, according to a 2013 analysis. No other Java testing framework comes close in popularity to JUnit.

But while JUnit is widely used, are all of the projects that deploy it getting the most out of it? Probably not. Here’s a look at what you should be doing to use JUnit to maximal effect.

JUnit Basics

First, though, let’s go over the basics of JUnit, just in case you haven’t used it before.


JUnit supports any platform on which Java runs, and it’s pretty simple to install. Simply grab the junit.jar and hamcrest-core.jar files from GitHub and place them in your test class path.

Alternatively, if you use a build tool such as Maven or Gradle, you can instead declare a dependency on junit:junit in the test scope.
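In a Maven pom.xml, such a declaration might look like this (version 4.12 shown as an example; use whatever the current release is):

```xml
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.12</version>
  <scope>test</scope>
</dependency>
```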


Basic Usage

With JUnit installed, you can begin writing tests. This process has three main steps.

First, create a class, which should look something like this:

package junitfaq;
import org.junit.*;
import static org.junit.Assert.*;
import java.util.*;
public class SimpleTest {

Second, write a test method, such as:

   @Test
   public void testEmptyCollection() {
      Collection collection = new ArrayList();
      assertTrue(collection.isEmpty());
   }

… and third, run the test! With the JUnit and Hamcrest jars on your classpath, you can do that from the console with:

java org.junit.runner.JUnitCore junitfaq.SimpleTest

There’s lots more you can do, of course. For all the nitty-gritty details of writing JUnit tests, check out the API documentation.

Getting the Most out of JUnit

Now you know the basics of JUnit. But if you want to get the most out of it on real-world projects, there are some pointers to keep in mind in order to maximize testing performance and flexibility. Here are the two big ones:

  • Use parallel testing, which speeds up your testing enormously. Unfortunately, JUnit doesn’t have a parallel testing option built-in. However, there’s a Sauce Labs article dedicated to JUnit parallel testing, which explains how to do it using the Sauce OnDemand plugin.
  • Despite the tool’s name, JUnit’s functionality is not strictly limited to unit testing. You can also do integration and acceptance tests using JUnit, as explained here.

If you use Eclipse for your Java development, you may also want to check out Denis Golovin’s tips for making JUnit tests run faster in Eclipse. Most of his ideas involve tweaks to the Eclipse environment rather than JUnit-specific changes, but anything that makes Eclipse faster is a win in my book.

And of course, don’t forget Sauce Labs’ guide to testing best practices. They’re also not JUnit-specific, but they apply to JUnit testing, and they’re good to know whether you use JUnit or not.

Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, Agile infrastructure and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO.

Categories: Companies

Say No to (More) Selenium Tests

Software Testing Magazine - Tue, 07/26/2016 - 14:00
How many times do we test the same things at multiple layers, multiple levels, adding time to the build process and testing cycle, delaying the feedback? We know what to test and how to test, but what is the right place to test it? This talk will demonstrate how we, as QAs, can identify which tests can be classified as unit tests, integration tests, and functional tests. Using a case study, we will see how each component can be tested as part of unit testing; the integration of different parts and the functioning of a software system as a whole; and how functional tests fit into this big picture. We will then bring all these tests together to understand and build the testing pyramid, and see how it enables us to build the right testing framework with fewer Selenium functional tests.
Categories: Communities

Join me for Jenkins World 2016

Jenkins World, September 13-15 at the Santa Clara Convention Center (SCCC), takes our 6th annual community user conference to a whole new level. It will be one big party for everything Jenkins, from users to developers, from the community to vendors. There will be more of what people always loved in past user conferences, such as technical sessions from users and developers, the Ask the Experts booth and the plugin development workshop, and even more has been added, such as pre-conference Jenkins training, workshops and the opportunity to get certified for free. Jenkins World is not to be missed. For me, the best part of Jenkins World is the opportunity to meet...
Categories: Open Source

3 top performances with 'Ideas Worth Sharing' from TEDx Wilmington

HP LoadRunner and Performance Center Blog - Tue, 07/26/2016 - 01:58



How is TEDx working in your local community to boost performance and adoption of technology startups and innovation? In this blog post you will find a summary of the key results and where to learn more.




Categories: Companies

Results: Uncovering the value of Performance Engineering to Technology Team(s)

HP LoadRunner and Performance Center Blog - Mon, 07/25/2016 - 21:18


What value can you bring to your technology team(s) with performance engineering? Check out the replay of this webinar to learn an expert panel's perspective, which you can put to use today. Read more now...

Categories: Companies

TestingWhiz Version 5.1 Released

Software Testing Magazine - Mon, 07/25/2016 - 12:00
TestingWhiz, a test automation solutions provider, has announced the release of Version 5.1 of its tool. This version includes enhancements and updates that complement the previous version. TestingWhiz Version 5.1 has some new features to make test automation easier and more effective:
  • iOS Mobile Native & Web for Simulators: With version 5.1, users get support for executing tests of iOS native apps and iOS web apps on simulators.
  • Content Verification: A new operation is available to handle content verification scenarios while testing applications.
  • Object Resolution with CSS Path: Object resolution with CSS path has been added to enhance object identification at runtime.
  • Eclipse RCP Upgrade: TestingWhiz 5.1 comes upgraded with the latest version of Eclipse RCP for better stability and security.
  • Object Handling: The capability to define your own objects and to change existing locators.
Besides the above updates, TestingWhiz Version 5.1 comes with several enhancements, UI improvements, and bug fixes to provide a seamless testing experience to users.
Categories: Communities

Stepping out of my comfort zone

Agile Testing with Lisa Crispin - Mon, 07/25/2016 - 00:27

The 30 days of testing challenges are energizing me! Day 23’s challenge is to help someone test better. I’m going to combine that one with stepping out of my comfort zone on day 14 by sharing what I learned here. It may help you test better!

Recently, my awesome teammate Chad Wagner and I were trying to reproduce a problem found by another teammate. Chad and I are testers on the Pivotal Tracker team. One of the developers on our team reported that he was hitting the backspace key while editing text in a story when another project member made an update, and this caused his backspace key to act as a browser back button. He lost his text changes, which was annoying. In trying to reproduce this, we found that whenever focus goes outside the text field, the backspace key indeed acts as a browser back button. But was that what happened in this case? It was hard to be sure which element had focus.

Chad wanted a way to see which element is in focus at any given time to help with trying to repro this issue. He found the :focus pseudo-class in CSS, which seemed helpful. He also found a bookmarklet from Paul Irish to inject new CSS rules. With help from a developer teammate and our Product Owner, Chad made the following bookmarklet:

javascript:(function()%7Bvar newcss%3D":focus { outline:5px dashed red !important} .honeypot:focus { opacity:1 !important; width: 10px !important; height: 10px !important; outline:5px dashed red !important}"%3Bif("%5Cv"%3D%3D"v")%7Bdocument.createStyleSheet().cssText%3Dnewcss%7Delse%7Bvar tag%3Ddocument.createElement("style")%3Btag.type%3D"text/css"%3Bdocument.getElementsByTagName("head")%5B0%5D.appendChild(tag)%3Btag%5B(typeof"string")%3F"innerText":"innerHTML"%5D%3Dnewcss%7D%7D)()%3B

Red highlighting shows focus is currently in the Description field


This bookmarklet puts red highlighting around whatever field, button or link on which your browser session has focus, as shown in the example.
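Decoded from its URL-encoded form, the bookmarklet's logic looks roughly like the sketch below (a simplification: the original also contains a document.createStyleSheet branch for old versions of IE, omitted here):

```javascript
// Build the CSS rules the bookmarklet injects: a red dashed outline
// around whatever element currently has focus.
function buildFocusCss() {
  return ':focus { outline: 5px dashed red !important } ' +
         '.honeypot:focus { opacity: 1 !important; width: 10px !important; ' +
         'height: 10px !important; outline: 5px dashed red !important }';
}

// Create a <style> tag, attach it to <head>, and fill it with the rules.
// Takes the document as a parameter so it can be exercised outside a browser.
function injectFocusOutline(doc) {
  var tag = doc.createElement('style');
  tag.type = 'text/css';
  doc.getElementsByTagName('head')[0].appendChild(tag);
  tag.innerHTML = buildFocusCss();
  return tag;
}

// In a browser you would simply call: injectFocusOutline(document);
```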

What does this have to do with my comfort zone?

Chad is always trying new things and dragging me out of my comfort zone. He told me about the bookmarklet. I didn’t even know what a bookmarklet was, so I had to start searching around. Chad sent me the code for the bookmarklet, and I tried unsuccessfully to use it. I was working from home that day, so we got on Zoom and Chad showed me how to use it. I read the blog posts (listed above) that he had found.

These fancy tools tend to scare me, because I’m afraid I won’t understand them. And indeed, I do not understand this very well. So we need to find time so that Chad can pair with me and explain more about this bookmarklet. My understanding is that this could work on any web page, but I haven’t been able to get it to work with another one.  So this will be getting me out of my comfort zone again soon.

Can you try it?

If being able to see what has focus on your web page would help you test better, maybe you can try this out, and if you can get it to work, maybe you can help me. Day 24’s challenge is to connect with someone new, so let’s connect! And when I learn more, which I’ll try to do tomorrow, I’ll update this post.

Team effort FTW

Story for built-in tool

Our PO who helped Chad get this bookmarklet working thinks it’s such a good idea that he added and prioritized a story in our backlog to allow users to enable a mode to show what has focus in Tracker. The team thinks this is a cool idea, and it will be done soon. So I won’t have to worry about the bookmarklet for that, but I still want to learn more about how I can use CSS rules and bookmarklets to help with testing.

The post Stepping out of my comfort zone appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

The power of load testing IoT: requirements and tips

HP LoadRunner and Performance Center Blog - Sat, 07/23/2016 - 00:52


With the growth of IoT, it is vital to make sure your network and servers can handle the load, using cloud load testing.

Keep reading to learn how to generate massive amounts of load to emulate IoT traffic.

Categories: Companies

Testing Talk Interview Series- JetBrains’ YouTrack

PractiTest - Fri, 07/22/2016 - 16:00


YouTrack is a geeky bug tracker, designed to save developers’ and development teams’ time.

1. Tell us about yourself. Think about two interesting things that we want to hear about you.

My name is Valerie Andrianova, and I am a product marketing manager at JetBrains, specifically working on YouTrack, our issue tracker, and on Hub,  our user management system to connect all the systems together.

My background… I have a technological education. My specialty is mathematics and theoretical mechanics, but surprisingly I am not a developer.

Categories: Companies

Latest TeamPulse R1 2015 Release

Telerik TestStudio - Fri, 07/22/2016 - 03:46
TeamPulse R1 2015 is here. With this release, we aimed to simplify the project management experience even further and to make TeamPulse users even more productive. Some of the new features we are launching include cross-project relation, SVN check-in integration, new reports and enhancements to existing reports. Fani Kondova
Categories: Companies

User Experience Monitoring on Hybrid Applications

What are hybrid applications? Google says: “Hybrid applications are web applications (or web pages) in the native browser, such as UIWebView in iOS and WebView in Android (not Safari or Chrome). Hybrid apps are developed using HTML, CSS and Javascript, and then wrapped in a native application using platforms like Cordova.” I don’t think I could […]

The post User Experience Monitoring on Hybrid Applications appeared first on about:performance.

Categories: Companies

Go, Ape

Hiccupps - James Thomas - Thu, 07/21/2016 - 22:44

A couple of years ago I read The One Minute Manager by Ken Blanchard on the recommendation of a tester on my team. As The One Line Reviewer I might write that it's an encouragement to do some generally reasonable things (set clear goals, monitor progress towards them, and provide precise and timely feedback) wrapped up in a parable full of clumsy prose and sprinkled liberally with business aphorisms.

Last week I was lent a copy of The One Minute Manager Meets the Monkey, one of what is clearly a not insubstantial franchise that's grown out of the original book. Unsurprisingly perhaps, given that it is part of a successful series, this book is similar to the first: another shop floor fable, more maxims, some sensible suggestions.

On this occasion, the advice is to do with delegation and, specifically, about managers who pull work to themselves rather than sharing it out. I might summarise the premise as:
  • Managers, while thinking they are servicing their team, may be blocking them.
  • The managerial role is to maximise the ratio of managerial effort to team output.
  • Which means leveraging the team as fully as possible.
  • Which in turn means giving people responsibility for pieces of work.

And I might summarise the advice as:
  • Specify the work to be done as far as is sensible.
  • Make it clear who is doing what, and give work to the team as far as is sensible.
  • Assess risks and find strategies to mitigate them.
  • Review on a schedule commensurate with the risks identified.

And I might describe the underlying conceit as: tasks and problems are monkeys to be passed from one person's back to another. (See Management Time: Who’s Got the Monkey?)  And also as: unnecessary.

So, as before, I liked the book's core message - the advice, to me, is a decent default - but not so much the way it is delivered. And, yes, of course, I should really have had someone read it for me.
Image: Amazon
Categories: Blogs

Schedule and Email TestTrack Reports Automatically

The Seapine View - Thu, 07/21/2016 - 17:30

Here at Seapine, we’re huge fans of automating as many tasks as possible. For example, sometimes it’s useful to schedule a TestTrack report to run and automatically be emailed to a manager or other stakeholder. Let’s take a look at how to make that happen.

It’s not too difficult to set up. TestTrack’s SOAP interface already allows you to run most reports. Combine that with Blat for email capabilities and your operating system’s job scheduler, then glue the pieces together with a little bit of scripting to get scheduled, emailed TestTrack reports. The .zip file includes a script that does just that. You can also trigger the script using TestTrack’s executable triggers feature if you need reports run when something happens in TestTrack.

I’m going to take this opportunity to demonstrate a less well-known method of using SOAP than the typical HTTP server method—calling the SOAP CGI application from the command line instead. This can sometimes simplify development and speed up the resulting app. See the subroutines “RunSoapXML” and “RunSoapXMLFile” in the script in the .zip file.

Like any other command line tool, you can feed the SOAP CGI input from a file—in this case, a file containing XML. When using the SOAP CGI in this way, the web server is completely bypassed. That can speed up things like local SOAP triggers by skipping some network and web server overhead.

This method isn’t appropriate for all circumstances—for instance, it may require an extra installation of the SOAP CGI, which then has to be upgraded each time TestTrack is.


In this example, I’ll be using the Windows OS. (A Linux implementation would need to substitute sendmail for Blat, cron for Task Scheduler, flip “\” to “/”, and remove “.exe” from various file names. An OS X implementation would need to use the standard HTTP method of SOAP invocation, because there’s no SOAP CGI for OSX.)

  1. Create “C:\scheduledReport\”
  2. Download Blat into the directory from step 1.
  3. Locate ttsoapcgi.exe (normally installed to your web server’s “scripts” or “cgi-bin” directory) and copy it to the directory from Step 1.
  4. Download and unzip the demo script and XML into the directory from Step 1.
  5. If you haven’t already, install a Perl interpreter.

After download, configure Blat using “blat -install < SMTP server address > < sender’s email address > [< try > [< port > [< profile >]]] [-q]” at the command line.

If you haven’t previously configured ttsoapcgi.exe on the machine you’re using, you’ll need to set up your TestTrack server connection information using the registry utility.

Once all that’s done, create a configuration text file consisting of the following five lines:

  1. A comma-separated list of recipient email addresses
  2. A TestTrack username that has permissions to log in using SOAP and run reports
  3. The password for the above username
  4. The name of the TestTrack project where the report resides
  5. The name of the report to run (Note: some report types, such as live charts, are not supported by SOAP)

For example:

< recipient email addresses >
< TestTrack username >
< password >
Sample Project
Issue by Assigned User Report

Now is a good time to test that everything is set up correctly. Open the command line, switch to the above directory, and run the Perl script, passing the configuration file name as its argument: perl < script name > "< config file name >"

If all goes well, the recipient(s) should receive a fresh copy of the report by email.



Once we have a successful test, all that’s left is to schedule a task so the OS runs this report for us periodically.

(Screenshots: the task trigger and task action settings in Task Scheduler.)

That’s all! Automatic reports will now be emailed on the specified schedule.

Categories: Companies

The Sauce Journey – “Forming, Storming, Norming, Performing”

Sauce Labs - Thu, 07/21/2016 - 16:30

A few years ago, while working elsewhere, I came upon a scene of two engineers literally screaming at each other over the top of their cubicle walls about some aspect of a project. “Oh good,” I thought, “they’ve reached the storming stage, things can only get better from here.”

As I talked about in my previous post, forming Scrum teams leads to emergent behavior on the part of individuals as they adjust to the new regime. The same is true of small teams; once formed, the way in which individuals interact with each other tends to undergo a sequence of changes as well. The behavioral scientist Bruce Tuckman labeled these stages as Forming-Storming-Norming-Performing. As unpleasant as the transitions from stage to stage might be, all teams must progress through them in order to reach the point where they are truly self-managing.

In the forming stage, teams cohere in relation to external influences, like goals and tasks, but tend to remain focused on themselves as individuals. This is reinforced in the way that we typically constitute technical teams, where each person is recruited for their individual technical strengths. Forming is typically a stage that is driven by intellectual and analytical considerations, since it focuses on defining the project, identifying tasks, and assigning team members who can fulfill them.

Storming and norming, in contrast, are about engaging the emotional intelligence of team members to bring them together as a team. In the storming stage, as Tuckman put it in his 1965 paper [1], “participants form opinions about the character and integrity of the other participants and feel compelled to voice these opinions if they find someone shirking responsibility or attempting to dominate. Sometimes participants question the actions or decision of the leader…” The storming stage can quite frankly be very upsetting, and the project will seem like a complete disaster in the midst of it. There is no way, however, to get to norming without it. For a team to become truly self-managing, they must learn how to surface conflict and disagreement, and resolve it among themselves. For this reason, it’s critical for managers to continue to provide guidance in decision-making, but leave it up to the team members themselves to resolve conflicts.

By the time teams have reached norming, they’ve developed the emotional intelligence that enables them to understand their team members better, and also helps them see themselves as a team rather than a collection of skills. Having passed through conflict and resolution, a sense of trust and even intimacy emerges. You can tell when a team has reached this phase by the way they behave during retrospectives – the introverts will begin to speak out, and the extroverts will begin to listen. Finally, when teams reach performing, you have full open communication, and the need for active management falls away.

High performing teams are those that have learned how to effectively communicate among themselves. In the end, Scrum, and all its rituals and practices, is about trying to create an environment where communication flourishes, rather than one which is governed by dictate. This environment doesn’t just “happen,” however; it emerges as individuals, and teams, go through a transformative process. Some teams may never get beyond storming, and teams that have progressed may regress as team dynamics change. The important thing to remember is that, on your own particular journey, you must learn to see conflict as a dynamic and potentially positive force, rather than as something to be avoided or nullified.

At the present moment, our teams at Sauce are in a mix of stages. Some teams that were together before we even started Scrum have seamlessly flowed into the norming stage and are well on their way to performing. New teams that were formed when we transitioned to Scrum have happily moved into the storming phase. This is especially the case in teams where we have people new to the company, and thus new to the culture. Other teams have gone back to the forming stage, because they have either been reconstituted or have acquired new members. One interesting observation is that we have many (15+) remote engineers, and contrary to intuition, these teams have moved more quickly into the storming and norming stages. My theory is that this is because emotional barriers are more easily crossed when not physically face to face, but I’ll talk about this more in a future post on the challenges of global Scrum.

Joe Alfaro is VP of Engineering at Sauce Labs. This is the fifth post in a series dedicated to chronicling our journey to transform Sauce Labs from Engineering to DevOps. Read the first post here.

References

1. ↑ “Tuckman’s stages of group development – Wikipedia, The Free Encyclopedia.” 2005. 20 Jun. 2016.
Categories: Companies

Get a new way to manipulate JSON with load testing in LoadRunner

HP LoadRunner and Performance Center Blog - Thu, 07/21/2016 - 08:48


JavaScript Object Notation (JSON) is a lightweight data-interchange format. It is an easier alternative to the more verbose XML and is used widely in the web world. Keep reading to find out how HPE LoadRunner supports it.

Categories: Companies

Integrating AutoMapper with ASP.NET Core DI

Jimmy Bogard - Wed, 07/20/2016 - 18:30

Part of the release of ASP.NET Core is a new DI framework that’s completely integrated with the ASP.NET pipeline. Previous ASP.NET frameworks either had no DI or used service location in various formats to resolve dependencies. One of the nice things about a completely integrated container (not just a means to resolve dependencies, but to register them as well), means it’s much easier to develop plugins for the framework that bridge your OSS project and the ASP.NET Core app. I already did this with MediatR and HtmlTags, but wanted to walk through how I did this with AutoMapper.

Before I got started, I wanted to understand what the pain points of integrating AutoMapper with an application are. The biggest one seems to be the Initialize call, most systems I work with use AutoMapper Profiles to define configuration (instead of one ginormous Initialize block). If you have a lot of these, you don’t want to have a bunch of AddProfile calls in your Initialize method, you want them to be discovered. So first off, solving the Profile discovery problem.

Next is deciding between the static versus instance way of using AutoMapper. It turns out that most everyone really wants to use the static way of AutoMapper, but this can pose a problem in certain scenarios. If you’re building a resolver, you’re often building one with dependencies on things like a DbContext or ISession, an ORM/data access thingy:

public class LatestMemberResolver : IValueResolver<object, object, User> {
  private readonly AppContext _dbContext;

  public LatestMemberResolver(AppContext dbContext) {
    _dbContext = dbContext;
  }

  public User Resolve(object source, object destination, User destMember, ResolutionContext context) {
    return _dbContext.Users.OrderByDescending(u => u.SignUpDate).FirstOrDefault();
  }
}

With the new DI framework, the DbContext would be a scoped dependency, meaning you’d get one of those per request. But how would AutoMapper know how to resolve the value resolver correctly?

The easiest way is to also scope an IMapper to a request, as its constructor takes a function to build value resolvers, type converters, and member value resolvers:

IMapper mapper 
  = new Mapper(Mapper.Configuration, t => ServiceLocator.Resolve(t));

The caveat is you have to use an IMapper instance, not the Mapper static method. There’s a way to pass in the constructor function to a Mapper.Map call, but you have to pass it in *every single time*, and thus not so useful:

Mapper.Map<User, UserModel>(user, 
  opt => opt.ConstructServicesUsing(t => ServiceLocator.Resolve(t)));

Finally, if you’re using AutoMapper projections, you’d like to stick with the static initialization. Since the projection piece is an extension method, there’s no way to resolve dependencies other than passing them in, or service location. With static initialization, I know exactly where to go to look for AutoMapper configuration. Instance-based, you have to pass in your configuration to every single ProjectTo call.

In short, I want static initialization for configuration, but instance-based usage of mapping. Call Mapper.Initialize, but create mapper instances from the static configuration.

Initializing the container and AutoMapper

Before I worry about configuring the container (the IServiceCollection object), I need to initialize AutoMapper. I’ll assume that you’re using Profiles, and I’ll simply scan through a list of assemblies for anything that is a Profile:

private static void AddAutoMapperClasses(IServiceCollection services, IEnumerable<Assembly> assembliesToScan)
{
    assembliesToScan = assembliesToScan as Assembly[] ?? assembliesToScan.ToArray();

    var allTypes = assembliesToScan.SelectMany(a => a.ExportedTypes).ToArray();

    var profiles = allTypes
        .Where(t => typeof(Profile).GetTypeInfo().IsAssignableFrom(t.GetTypeInfo()))
        .Where(t => !t.GetTypeInfo().IsAbstract);

    Mapper.Initialize(cfg =>
    {
        foreach (var profile in profiles)
        {
            cfg.AddProfile(profile);
        }
    });
The assembly list can come from a list of assemblies or types passed in to mark assemblies, or I can just look at what assemblies are loaded in the current DependencyContext (the thing ASP.NET Core populates with discovered assemblies):

public static void AddAutoMapper(this IServiceCollection services)
{
    services.AddAutoMapper(DependencyContext.Default);
}

public static void AddAutoMapper(this IServiceCollection services, DependencyContext dependencyContext)
{
    AddAutoMapperClasses(services, dependencyContext.RuntimeLibraries
        .SelectMany(lib => lib.GetDefaultAssemblyNames(dependencyContext).Select(Assembly.Load)));
}

Next, I need to add all value resolvers, type converters, and member value resolvers to the container. Not every value resolver etc. might need to be initialized by the container, and if you don’t pass in a constructor function it won’t use a container, but this is just a safeguard just in case something needs to resolve these AutoMapper service classes:

var openTypes = new[]
{
    typeof(IValueResolver<,,>),
    typeof(IMemberValueResolver<,,,>),
    typeof(ITypeConverter<,>)
};

foreach (var openType in openTypes)
{
    foreach (var type in allTypes
        .Where(t => t.GetTypeInfo().IsClass)
        .Where(t => !t.GetTypeInfo().IsAbstract)
        .Where(t => t.ImplementsGenericInterface(openType)))
    {
        services.AddTransient(type);
    }
}

I loop through every class and see if it implements the open generic interfaces I’m interested in, and if so, registers them as transient in the container. The “ImplementsGenericInterface” doesn’t exist in the BCL, but it probably should :) .

Finally, I register the mapper configuration and mapper instances in the container:

services.AddSingleton<IConfigurationProvider>(sp => Mapper.Configuration);
services.AddScoped<IMapper>(sp =>
  new Mapper(sp.GetRequiredService<IConfigurationProvider>(), sp.GetService));

While the configuration is static, every IMapper instance is scoped to a request, passing in the constructor function from the service provider. This means that AutoMapper will get the correct scoped instances to build its value resolvers, type converters etc.

With that in place, it’s now trivial to add AutoMapper to an ASP.NET Core application. After I create my Profiles that contain my AutoMapper configuration, I instruct the container to add AutoMapper (now released as a NuGet package from the AutoMapper.Extensions.Microsoft.DependencyInjection package):

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();

    services.AddAutoMapper();
}

And as long as I make sure and add this after the MVC services are registered, it correctly loads up all the found assemblies and initializes AutoMapper. If not, I can always instruct the initialization to look in specific types/assemblies for Profiles. I can then use AutoMapper statically or instance-based in a controller:

public class UserController {
  private readonly IMapper _mapper;
  private readonly AppContext _dbContext;

  public UserController(IMapper mapper, AppContext dbContext) {
    _mapper = mapper;
    _dbContext = dbContext;
  }

  public IActionResult Index() {
    var users = _dbContext.Users.ProjectTo<UserIndexModel>();
    return View(users);
  }

  public IActionResult Show(int id) {
    var user = _dbContext.Users.Where(u => u.Id == id).Single();
    var model = _mapper.Map<User, UserIndexModel>(user);
    return View(model);
  }
}

The projections use the static configuration, while the instance-based uses any potential injected services. Just about as simple as it can get!

Other containers

While the new AutoMapper extensions package is specific to ASP.NET Core DI, it’s also how I would initialize and register AutoMapper with any container. Previously, I would lean on DI containers for assembly scanning purposes, finding all Profile classes, but this had the unfortunate side effect that Profiles could themselves have dependencies – a very bad idea! With the pattern above, it should be easy to extend to any other DI container.

Categories: Blogs

Integrate Automated Testing into Any Continuous Integration Process

Ranorex - Wed, 07/20/2016 - 09:24

Long gone is the time of waterfall’s strictly separated development & testing phases. Today, it’s all about fast feedback, quick iterations and frequent releases at a previously unseen velocity. It requires an agile methodology to keep up with the high demands. Your team’s success depends on a supporting infrastructure with the right tooling. Without any doubt, automation plays an essential role here. Our tip: Integrate test automation into your continuous integration (CI) process.

We wouldn’t want you to waste precious time if you already have your development environment set up. That’s why you can integrate Ranorex into any continuous integration process. Let’s have a closer look at the benefits of integrating test automation into your CI system, and how you can do it:

[Figure: continuous integration automated testing overview]

Automated testing and continuous integration

The idea of continuous integration is to frequently promote code changes and rapidly get feedback about the impact these changes have on the application or system. Including test automation in the development cycle enables you to automatically test each incremental code change.

Essentially, every time a developer commits code changes to a version control system (VCS) such as Git or TFVC, the CI system triggers a build of both the application under test and the Ranorex test automation project. The resulting test automation executable then runs against the application under test.

To evaluate the outcome of the automated test, the continuous integration tool examines the return value of the executable or its output text (e.g. “TEST FAILED” for failure). With Ranorex, the return value ‘0′ signals the successful execution of the test script, while the return value ‘-1′ signals a failure. Each team member automatically receives a notification about a finished build. This notification includes build logs as well as a test execution report.
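As a sketch of how a CI step could act on that return value (the `run_and_report` helper and the echoed strings are hypothetical conveniences, not part of Ranorex; a POSIX shell is assumed):

```shell
# Hypothetical helper: run the test executable passed as arguments, then map its
# exit status to the pass/fail text a CI tool might scan for (0 = success, else failure).
run_and_report() {
    "$@"
    status=$?
    if [ "$status" -eq 0 ]; then
        echo "TEST PASSED"
    else
        echo "TEST FAILED"
    fi
    return "$status"
}

# Example invocation (placeholder path):
#   run_and_report ./TestCIProject.exe /zr /zrf:Reports/Report.rxzlog
```

The non-zero return status alone is enough for most CI systems to mark the step as failed; the echoed text is only for tools that scan the build log.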

Advantages of integrating Ranorex into your CI system:
  • As each code change is immediately tested, potentially introduced bugs are found faster, which ultimately makes it easier to fix them.
  • The test automation report enhances transparency, as each team member will receive instant feedback about the state of code.
  • There are no integration problems, as Ranorex can be integrated into any CI tool.
Setting up Ranorex test automation in your CI system

Note: You have to install Ranorex on each machine you’d like to execute Ranorex tests on. You’ll need a valid license to do so. Please find more information about Ranorex licenses on our dedicated Pricing page.

Each committed change in the application under test and the test automation project should be automatically tested. In other words, every change should trigger these 3 steps:

  • building the application under test
  • building the Ranorex test suite
  • executing the Ranorex test suite

[Figure: continuous integration automated testing in detail]

First, you need to manually set up these steps in your CI system.

1. Build the application under test

The first build step should generate an executable of your application under test. This executable should later be triggered from the Ranorex test suite project.
Thus, add a build step which will build your application under test (e.g. MSBuild build step, Ant build step, …).

2. Build the Ranorex test suite

In this second step, you’ll need to generate an executable to automate your application under test. To do so, add a build step (MSBuild or Visual Studio) and choose the project file (*.csproj) of your Ranorex project which should be built.

3. Execute the Ranorex test suite

The third step should execute the previously created executables. Simply add an execution step triggering the *.exe file of the test automation project and define the command line arguments if needed.

The test execution is now triggered on the same system the projects were built on. If you want to trigger the execution on another system, you need to deploy the built executables and all connected files to that system. Please make sure to execute the application under test and the Ranorex test suite in a desktop session, not a console session.

Automated testing of frequent code changes

If the code of your application under test or your test automation project changes frequently, it doesn’t make sense to run the entire test suite including all test cases with every build. Instead, you should run only those test cases that are affected by the changes. How? Run configurations!
You can add and edit run configurations directly in the test suite (see user guide section ‘Running a Test Suite’).

You can trigger a run configuration using a command line argument. The following command line, for example, will run the test suite executable ‘TestCIProject’ with the run configuration (/rc) ‘SmokeTest’ and generate a zipped report file (/zr /zrf) ‘Report.rxzlog’ in the folder ‘/Reports/’.

TestCIProject.exe /rc:SmokeTest /zr /zrf:Reports/Report.rxzlog

Interested in more command line arguments? You can find more in the user guide section ‘Running Tests without Ranorex Studio’.

Test automation report – the importance of feedback

“No news is good news” is definitely not true for agile teams. It’s important that everyone in a team – whether it is a developer or tester – knows about the state of the code and, thus, the outcome of the automated test run. It really couldn’t be any easier: Simply add a post build action which sends a mail to your team members with the build log and the generated zipped report attached.

Integrate Ranorex into a specific CI system:

You’re using a specific CI tool? Whether it’s Bamboo, Jenkins, HP Quality Center, TeamCity or Microsoft Test Manager – check out the section below to find a detailed instruction on how to integrate Ranorex into your CI tool!

As you can see, it’s easy to integrate Ranorex test automation in your continuous integration system. Each code change in your application under test and your test automation project will be automatically tested, which enhances transparency and enables you to find bugs faster.

You want to know about the benefits of integrating Ranorex into your development environment? Try it out! Download the full-featured 30-day Ranorex trial and see the benefits for yourself! Have fun integrating!

Download Free Trial

The post Integrate Automated Testing into Any Continuous Integration Process appeared first on Ranorex Blog.

Categories: Companies

MediatR Extensions for Microsoft Dependency Injection Released

Jimmy Bogard - Tue, 07/19/2016 - 21:07

To help those building applications using the new Microsoft DI libraries (used in Orleans, ASP.NET Core, etc.), I pushed out a helper package to register all of your MediatR handlers into the container.


To use, just add the AddMediatR method to wherever you have your service configuration at startup:

public void ConfigureServices(IServiceCollection services)
{
    // Scan for handlers in the assembly containing Startup
    // (specific assemblies or handler types can be passed instead)
    services.AddMediatR(typeof(Startup));
}

You can either pass in the assemblies where your handlers are, or you can pass in Type objects from assemblies where those handlers reside. The extension will add the IMediator interface to your services, all handlers, and the correct delegate factories to load up handlers. Then in your controller, you can just use an IMediator dependency:

public class HomeController : Controller {
  private readonly IMediator _mediator;

  public HomeController(IMediator mediator) {
    _mediator = mediator;
  }

  public IActionResult Index() {
    var pong = _mediator.Send(new Ping { Value = "Ping" });
    return View(pong);
  }
}

And you’re good to go. Enjoy!

Categories: Blogs

The Reinventing Testers Week NYC, September 25-29 2016, New York, USA

Software Testing Magazine - Tue, 07/19/2016 - 17:39
The Reinventing Testers Week in New York is a unique series of events focused on software testing. It includes a multitrack conference, full and half day masterclasses, a Quality Leader Bootcamp and a WITS Peer Workshop about reinventing testers. In the agenda of the Reinventing Testers Week you can find topics like “Discovering And Developing Advanced Testing Skills”, “Let’s Take Automation Checks Beyond Webdriver!”, “Agile Exploration”, “Advanced Data Collection – Supercharge Your Storytelling”, “Web Application Security – A Hands On Testing Challenge”, “We Are Work In Progress – Lessons On Becoming A Great Tester”, “The Life Of A Testing Craftsman”, “Identifying Risks In A Large Codebase”, “Play Your Way To Better Testing”. Web site: Location for the Reinventing Testers Week: New York, NY, USA
Categories: Communities

How Does PhantomJS Fit Into Your Cloud Testing Strategy?

Sauce Labs - Tue, 07/19/2016 - 16:00

PhantomJS is a lightweight headless test runner that is perfect for command-line-based testing. What is PhantomJS? It is a headless browser that gives us access to the browser’s DOM API; after all, PhantomJS is still a browser, just without the GUI skin. It is suitable for local development, version control pre-commit hook testing, and continuous integration pipeline testing.

The headless test runner WILL NOT be a replacement for Selenium functional testing. The idea behind a command-line-based testing suite is that it provides fast feedback on a deployed web application without spinning up a full browser. It is critical to set a standard where developers naturally execute static code analysis, unit tests, server spec tests, and PhantomJS tests in a local development environment before creating a pull request. The next section will help you understand why a well-defined cloud testing strategy will reduce web application bugs in production.

Cloud Testing Strategy

A major component of cloud platform solutions is that all of the code goes through the gauntlet of testing before merging a pull request into the master branch, and before pushing changes to production. We need to prove that the code deploys the web application correctly before we start using it in the cloud staging and production environment. It’s much easier and cheaper to catch and troubleshoot issues locally, as opposed to having production servers go down or affect the user experience.

Thanks to configuration management tools, we can use Chef, Vagrant, and VirtualBox to spin up a web application locally and begin testing changes before even creating a pull request.

The general strategy is to make testing easy for developers or whoever performs web application code changes. How do we make it easy for developers? The best testing tool for the job is Test Kitchen. Test Kitchen is an integration test harness tool for testing infrastructure code on isolated platforms. The great thing about this test harness is that it allows you to run your code on virtualization technologies and various cloud providers.
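As a sketch only (the cookbook name and platform are placeholders, not from this article), a minimal `.kitchen.yml` that wires Test Kitchen to Vagrant/VirtualBox with a Chef provisioner might look like this:

```yaml
# Hypothetical .kitchen.yml: spin up the platform in VirtualBox via Vagrant,
# then converge the web application's Chef role cookbook for testing.
driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-14.04

suites:
  - name: default
    run_list:
      - recipe[my_webapp::default]
```

Running `kitchen test` would then create the instance, converge it, run the verification suite, and destroy it.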

Here is the cloud testing strategy for developers. Each checkpoint will execute static code analysis, unit tests for application and chef role cookbooks, server spec tests, and PhantomJS tests.

  • Local Development: Testing the application code locally using Chef, Vagrant, and VirtualBox
  • GitHub Pre-commit Hooks: Testing the application code every time a pull request is created, using Chef, Vagrant, and VirtualBox
  • Continuous Integration: Testing the application after merging a pull request into the master branch and deploying the application on a cloud provider (staging and production)

The growth of cloud solutions doesn’t change the importance of testing practices. It does, however, elevate the importance of testing early in a local development environment, and it will increase confidence in code changes before pushing code. Please consider implementing the test checkpoints above when developing your cloud testing strategy.

Command-Line Testing Suite Using PhantomJS

The headless test runner allows us to make assertions about the final state of the machine after the Chef Role Cookbook completes. It is important to start with a “clean” virtual or cloud environment when running your PhantomJS tests, so you get accurate results. (Here is an example of a PhantomJS Chef Role Cookbook developed by Custom Ink.)

Just remember that PhantomJS testing is not a shortcut for Selenium functional testing. PhantomJS is a perfect lightweight test runner to evaluate a web application before launching your Selenium tests across multiple browsers and platforms on Sauce Labs. What is the makeup of the lightweight command-line testing suite?

Let’s outline the lightweight acceptance test suite for checking a deployed application:

  • Compute the loading speed of a web application.
  • Capture network traffic as an HTTP Archive (HAR) file. (HAR is a JSON-formatted archive format for logging a web browser’s interaction with a site.)
  • Send HTTP GET and POST requests and validate the response, such as the HTTP status code.
  • Set the initial viewport size before loading the page by choosing between ‘landscape’ and ‘portrait’.
  • Access files and directories via the file system module (though I recommend server spec testing for this type of validation).
  • Manipulate the DOM, interacting with the deployed web application through standard DOM scripting and CSS selectors. This keeps command-line tests super light.
  • Capture screenshots.
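PhantomJS performs the GET/status-code check above from within its own scripting API; the same kind of check can be sketched from the shell with curl (the `check_status` helper and the URL in the example are assumptions for illustration, not part of PhantomJS):

```shell
# Hypothetical helper: send an HTTP GET and compare the response status code
# against an expected value, printing a pass/fail line for the test log.
check_status() {
    url="$1"
    expected="$2"
    actual=$(curl -s -o /dev/null -w "%{http_code}" "$url")
    if [ "$actual" = "$expected" ]; then
        echo "OK ($actual)"
    else
        echo "FAILED (got $actual, expected $expected)"
    fi
}

# Example invocation (placeholder URL):
#   check_status "http://localhost:8080/health" 200
```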

The suite validates that the build and deployment of your web application are ready for the next testing checkpoint.


I hope this blog post has encouraged you to look into PhantomJS and start defining a cloud testing strategy for your cloud applications. The idea behind cloud testing is that it is easier and cheaper to catch and troubleshoot issues locally, as opposed to having staging or production blow up. The tooling solutions are available. What is stopping you from changing your cloud testing strategy? Nothing.

Make a difference with test.allTheThings().

To close, PhantomJS comes with a lot of included examples. I recommend looking them over.

Greg Sypolt (@gregsypolt) is a Senior Engineer at Gannett – USA Today Network and co-founder of Quality Element. He is a passionate automation engineer seeking to optimize software development quality, while coaching team members on how to write great automation scripts and helping the testing community become better testers. Greg has spent most of his career working on software quality—concentrating on web browsers, APIs, and mobile. For the past five years, he has focused on the creation and deployment of automated test strategies, frameworks, tools and platforms.

Categories: Companies