
Feed aggregator

Mutation Testing in Python

Testing TV - Tue, 10/25/2016 - 16:43
Mutation testing is a technique for systematically mutating source code in order to validate test suites. It makes small changes to a program’s source code and then runs a test suite; if the test suite ever succeeds on mutated code then a flag is raised. I’ll begin this talk with a description of the theory […]
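To make the mechanic concrete, here is a minimal hypothetical sketch in C# (the talk itself targets Python; AgeRules and its tests are invented for illustration): a mutation tool might flip a single relational operator and then check whether any test notices.

using System;

public static class AgeRules
{
    // Original production code.
    public static bool IsAdult(int age) => age >= 18;

    // A typical mutant: the tool flips ">=" to ">".
    public static bool IsAdultMutant(int age) => age > 18;
}

public static class MutationDemo
{
    public static void Main()
    {
        // A weak test that stays away from the boundary passes against
        // both versions, so the mutant "survives" and a flag is raised.
        Console.WriteLine(AgeRules.IsAdult(30) == AgeRules.IsAdultMutant(30)); // True

        // A boundary test "kills" the mutant: the two versions disagree,
        // so a suite asserting IsAdult(18) would fail on the mutated code.
        Console.WriteLine(AgeRules.IsAdult(18) == AgeRules.IsAdultMutant(18)); // False
    }
}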
Categories: Blogs

Ranorex Automates Android 7 Nougat

Ranorex - Tue, 10/25/2016 - 16:00

Android Nougat is Google’s big refresh of its phone and tablet operating system – split-screen mode, quick reply to notifications, and revamped settings and toggle menus all make your phone easier and friendlier to use. We are happy to announce that with the release of Ranorex 6.1.1, we now support Android Nougat.

In addition to this update, some minor bugs have been fixed. Check out the release notes for more details about the changes in this release.


The post Ranorex Automates Android 7 Nougat appeared first on Ranorex Blog.

Categories: Companies

What’s New in Surround SCM 2016.1 Documentation

The Seapine View - Tue, 10/25/2016 - 15:30

If you haven’t upgraded to Surround SCM 2016.1 yet, definitely consider doing it soon. This release has some nice new features and enhancements that you’ll want to take advantage of.

Here are some documentation updates to look at to get familiar with what’s new.

Surround SCM Web Client

Checking in files explains how to check in a file from the Surround SCM Web client. It also explains how these check-ins are limited compared to check-ins from the desktop client.

Downloading files explains that you can now download all files in a repository to a ZIP file.

TestTrack integration

Working with committed changelists explains that you can now attach committed changelists to items in TestTrack and external issue tracking tools.

A couple of other nice TestTrack integration enhancements to point out while we’re here. You can now:

  • Attach changelists with rename or remove file events to items in TestTrack or external issue tracking tools.
  • Find TestTrack items by number in the TestTrack Browser dialog box.
Labels

Viewing labels and Editing labels explain that you can now search for files and repositories to find labeled files.

There are also a few more enhancements to labels to note:

  • Performance is much better when working with labels with many files.
  • When viewing label differences, you can now see the number of missing, different, and identical files. Missing and different files in the list are highlighted in color to easily differentiate between them.
JIRA integration

Surround SCM JIRA Integration explains how to configure and use the JIRA 7/Surround SCM integration.

Remember, you can always find help on our web site. If you have documentation suggestions, please let us know.

Categories: Companies

SonarQube 6.1 in Screenshots

Sonar - Tue, 10/25/2016 - 14:40

The SonarSource team is proud to announce the release of SonarQube 6.1, which brings an improved interface and the first baby steps toward SonarQube clusters.

  • More Actionable Project Page
  • Redesigned Settings Pages
  • First Steps Toward Clustering

More Actionable Project Page

SonarQube 6.1 enhances the project front page to make duplications in the leak period useful and actionable.

Previously, we only tracked change in the duplications percentage against the global code base. So a very large project with only 100 new lines – all of them duplicated – still had a very small duplication percentage in the leak period. In other words, the true magnitude of new duplications was lost in the crowd. Now we calculate new duplications over code touched in the leak period, so those 100 new duplicated lines get the attention they deserve:

Redesigned Settings Pages

The global and project settings pages are redesigned for better clarity and ease of use in the new version:

Among the improvements the new pages bring is a clearer presentation of just what the default settings are:

First Steps Toward Clustering

There’s not a lot to show here, but it’s still worth mentioning that 6.1 takes the first steps down the road to a fully clusterizable architecture. You can still run everything on a single node if you want, but folks with large instances will be glad to know that we’re on the way to letting them distribute the load. Nothing’s configurable yet, but the planned capabilities are already starting to show up in the System Info portion of the UI:

That’s all, folks!

It’s time now to download the new version and try it out. But don’t forget to read the installation or upgrade guide.

Categories: Open Source

OpenStack monitoring for enterprises

Dynatrace announces OpenStack support at Barcelona Summit

In today’s world, applications are the lifeblood of your business. When problems arise, reactive troubleshooting is no longer good enough. You can take control with 24/7 visibility into applications and your cloud platform, even in the most demanding and dynamic environments. With OpenStack monitoring from Dynatrace, you […]

The post OpenStack monitoring for enterprises appeared first on about:performance.

Categories: Companies

Vertical Slice Test Fixtures for MediatR and ASP.NET Core

Jimmy Bogard - Mon, 10/24/2016 - 22:40

One of the nicest side effects of using MediatR is that my controllers become quite thin. Here’s a typical controller:

public class CourseController : Controller
{
    private readonly IMediator _mediator;

    public CourseController(IMediator mediator)
    {
        _mediator = mediator;
    }

    public async Task<IActionResult> Index(Index.Query query)
    {
        var model = await _mediator.SendAsync(query);

        return View(model);
    }

    public async Task<IActionResult> Details(Details.Query query)
    {
        var model = await _mediator.SendAsync(query);

        return View(model);
    }

    public ActionResult Create()
    {
        return View();
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public IActionResult Create(Create.Command command)
    {
        _mediator.Send(command);

        return this.RedirectToActionJson(nameof(Index));
    }

    public async Task<IActionResult> Edit(Edit.Query query)
    {
        var model = await _mediator.SendAsync(query);

        return View(model);
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public async Task<IActionResult> Edit(Edit.Command command)
    {
        await _mediator.SendAsync(command);

        return this.RedirectToActionJson(nameof(Index));
    }

    public async Task<IActionResult> Delete(Delete.Query query)
    {
        var model = await _mediator.SendAsync(query);

        return View(model);
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public async Task<IActionResult> Delete(Delete.Command command)
    {
        await _mediator.SendAsync(command);

        return this.RedirectToActionJson(nameof(Index));
    }
}

Unit testing this controller is a tad pointless – I’d only do it if the controller actions were doing something interesting. With MediatR combined with CQRS, my application is modeled as a series of requests and responses, where my requests either represent a command or a query. In an actual HTTP request, I wrap my request in a transaction using an action filter, so the request looks something like:

[Diagram: the HTTP request wrapped in a transaction by an action filter]
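A minimal sketch of what that action filter might look like (an assumption-laden sketch, not necessarily the author's exact code: it borrows the DbContextTransactionFilter name from the related Contoso University sample and the BeginTransaction/CommitTransactionAsync/RollbackTransaction helpers shown on the DbContext later in this post):

public class DbContextTransactionFilter : IAsyncActionFilter
{
    private readonly DirectoryContext _dbContext;

    // The scoped DbContext is injected per request.
    public DbContextTransactionFilter(DirectoryContext dbContext)
    {
        _dbContext = dbContext;
    }

    public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
    {
        try
        {
            _dbContext.BeginTransaction();

            // Executes the action, which sends the request through MediatR.
            await next();

            await _dbContext.CommitTransactionAsync();
        }
        catch (Exception)
        {
            _dbContext.RollbackTransaction();
            throw;
        }
    }
}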

The bulk of the work happens in my handler, of course. Now, in my projects, we have a fairly strict rule that all handlers need a test. But what kind of test should it be? At the very least, we want a test that executes our handlers as they would be used in a normal application. Since my handlers don’t have any real knowledge of the UI – they’re simple DTO-in, DTO-out handlers – I don’t need to worry about UI dependencies like controllers, filters and whatnot.

I do, however, want to build a test that goes just under the UI layer to execute the end-to-end behavior of my system. These are known as “subcutaneous tests”, and provide me with the greatest confidence that one of my vertical slices does indeed work. It’s the first test I write for a feature, and the last test to pass.

I need to make sure that my test properly matches “real world” usage of my system, which means that I’ll execute a series of transactions, one for the setup/execute/verify steps of my test:

[Diagram: separate transactions for the setup, execute, and verify steps]

The final piece is allowing my test to easily run these kinds of tests over and over again. To do so, I’ll combine a few tools at my disposal to ensure my tests run in a repeatable and predictable fashion.

Building the fixture

The baseline for my tests is known as a “fixture”, and what I’ll be building is a known starting state for my tests. There are a number of different environments I can do this in, but the basic idea is:

  • Reset the database before each test using Respawn to provide a known database starting state
  • Provide a fixture class that represents the known application starting state

I’ll show how to do this with xUnit, but the Fixie example is just as easy. First, I’ll need a known starting state for my fixture:

public class SliceFixture
{
    private static readonly Checkpoint _checkpoint;
    private static readonly IServiceProvider _rootContainer;
    private static readonly IConfigurationRoot _configuration;
    private static readonly IServiceScopeFactory _scopeFactory;

    static SliceFixture()
    {
        var host = A.Fake<IHostingEnvironment>();

        A.CallTo(() => host.ContentRootPath).Returns(Directory.GetCurrentDirectory());

        var startup = new Startup(host);
        _configuration = startup.Configuration;
        var services = new ServiceCollection();
        startup.ConfigureServices(services);
        _rootContainer = services.BuildServiceProvider();
        _scopeFactory = _rootContainer.GetService<IServiceScopeFactory>();
        _checkpoint = new Checkpoint();
    }

I want to use the exact same startup configuration in my tests that I use in my actual application. It’s important that my tests match the runtime configuration of my system as closely as possible. Mismatches here can easily result in false positives in my tests. The only thing I have to fake out is my hosting environment. Unlike the integration testing available for ASP.NET Core, I won’t actually run a test server; I’m just running through the same configuration. I capture some of the output objects as fields, for ease of use later.

Next, on my fixture, I expose a method to reset the database (for later use):

public static void ResetCheckpoint()
{
    _checkpoint.Reset(_configuration["Data:DefaultConnection:ConnectionString"]);
}

I can now create an xUnit behavior to reset the database before every test:

public class ResetDatabaseAttribute : BeforeAfterTestAttribute
{
    public override void Before(MethodInfo methodUnderTest)
    {
        SliceFixture.ResetCheckpoint();
    }
}

With xUnit, I have to decorate every test with this attribute. With Fixie, I don’t. In any case, now that I have my fixture, I can inject it with an IClassFixture interface:

public class EditTests : IClassFixture<SliceFixture>
{
    private readonly SliceFixture _fixture;

    public EditTests(SliceFixture fixture)
    {
        _fixture = fixture;
    }

With my fixture created, and a way to inject it into my tests, I can now use it in my tests.

Building setup/execute/verify fixture seams

I want to execute each of these steps in a committed transaction. This ensures that my test, as much as possible, matches the real-world usage of my application. In an actual system, the user clicks around, executing a series of transactions. In the case of editing an entity, the user first looks at a screen of an existing entity, POSTs a form, and then is redirected to a new page where they see the results of their action. I want to mimic this sort of flow in my tests as well.

First, I need a way to execute something against a DbContext as part of a transaction. I’ve already exposed methods on my DbContext to make it easier to manage a transaction, so I just need a way to do this through my fixture. The other thing I need to worry about is that with the built-in DI container with ASP.NET Core, I need to create a scope for scoped dependencies. That’s why I captured out that scope factory earlier. With the scope factory, it’s trivial to create a nice method to execute a scoped action:

public async Task ExecuteScopeAsync(Func<IServiceProvider, Task> action)
{
    using (var scope = _scopeFactory.CreateScope())
    {
        var dbContext = scope.ServiceProvider.GetService<DirectoryContext>();

        try
        {
            dbContext.BeginTransaction();

            await action(scope.ServiceProvider);

            await dbContext.CommitTransactionAsync();
        }
        catch (Exception)
        {
            dbContext.RollbackTransaction();
            throw;
        }
    }
}

public Task ExecuteDbContextAsync(Func<DirectoryContext, Task> action)
{
    return ExecuteScopeAsync(sp => action(sp.GetService<DirectoryContext>()));
}

The method takes a function that accepts an IServiceProvider and returns a Task (so that your action can be async). For convenience’s sake, if you just need a DbContext, I also provide an overload that works directly with that instance. With this in place, I can build out the setup portion of my test:

[Fact]
[ResetDatabase]
public async Task ShouldEditEmployee()
{
    var employee = new Employee
    {
        Email = "jane@jane.com",
        FirstName = "Jane",
        LastName = "Smith",
        Title = "Director",
        Office = Office.Austin,
        PhoneNumber = "512-555-4321",
        Username = "janesmith",
        HashedPassword = "1234567890"
    };

    await _fixture.ExecuteDbContextAsync(async dbContext =>
    {
        dbContext.Employees.Add(employee);
        await dbContext.SaveChangesAsync();
    });

I build out an entity, and in the context of a scope and transaction, save it out. I’m intentionally NOT reusing those scopes or DbContext objects across the different setup/execute/verify steps of my test, because that’s not what happens in my app! My actual application creates a distinct scope per operation, so I should do that too.

Next, for the execute step, this will involve sending a request to the Mediator. Again, as with my DbContext method, I’ll create a convenience method to make it easy to send a scoped request:

public async Task<TResponse> SendAsync<TResponse>(IAsyncRequest<TResponse> request)
{
    var response = default(TResponse);
    await ExecuteScopeAsync(async sp =>
    {
        var mediator = sp.GetService<IMediator>();

        response = await mediator.SendAsync(request);
    });
    return response;
}

Since my “mediator.SendAsync” executes inside of that scope, with a transaction, I can be confident that when the handler completes, it has pushed the results of that handler all the way down to the database. My test can now send a request fairly easily:

var command = new Edit.Command
{
    Id = employee.Id,
    Email = "jane@jane2.com",
    FirstName = "Jane2",
    LastName = "Smith2",
    Office = Office.Dallas,
    Title = "CEO",
    PhoneNumber = "512-555-9999"
};

await _fixture.SendAsync(command);

Finally, in my verify step, I can use the same scope-isolated ExecuteDbContextAsync method to start a new transaction to do my assertions against:

await _fixture.ExecuteDbContextAsync(async dbContext =>
{
    var found = await dbContext.Employees.FindAsync(employee.Id);

    found.Email.ShouldBe(command.Email);
    found.FirstName.ShouldBe(command.FirstName);
    found.LastName.ShouldBe(command.LastName);
    found.Office.ShouldBe(command.Office);
    found.Title.ShouldBe(command.Title);
    found.PhoneNumber.ShouldBe(command.PhoneNumber);
});

With the setup, execute, and verify steps each in their own isolated transaction and scope, I ensure that my vertical slice test matches as much as possible the flow of actual usage. And again, because I’m using MediatR, my test only knows how to send a request down and verify the result. There’s no coupling whatsoever of my test to the implementation details of the handler. It could use EF, NPoco, Dapper, sprocs, really anything.

Wrapping up

Most integration tests I see wrap the entire test in a transaction or scope of some sort. This can lead to pernicious false positives, as I’ve not made the full round trip that my application makes. With subcutaneous tests executing against vertical slices of my application, I’ve got the closest representation as possible without executing HTTP requests to how my application actually works.

One thing to note – the built-in DI container doesn’t allow you to alter the container after it’s built. With a more robust container like StructureMap, that scope could also let you register mock/stub dependencies that only live for that scope (using the magic of nested containers).
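With StructureMap, the scoped helper might look roughly like this (a sketch under that assumption; IEmailSender and FakeEmailSender are hypothetical stand-ins for whatever dependency you want to stub):

public async Task ExecuteScopeAsync(Func<IContainer, Task> action)
{
    // A nested container is a per-operation scope whose registrations
    // can be overridden without touching the root container.
    using (var nested = _rootContainer.GetNestedContainer())
    {
        // Hypothetical override: swap in a stub for this scope only.
        nested.Configure(cfg => cfg.For<IEmailSender>().Use(new FakeEmailSender()));

        await action(nested);
    }
}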

If you want to see a full working example using Fixie, check out Contoso University Core.

Categories: Blogs

The Artificial Intelligence-Driven Vision for Digital Performance Management

The goal is now in sight – if not yet in reach: a fully-automated operational production environment. The rise of DevOps shows the progress we’ve made in automating the provisioning and configuration of ops, as well as application deployment. Management of the ops environment isn’t far behind. IT Operations Management (ITOM), and in particular Application […]

The post The Artificial Intelligence-Driven Vision for Digital Performance Management appeared first on about:performance.

Categories: Companies

Webinar Recording and Q&A: What’s New in Surround SCM 2016.1

The Seapine View - Mon, 10/24/2016 - 17:30

Thanks to everyone who attended What’s New in Surround SCM 2016.1! The webinar recording is now available if you weren’t able to attend or if you would like to watch it again. The edited questions and answers from the webinar follow.

Questions & Answers

Q: Do I need a Web server like IIS or Apache to use the Surround SCM Web client?

No, you do not need IIS or Apache, as the Surround SCM architecture is different than the TestTrack Web architecture. The installer includes an SCM Web Server component that runs as a service and listens on a specific port. The URL for end users includes the port number, so the browser points directly to the SCM Web Server component without the need for IIS or Apache.

Q: How does Surround integrate with Jenkins?

Jenkins integrates with Surround SCM through the Surround SCM command line interface. Information about integrating with Jenkins can be found in this knowledgebase article: http://www.seapine.com/knowledgebase/index.php?View=entry&EntryID=761

Q: Are there plans to implement a full Web client with the same functionality as the native client?

The current Web client is not intended for daily usage to change code, but for quick access on other machines. There are no near term plans for a full Web client, but that is still a possibility in the future. If you’re looking for a Web client to allow remote users to connect, remote users can use the Surround SCM native client, which communicates via TCP/IP.

Q: What is the oldest version of Surround we can upgrade to 2016.1?

You can upgrade directly to Surround SCM 2016.1 from Surround SCM 3.0 or later. However, if you are upgrading from version 2008.1 or earlier, it is recommended that you contact Seapine Support for guidance and that you read these two documents before upgrading:

http://www.seapine.com/knowledgebase/index.php?View=entry&EntryID=697

http://downloads.seapine.com/pub/docs/surroundscmupgradeguide.pdf

Q: Can you associate a file change checked into the Web client against a TestTrack ticket?

That functionality is available in the Surround SCM native client, but is not currently available in Surround SCM Web. The check in functionality via Surround SCM Web is very lightweight and does not currently support integration with TestTrack, changelists, code reviews, or workflow state changes. The Surround SCM Web integration with TestTrack has been captured as a feature request for consideration in a future version.

Q: Can you make the Surround SCM Web client read-only? Or does it match what the users have set as their security group?

The Surround SCM Web client will apply security permissions from the user’s security group, which matches the functionality in the Surround SCM native client. So if a repository/file is read-only based on the security group settings, then it will be read-only in Surround SCM Web.

Q: After committing a changelist, can you set the issue to fixed or closed in Surround?

In the Surround SCM user interface, you can only mark a TestTrack issue as fixed from the TestTrack browser window, which is displayed during an Attach to TestTrack operation. So if you’re attaching the TestTrack item to a committed Surround SCM changelist, then you can mark the issue as fixed at that point. However, after the Attach to TestTrack event is completed, the Surround SCM native client does not have a separate window to allow users to fix TestTrack issues.

Categories: Companies

How to Give Better Code Reviews

Software Testing Magazine - Mon, 10/24/2016 - 17:14
Wikipedia defines code review as a systematic examination of computer source code to improve the overall quality of software. In his blog post, Joel Kemp provides some propositions on how to give better code reviews. Joel Kemp starts his post by explaining that code reviews are a key tool to create knowledge transfer and spread best practices throughout a development team. There are at least two phases in code reviews, and most people stop at the first and less valuable phase, where only the obvious issues are examined. The second phase is the contextual pass, where participants examine issues like adequate usage of frameworks/libraries or satisfactory test coverage. During this phase, Joel Kemp advises being cautious about how you ask questions: you should suggest approaches or provoke exploration, not try to deliver solutions. The conclusion of the post is that “[…] reviewing code is as much of a craft as writing code. As reviewers, it’s our mission to improve on a number of fronts: our ability to understand code; our process, awareness, and criteria for analysis; and our teammates’ ability to write correct diffs.” Read the complete blog post on https://medium.com/@mrjoelkemp/giving-better-code-reviews-16109e0fdd36
Categories: Communities

Exploring Features and Stories: Lisa & Janet’s Agile Testing Days Tutorial

Agile Testing with Lisa Crispin - Sun, 10/23/2016 - 01:00

We hope you’ll join us at Agile Testing Days in Potsdam this December, and we invite you to join our tutorial on Day One! Our tutorial title is “Exploring Features and Stories: How Testing Helps Build Shared Understanding”. Lots of words – what do we mean by that? What might you learn in our hands-on tutorial? Here’s a sneak preview!

Building shared understanding

Delivering the right thing – is it a myth? (picture: www.squirrelpicnic.com)

I’ve often had this experience, even on “high functioning” agile teams: The product owner brings us a feature, we work hard to develop that code using technical- and customer-facing tests, the code is robust and solid. We demo to the customer, and are surprised to hear, “That’s not what I wanted!” Sometimes we may release a new feature to production, only to realize we missed some key functionality.

We can avoid most of this frustration and wasted effort by making sure we all share the same vision of why the business needs the feature, who will be using it, and how it should work. It’s a team effort, and testers with an agile testing mindset can bring so much to the party as we elicit specifications and explore requirements. Here are just a few of the techniques you’ll get to practice during our tutorial day.

7 Product Dimensions


In their book Discover to Deliver, Ellen Gottesdiener and Mary Gorman share a wealth of agile business analysis practices that help make sure we address all types of quality that our customers may want. We’ve found the 7 Product Dimensions especially useful. I use them as a “cheat sheet” to prompt questions about a new feature. Some are more obvious – who will use the feature? Some we often forget about – what policies, regulations and rules may impact the feature or be affected by it? A key area is the quality attribute dimension – those little things like security, stability, and testability. Asking questions about these dimensions of quality helps the customer and technical teams quickly get on the same page.

Process map or flow diagram


We’ll include tried and true techniques such as using process maps, also known as flow diagrams, to visualize the flow of work. Stakeholders and the software team members collaborate to map the sequence of actions included in the feature set, the materials or services being passed through as inputs and outputs, decision points, the various people involved, and how much time is needed at each step. Talking things through while illustrating the flow helps flush out hidden assumptions and “unknown unknowns”.

Example mapping

Example Map from JoEllen Carter (testacious.com)

For many years, Janet and I have used the “Power of Three”, explained in our first book Agile Testing, involving a business expert, a developer and a tester in any discussions about feature or system behavior. (Some people call this “Three Amigos”, as described by George Dinwiddie).  Recently we learned a technique that makes these conversations even more productive: Example Mapping from Matt Wynne. We get the Amigos together – perhaps also including a designer, database or operations expert – to capture business rules along with the examples that illustrate how the business rules work. This is a quick way teams can prepare their stories for iteration planning meetings.

Add lots to your toolbox, and help your team have more fun!

We hope you’ll join us on the 5th of December for a day of practicing ways teams can quickly build shared understanding of features, and make sure they deliver the right thing. Please register now!

The post Exploring Features and Stories: Lisa & Janet’s Agile Testing Days Tutorial appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Making Fünf Myself

Hiccupps - James Thomas - Sat, 10/22/2016 - 07:11

The first post on Hiccupps was published five years ago this week. It's called Sign Language and, reading it back now, although I might not write it the same way today, I'm not especially unhappy with it. The closing sentence still feels like a useful heuristic, even if I didn't present it that way at the time:
Your audience is not just the target audience, it's anyone who sees what it is you've done and forms an opinion of you because of it.

I've looked back over the blog on most of its anniversaries, and each time found different value:
  • I Done the Ton: After two years I compared my progress to my initial goals and reflected on how I'd become a tester and test manager 
  • It's the Thought That Counts: After three years I began to realise that the act of blogging was an end in itself, not just a means to an end 
  • My Two Cents: After four years, the value of time series data about myself and the evolution (or lack of evolution) of my thoughts and positions became clearer 

And so what have I observed after five years? Well, by taking the time series data to Excel (see the image at the top), I find that this has been a bumper year in terms of the number of posts I've produced.

I think it's significant that a year ago I attended and spoke at EuroSTAR in Maastricht and came back bursting with ideas. In November 2015 I wrote eight posts, the largest number in any month since November 2011. This year I've achieved that number three times and reached seven posts in a further three months.

But I don't confuse quantity with quality ... very often.

In fact, if I look back over this year's posts I see material that I am ridiculously proud of:
  • Joking With Jerry: Jerry Weinberg - yes, that Jerry Weinberg - asked me to organise a discussion on something that I'd written that he enjoyed. I think Jerry is the person I have been most influenced by as a tester and a manager and it's no exaggeration to say that, while nerve-wracking, it was a labour of love from start to end. 
  • Bug-Free Software? Go For It!: An essay I wrote in preparation for CEWT #2 which, I think, shows a biggering in my capacity to think bigger, and which I like because it reminds me that the Cambridge Exploratory Workshop on Testing is a thing. I set it up. It works. Other people are getting value from it. And we're doing another one in a couple of weeks. 
  • Toujours Testing: This one simply because it is a kind of personal manifesto. 
  • What is What is Professional Testing?: An essay I wrote in preparation for MEWT #5 which, I think, reflects the move I've been making over the years to perform what I might call exploratory retrospection. By this I mean that I will try to test my testing while it is ongoing rather than waiting until afterwards - although, of course, I reserve the right to do that too. What I like about this is that I can and do use the same kinds of tools in both cases. 
  • Tools: Take Your Pick: It's got ideas and tools up the wazoo. From the seed of a thought I had while cleaning the bathroom through the thicket of ideas that came pouring out once I started to scratch away at it. From the practical to the theoretical and back. I found it challenging to arrange the ideas in my head but immensely satisfying to write. 

I'll stop at five, for no other reason than this post is for the fifth birthday. I wouldn't be so crass as to say they're presents for you. But when they pop out, completed, they do sometimes feel like presents for me.
Categories: Blogs

Have you been taking DNS monitoring for granted? Not today!

DNS Incident, Friday, October 21, 2016 Update: 9:30am EST, October 26, 2016 The dust is still settling from Friday’s DDoS attack. Cogeco Peer 1 provided an interesting infographic highlighting some of the factors which businesses need to consider when trying to understand the cost of a DDoS attack. We are still looking at the overall cost and […]

The post Have you been taking DNS monitoring for granted? Not today! appeared first on about:performance.

Categories: Companies

Contoso University updated to ASP.NET Core

Jimmy Bogard - Fri, 10/21/2016 - 17:33

I pushed out a new repository, Contoso University Core, that updated my “how we do MVC” sample app to ASP.NET Core. It’s still on full .NET framework, but I also plan to push out a .NET Core version as well. In this, you can find usages of:

It uses all of the latest packages I’ve built out for the OSS I use, developed for ASP.NET Core applications. Here’s the Startup, for example:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc(opt =>
        {
            opt.Conventions.Add(new FeatureConvention());
            opt.Filters.Add(typeof(DbContextTransactionFilter));
            opt.Filters.Add(typeof(ValidatorActionFilter));
            opt.ModelBinderProviders.Insert(0, new EntityModelBinderProvider());
        })
        .AddRazorOptions(options =>
        {
            // {0} - Action Name
            // {1} - Controller Name
            // {2} - Area Name
            // {3} - Feature Name
            // Replace normal view location entirely
            options.ViewLocationFormats.Clear();
            options.ViewLocationFormats.Add("/Features/{3}/{1}/{0}.cshtml");
            options.ViewLocationFormats.Add("/Features/{3}/{0}.cshtml");
            options.ViewLocationFormats.Add("/Features/Shared/{0}.cshtml");
            options.ViewLocationExpanders.Add(new FeatureViewLocationExpander());
        })
        .AddFluentValidation(cfg => { cfg.RegisterValidatorsFromAssemblyContaining<Startup>(); });

    services.AddAutoMapper(typeof(Startup));
    services.AddMediatR(typeof(Startup));
    services.AddScoped(_ => new SchoolContext(Configuration["Data:DefaultConnection:ConnectionString"]));
    services.AddHtmlTags(new TagConventions());
}

Still missing are unit/integration tests – that’s next. Enjoy!

Categories: Blogs

He Said Captain

Hiccupps - James Thomas - Fri, 10/21/2016 - 10:41
A few months ago, as I was walking my two daughters to school, one of their classmates gave me the thumbs up and shouted "heeeyyy, Captain!"

Young as the lad was, I congratulated myself that someone had clearly recognised my innate leadership capabilities and felt compelled to verbalise his respect for them, and me. Chest puffed out, I strutted across the playground, until one of my daughters pointed out that the t-shirt I was wearing had a Captain America star on the front of it. Doh!

Today, as I was getting dressed, my eldest daughter asked to choose a t-shirt for me to wear, and picked the Captain America one. "Do you remember the time ..." she said, and burst out laughing at my recalled vain stupidity.

Young as my daughter is, her laughter is well-founded and a useful lesson for me. I wear a virtual t-shirt at work, one with Manager written on it. People no doubt afford me respect, or at least deference, because of it. I hope they also afford me respect because of my actions. But from my side it can be hard to tell the difference. So I'll do well to keep any strutting in check.
Image: http://www.musicstack.com/item/386780229
Categories: Blogs

Behavior-Driven Development (BDD) in Java with JGiven

Software Testing Magazine - Fri, 10/21/2016 - 08:00
Although Behavior Driven Development has existed for over 10 years, the methodology hasn’t yet become very popular in the Java world. One reason for this is the existing BDD tools for Java, which are cumbersome for developers to use and require a lot of maintenance. This talk wants to change this with JGiven and provide Java developers with a framework that they like to use, while at the same time satisfying the business departments with instructive reports. JGiven scenarios are written in the usual Given-When-Then formula with an embedded Java DSL. This allows developers to use all IDE features such as auto completion and refactoring tools. The resulting scenarios are already very readable on their own, but JGiven can additionally generate reports in different formats that can be used for collaboration with domain experts. Through a modular concept, new scenarios can be easily assembled from parts of other scenarios. This speeds up the creation of new scenarios and avoids test code duplication. Since neither Groovy nor Scala is needed and JGiven is compatible with JUnit and TestNG, JGiven can be applied immediately in Java projects and be easily integrated into existing test infrastructures. In this presentation, the speaker gives an introduction to JGiven and, in a short live coding session, shows how quickly and easily BDD scenarios can be written in JGiven. Video producer: http://www.tngtech.com
Categories: Communities

Advanced Android Espresso Testing

Testing TV - Fri, 10/21/2016 - 07:49
Do you test your Android apps? It’s okay if you don’t – historically the tools had not been stellar. But they have gotten much better, and I am going to show you my favorite, instrumentation testing with Espresso. The Espresso testing framework provides APIs for writing UI tests to simulate user interactions within a single […]
Categories: Blogs

Vivit Performance Engineering SIG: Education 101

HP LoadRunner and Performance Center Blog - Fri, 10/21/2016 - 05:08


The October 11 edition of the Vivit Performance Engineering Special Interest Group promises to be an exciting session. The discussion will focus on Performance Engineering Education. Keep reading to learn more about the discussion.

Categories: Companies

Not born on the cloud yesterday: Easing into continuous deployments with blueprints

IBM UrbanCode - Release And Deploy - Thu, 10/20/2016 - 17:15


As traditional IT enterprises embrace the cloud to handle continuous delivery, they face challenges posed by their existing legacy systems handling complex applications and environments. For example, mainframe-based systems (typically located within the firewall) often have tighter restrictions on security and data management, which can lead to slower iteration cycles.

According to a recent ADT Mag survey of IT executives, nearly two-thirds of respondents are integrating legacy applications with new mobile or front-end applications. Not surprisingly, managing complex environments was the top challenge when deploying applications that touch both legacy and new systems.

For a discussion on how the UrbanCode Deploy Blueprint Designer helps enterprises with complex applications make the move to the cloud, see the following document:

https://www.ibm.com/blogs/cloud-computing/2016/10/continuous-deployments-blueprints/

Categories: Companies

How to Combine Ranorex and NeoLoad Tests

Ranorex - Thu, 10/20/2016 - 15:43

Let's be honest: We rarely test the product functionality under load. But how can we be sure our end product works when our customers are using it? As we've described in our previous blog post "Combining Automated Functional and Load Testing", it often makes sense to combine functional and non-functional tests. A functional test that works fine in idle conditions might fail when the back-end server is under load, just as simply stressing a back-end system may not reveal functional issues that can only be found by an automated functional test. If we want to find those errors that only occur under load, we have to combine automated functional tests and automated load tests.

We're happy to announce that you can now combine Ranorex and NeoLoad tests!

In this blog, we want to show you how you can set up the Ranorex-NeoLoad integration and what you can do with it. But first, let's quickly cover the basics:

What is NeoLoad?

NeoLoad is an automated load and performance testing tool from Neotys.

NeoLoad offers a full-fledged REST API to either remote control the execution of a NeoLoad test or transmit timing values to NeoLoad. To enable integration with Ranorex, the REST API calls are wrapped with Ranorex functions and packaged into a NuGet package for easy deployment.
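Conceptually, such a wrapper is just a thin HTTP client around the REST service. A rough sketch of the idea (the route and JSON payload below are placeholders, not NeoLoad's documented API – consult the Neotys REST documentation for the real endpoints):

public class NeoLoadRuntimeClient
{
    private static readonly HttpClient Client = new HttpClient();
    private readonly string _runtimeApiUri;
    private readonly string _apiKey;

    public NeoLoadRuntimeClient(string runtimeApiUri, string apiKey)
    {
        _runtimeApiUri = runtimeApiUri;
        _apiKey = apiKey;
    }

    public async Task StartTestAsync(string scenario)
    {
        // Placeholder route and payload: the real ones are defined by
        // the NeoLoad runtime API documentation.
        var payload = "{\"scenario\":\"" + scenario + "\",\"apiKey\":\"" + _apiKey + "\"}";
        var content = new StringContent(payload, Encoding.UTF8, "application/json");

        var response = await Client.PostAsync(_runtimeApiUri + "/start", content);
        response.EnsureSuccessStatusCode();
    }
}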

What do I need to enable the Ranorex-NeoLoad integration?

Now that you're all set, we want to show you in detail how you can:

  1. Set up the Ranorex-NeoLoad integration
  2. Use the load testing modules available with the integration
  3. Transmit navigation timing to a NeoLoad test
  4. Update meta-information in cross-browser tests
  5. Upgrade an existing Ranorex project with the Ranorex-NeoLoad NuGet package
Setting up the Ranorex-NeoLoad integration

First, we need to set up the integration:

  1. Add the NuGet package to the Ranorex project
  2. Extend the "app.config" file
Step 1: Add the NuGet package to the Ranorex project
  • Right-click on "References" in the Ranorex project view
  • Select "Manage Packages..."
  • Search for "Ranorex" and add the "Ranorex-NeoLoad integration" package

Manage Packages

 Add NeoLoad Integration Package

This will automatically add the necessary libraries to the Ranorex project. The following code modules will now appear in the module browser:

Added Modules in Module Browser

Step 2: Extend the "app.config" file

To ensure the Ranorex project is configured properly, you need to extend the 'runtime' section in the 'app.config' file in the Ranorex project with the following information:

<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1" xmlns:bcl="urn:schemas-microsoft-com:bcl">
  <dependentAssembly bcl:name="System.Runtime">
    <assemblyIdentity name="System.Runtime" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
    <bindingRedirect oldVersion="0.0.0.0-2.6.9.0" newVersion="2.6.9.0" />
  </dependentAssembly>
  <dependentAssembly>
    <assemblyIdentity name="System.Threading.Tasks" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
    <bindingRedirect oldVersion="0.0.0.0-2.6.9.0" newVersion="2.6.9.0" />
  </dependentAssembly>
  <dependentAssembly>
    <assemblyIdentity name="System.Net.Http" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
    <bindingRedirect oldVersion="0.0.0.0-2.2.28.0" newVersion="2.2.28.0" />
  </dependentAssembly>
</assemblyBinding>


Copy this code and enter it in the 'runtime' section in the 'app.config' file, right after the line <enforceFIPSPolicy enabled="false" />:

App Config before

Your 'app.config' file should now look like this:

App Config after

You can now freely use the modules included in the NuGet package within the Ranorex test automation project.

Modules included in the Ranorex-NeoLoad NuGet package

The following modules, and their individual variables, are included in the Ranorex-NeoLoad NuGet package:

ConnectToRuntimeApi

This module establishes a connection to the NeoLoad runtime API. This API is used to remote control a running NeoLoad test. It must be initialized before using the following modules: Start/StopNeoLoadTest and Add/RemoveVirtualUsers.

The variables available for this module are:

  • RuntimeApiUri: The Uniform Resource Identifier (URI) of the NeoLoad REST service.
  • ApiKey: The key stored in NeoLoad to avoid unauthorized access to the REST API. If no key is in use, this variable can be left blank.

Select Edit > Preferences to access these variables in NeoLoad.

ConnectToRuntimeApi

ConnectToDataExchangeApi

This module establishes a connection to the NeoLoad data-exchange API. This API is used to transmit external data to a running NeoLoad test (it is not active if no test is running). This module must be initialized before using the module SendNeoLoadTimingValues.

The variables available for this module are listed below. The first three values provide meta information for a running test and can be used in the "filter" functionality within NeoLoad test results.

  • Location: The location where the functional test is performed (e.g., Graz, London, Office XYZ, ...)
  • Hardware: The hardware the functional test is running on (e.g., Intel i5-5200u). A string describing the operating system in use is automatically appended to the string defined in "Hardware".
  • Software: The software tested in the functional test. When testing a browser, it is recommended to hand over the browser name. When performing a cross-browser test, it is recommended to bind this variable to the column specifying the browsers.
  • DataExchangeApiUri: The URI of the NeoLoad REST service.
  • ApiKey: A key stored in NeoLoad to avoid unauthorized access to the REST API (if no key is in use, this variable can be left blank).

Select Edit > Preferences to access the last two variables in NeoLoad.

ConnectToDataExchangeApi

StartNeoLoadTest

This module starts a NeoLoad test scenario. You need to define the scenario in NeoLoad beforehand.

The variables available for this module are:

  • Scenario: The scenario, as defined within the NeoLoad test, that should be started.
  • Timeout: The maximum amount of time (in hh:mm:ss) given to Ranorex to start the test (recommended value: 00:01:00).
  • Interval: The time interval (in hh:mm:ss) after which Ranorex retries to start the test (recommended value: 00:00:10).

Important: Please make sure to add a leading 0 before a single-digit number when entering the timeout and interval values.

StopNeoLoadTest

This module stops the currently running NeoLoad test.

The variables available for this module are:

  • Timeout: The maximum amount of time (in hh:mm:ss) given to Ranorex to stop the test (recommended value: 00:01:00).
  • Interval: The time interval (in hh:mm:ss) after which Ranorex retries to stop the test (recommended value: 00:00:10).

Important: Please make sure to add a leading 0 before a single-digit number when entering the timeout and interval values.

AddVirtualUsers

This module adds virtual users to a population, defined in a NeoLoad test scenario. This module can only be used when a test is already running.

The variables available for this module are:

  • Population: The population, as defined in the NeoLoad test scenario, that virtual users will be added to.
  • Amount: The number of virtual users that should be added to the given population.

RemoveVirtualUsers

This module removes virtual users from a population, which is defined in a NeoLoad test scenario. This module can only be used when a test is already running.

The variables available for this module are:

  • Population: The population, as defined in the NeoLoad test, that virtual users will be removed from.
  • Amount: The number of virtual users that will be removed from the given population.

Transmit navigation timing data from any browser to NeoLoad

Opening a website involves a certain latency. This latency depends on various factors, such as the network connection or the browser used. It can be measured with the "Navigation Timing" API, which is offered by all browsers. If you evaluate these timing values, especially when the website is under load, you can localize potential bottlenecks. Eliminating the identified bottlenecks will ultimately improve the user experience.
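For illustration, this is the kind of arithmetic behind those values (a hypothetical sketch with made-up numbers; the real package reads the W3C Navigation Timing marks from the browser):

// Raw Navigation Timing marks, in milliseconds since the epoch
// (hypothetical values for illustration).
long navigationStart = 1477000000000;
long responseStart   = 1477000000180;
long domComplete     = 1477000000950;

// The interesting durations are simple differences between marks.
long timeToFirstByte = responseStart - navigationStart; // 180 ms
long pageLoadTime    = domComplete - navigationStart;   // 950 ms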

The NuGet package offers a mechanism to calculate these timing values and transmit the results to NeoLoad. You can find a more detailed description of the navigation timing here. The timing values are calculated by the Ranorex-NeoLoad NuGet package:

Calculated Timing Values

Highlighted in green, you can see the timing values that are calculated by Ranorex and submitted to NeoLoad.

To transmit the timing values, you need to drag and drop the repository root element, which represents the DOM of the website under test, into the action table in Ranorex Studio. Once the NuGet package is added to the Ranorex project, the additional entry "SendNeoLoadTimingValues" will appear in the "Dynamic Actions" list.

Please note: This entry only appears if the root element was created after the NuGet package was added to the Ranorex project. You can find a description of how to enable the NeoLoad "capability" in an existing repository here.

Add Neoload Action

The "SendNeoLoadTimingValues" action accepts a "transaction name" as an argument. We recommend using the current page as a transaction name in the Ranorex action table. As soon as NeoLoad receives the timing values of this transaction, a tree with the root node containing the Ranorex test suite is automatically created. Another subfolder is automatically created for the respective transaction name. This folder contains the timing values transmitted from Ranorex.

Add SendNeoLoadTimingValues Action

Resulting Neoload Graphs

Important:  Please make sure to initialize the module "ConnectToDataExchangeApi" before you use the module "SendNeoLoadTimingValues". Otherwise, an error is thrown. 

You can drag the data series into the graph board in NeoLoad to visualize it. If you've provided meta-information, such as "Hardware", "Software" or "Location" in the "ConnectToDataExchangeApi", you can now use this information to filter timing values transmitted from Ranorex.

Update meta-information in cross-browser tests

If you execute the test in multiple browsers, you have to update the filter options in NeoLoad by calling the "ConnectToDataExchangeApi" module again. To do so, bind the data column that specifies the browsers to the "Software" argument of the "ConnectToDataExchangeApi" module. You can now compare timing values from different browsers.

Timing Values of Different Browser

Exemplary Ranorex project structure

In the screenshot below you can see an example of how you can use the modules provided in the NuGet package within a Ranorex test project:

Ranorex Testsuite Project Structure

As you can see, a connection to the runtime API is established in the global setup section. The login information takes the form of global parameters.
At the very beginning, "StartNeoLoadTest" starts the NeoLoad test scenario. The following test case is data-driven and provides the number of virtual users that will be added to the test. These values are provided in "AddVirtualUsers". The inner loop is a cross-browser loop. It defines the browsers in which the test will be executed.

Please note: The module "ConnectToDataExchangeApi" can be called multiple times to update the current browser with the filter feature in NeoLoad.

Upgrade an existing Ranorex project with the Ranorex-NeoLoad NuGet package

If you add the NuGet package to an existing Ranorex project, which already contains a Ranorex Object Repository with repository elements, the modules provided by the NuGet package are automatically available in the module browser. In this case, the "SendNeoLoadTimingValues" option won't be available in the "Dynamic Actions" list for the already existing repository items. Perform the following steps to enable this option:

1. Open the RanoreXPath editor

RanoreXPath Editor

2. Switch to "Browser & Results"

Switch to Browse and Results in Spy

3. Drag and drop the root element from the Ranorex Spy to the matching root element in the Ranorex Object Repository.

Drag and Drop to Repository

Now, the "SendNeoLoadTimingValues" will be available in the Dynamic Actions list for the repository root element that describes the website DOM.

Conclusion

In this blog post, you've learned how you can combine Ranorex and NeoLoad tests. You've seen the modules and variables that are available with this integration, and how you can transmit timing values to a NeoLoad performance test. Now you will be able to validate critical business cases under load conditions, ensure system stability under real usage conditions, and identify potential bottlenecks across technology borders.

Further Resources

Watch the Ranorex-NeoLoad Webinar

The post How to Combine Ranorex and NeoLoad Tests appeared first on Ranorex Blog.

Categories: Companies

Security by Design: The Benefits of Building Quality In

Sonatype Blog - Thu, 10/20/2016 - 09:01
I recently sat down with Pete Erickson, founder of Modev, to discuss the recent findings from our 2016 State of the Software Supply Chain Report.  The conversation is available in the Security by Design podcast series that Pete has produced and made available on iTunes.  

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies
