Feed aggregator

The Future is Here

Agile Testing with Lisa Crispin - Mon, 12/01/2014 - 06:49

Agile Testing Days 2014’s theme was the future of agile testing. What challenges are ahead, and how will we address them? Janet Gregory and I facilitated a workshop with experienced agile practitioners designed to identify some of the biggest issues related to testing and quality, and come up with experiments we can try to help overcome those challenges.

For me, it was exciting that we could get a room full of people who truly had lots of experience with testing on agile teams. We had a diverse mix of testers, programmers, managers, coaches, and people who multi-task among multiple roles, willing to share their experiences and collaborate to generate new ideas. In fact, many of the participants would be good coaches and facilitators for agile testing workshops themselves! More teams are succeeding in delivering business value frequently at a sustainable pace (to paraphrase Elisabeth Hendrickson). Testing and testers are a part of this success.

However, we all still face plenty of problems. During our first exercise, each participant wrote down the biggest obstacles to testing and quality that their teams face. We used an affinity diagram to identify the top three:

  • Whole team testing: how do we get all roles on a team to collaborate on testing activities, and how does testing “get respect” across the organization?
  • The “ketchup effect”: like getting ketchup out of a bottle, we try and try to deliver software features a little at a time, only to have them come gushing out at the end and make a big mess!
  • Agile testing mindset – how do we change testers’ mindsets? How do we spread this mindset of building quality in, testing early and often, across the organization?

We used several different brainstorming techniques to come up with experiments to work on these challenges: impact mapping, brain writing, and diagramming on a whiteboard (everyone chose mind mapping for this). You can see the results of some of this in the photos. Then we used a different technique to think about other challenges identified, such as how to build testing skill sets, building the right thing, and the tester’s role in continuous delivery.

Building Skill Sets

This last technique was the “giveaway” (to borrow a term from Alex Schwarz and Fanny Pittack) I was happiest to take from the workshop. Janet and I gave general instructions, but the participants self-organized. Each table group took a topic to start with and mind mapped ideas about that topic. Some teams supplemented their mind maps by drawing pictures. Then the magic happened – after a time period, the groups rotated so each was working on another group’s mind map and adding their own ideas. They rotated once more so that each group worked on each mind map.

You can see from the pictures how many ideas came out of this. Like brain writing, it is amazing that you can write down all the ideas you think you have, then, seeing someone else’s ideas, you can think of even more. I encourage you to take a look at these mind maps, and choose some ideas for your own team’s small experiments. Even more importantly, I urge you to try a brainstorming exercise such as the group mind mapping, rotating among topics, and see the power of your collective experience and skill sets!

Cube-shaped tester

As we rotated among the different topics drawing on mind maps, one participant, Marcelo Leite (@marcelo__leite on Twitter), made a note on the skills mind map about “cube-shaped testers”. Janet and I talk a lot about T-shaped testers and square-shaped teams, concepts we learned from Rob Lambert and Adam Knight. We asked Marcelo to explain the cube-shaped idea. As with a Rubik’s Cube, we have different “colors” of skills, and we can twist them around to form different combinations. This way we can continually adapt to new and unique situations. A broad mix of skills lets us take on any future challenge.

I’m out here now working on my cube-shaped skills. How about you? I’d love to hear about your own learning journey towards the future of agile testing.

You can take a look at the slides for our workshop, and email me if you’d like the resources list we handed out. Also do check out the slides from our keynote, which sadly the audience didn’t get to see as the projector malfunctioned.

More blogs about #AgileTD:

I know the Agile Testing Days organizers will post a list of all blog posts about the conference, but here are some I made note of (and I still haven’t read them all!) I’m sure I missed some, so please ping me with additional links if you have ‘em.


  • (be sure to go back from here and read all of Pete’s blogs including his live blogs from AgileTD)

The post The Future is Here appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Protractor: Angular testing made easy

Google Testing Blog - Sun, 11/30/2014 - 19:50
By Hank Duan, Julie Ralph, and Arif Sukoco in Seattle

Have you worked with WebDriver but been frustrated with all the waits needed for WebDriver to sync with the website, causing flakes and prolonged test times? If you are working with AngularJS apps, then Protractor is the right tool for you.

Protractor is an end-to-end test framework specifically for AngularJS apps. It was built by a team in Google and released to open source. Protractor is built on top of WebDriverJS and includes important improvements tailored for AngularJS apps. Here are some of Protractor’s key benefits:

  • You don’t need to add waits or sleeps to your test. Protractor can communicate with your AngularJS app automatically and execute the next step in your test the moment the webpage finishes pending tasks, so you don’t have to worry about waiting for your test and webpage to sync. 
  • It supports Angular-specific locator strategies (e.g., binding, model, repeater) as well as native WebDriver locator strategies (e.g., ID, CSS selector, XPath). This allows you to test Angular-specific elements without any setup effort on your part. 
  • It is easy to set up page objects. Protractor does not execute WebDriver commands until an action is needed (e.g., get, sendKeys, click). This way you can set up page objects so tests can manipulate page elements without touching the HTML. 
  • It uses Jasmine, the framework you use to write AngularJS unit tests, and JavaScript, the same language you use to write AngularJS apps.

Follow these simple steps, and in minutes, you will have your first Protractor test running:

1) Set up environment

Install the command line tools ‘protractor’ and ‘webdriver-manager’ using npm:
npm install -g protractor

Start up an instance of a selenium server:
webdriver-manager update
webdriver-manager start

This downloads the necessary binary, and starts a new webdriver session listening on http://localhost:4444.

2) Write your test
// It is a good idea to use page objects to modularize your testing logic
var angularHomepage = {
    nameInput : element(by.model('yourName')),
    greeting : element(by.binding('yourName')),
    get : function() {
        browser.get('http://www.angularjs.org');
    },
    setName : function(name) {
        this.nameInput.sendKeys(name);
    }
};

// Here we are using the Jasmine test framework
describe('angularjs homepage', function() {
    it('should greet the named user', function(){
        angularHomepage.get();
        angularHomepage.setName('Julie');
        expect(angularHomepage.greeting.getText()).
            toEqual('Hello Julie!');
    });
});

3) Write a Protractor configuration file to specify the environment under which you want your test to run:
exports.config = {
    seleniumAddress: 'http://localhost:4444/wd/hub',

    specs: ['testFolder/*'],

    multiCapabilities: [{
        'browserName': 'chrome',
        // browser-specific tests
        specs: 'chromeTests/*'
    }, {
        'browserName': 'firefox',
        // run tests in parallel
        shardTestFiles: true
    }],

    baseUrl: ''
};

4) Run the test:

Start the test with the command:
protractor conf.js

The test output should be:
1 test, 1 assertion, 0 failures

If you want to learn more, here’s a full tutorial that highlights all of Protractor’s features:

Categories: Blogs

AutoMapper 3.3 released

Jimmy Bogard - Sat, 11/29/2014 - 18:40

View the release notes:

AutoMapper 3.3 Release Notes

And download it from NuGet. Some highlights in the release include:

  • Open generic support
  • Explicit LINQ expansion
  • Custom constructors for LINQ projection
  • Custom type converter support for LINQ projection
  • Parameterized LINQ queries
  • Configurable member visibility
  • Word/character replacement in member matching

In this release, I added documentation for every new feature (linked in the release notes) and for pertinent improvements.

This will likely be the last 3.x release, as for the next release I’ll be focusing on refactoring for custom convention support, plus supporting the new .NET core runtime (and therefore support on Mac/Linux in addition to the 6 existing runtimes I support).

Happy mapping!

Categories: Blogs

5 tips to avoid flaky tests and build a reliable continuous integration test suite

BugBuster - Fri, 11/28/2014 - 13:00

One of the most important things when running automated tests is to make sure the results of those tests are reliable and consistent (read: deterministic). This is especially true when your tests are part of a Continuous Integration system and are run automatically to verify each build. There is nothing worse than a test that passes sometimes and fails at other times without any new bugs being introduced. These are what are known as “flaky tests”. Excessive flakiness introduces noise and can lead to your company discarding the results of the tests altogether.

Here at BugBuster, we work with all sorts of companies to help them build automated tests and set up proper continuous integration. We’ve compiled a few tips and guidelines that you can use to prevent creating flaky tests and maintain a healthy and reliable automated testing system.

BugBuster test case successful

A BugBuster test case that successfully passes (every time).

1. If a test is flaky, either fix it right away or put it into quarantine.

There is nothing worse than tests that “cry wolf.” If you can’t fix a flaky test immediately, then simply put it aside and remove it from the scheduled continuous integration test runs. You can then come back to this test and fix it later. Flaky tests are often signs of bigger testing problems, but for the short term, you’ll be much better off without these false alarms going off randomly.

2. Control your environment: use a CI and control your deployment process

A common source of flakiness is the environment changing independently of your test. For example, if the database changes between two runs of a test (sometimes because the test itself created or altered data), unexpected conditions may appear that may change the outcome of the test. By automating your deployment process as part of a continuous integration system with a tool like Jenkins, you can deploy your application and reset your database to a known snapshot easily, ensuring that your tests always run in the same environment.

3. Expect your application to behave non-deterministically

Your tests must be deterministic – your application is not. Most web applications employ some sort of asynchronous mechanism, such as AJAX. Your test must expect these behaviors and deal with them accordingly. If your app is waiting for data from a remote server with AJAX, then your test must be waiting as well. This can be done by programming waits explicitly in your test (see Selenium/WebDriver’s explicit wait) or you can use BugBuster to deal with these issues transparently.
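The explicit-wait idea can be sketched in a few lines of plain JavaScript. Note that `waitFor` below is a hypothetical helper for illustration, not a BugBuster or Selenium API: it polls a condition until the condition becomes truthy or a deadline passes, instead of sleeping a fixed (and therefore flaky) amount of time.

```javascript
// Minimal sketch of an explicit wait (hypothetical helper, not a framework API):
// poll a condition until it returns a truthy value, or fail after a timeout.
function waitFor(condition, timeoutMs = 5000, intervalMs = 100) {
  const deadline = Date.now() + timeoutMs;
  return new Promise((resolve, reject) => {
    (function poll() {
      const value = condition();
      if (value) {
        resolve(value);
      } else if (Date.now() >= deadline) {
        reject(new Error('Condition not met within ' + timeoutMs + 'ms'));
      } else {
        setTimeout(poll, intervalMs);
      }
    })();
  });
}

// Example: the "AJAX response" arrives asynchronously; the test waits for it
// instead of sleeping a guessed amount of time.
let response = null;
setTimeout(() => { response = { balance: 42 }; }, 50);

waitFor(() => response, 2000, 10)
  .then(r => console.log('balance:', r.balance));
```

Selenium’s WebDriverJS exposes the same pattern through `driver.wait(...)`; the point in both cases is that the wait is tied to an observable condition, not to a guessed duration.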

4. Write meaningful error messages

Your tests are part of the documentation of your application. If a test fails with a message like: “The result should be correct,” it really doesn’t help you to debug the issue. Why is the result not correct? What result is the test expecting? And what is “result” anyway? If, on the other hand, the error message had been: “The balance of the account should be positive”, then the error is instantly obvious. Writing meaningful error messages is a quick and easy win for improving your tests!

5. Write short and focused tests

A test should always focus on testing just one feature. While it may be tempting to write longer tests that go through multiple features of your applications, it’s just not a good idea. Dependencies between the features with so many possible conditions make it very hard to ensure that your test handles all possible cases correctly. Writing short and focused tests will ensure that your tests are reliable and efficient.

The post 5 tips to avoid flaky tests and build a reliable continuous integration test suite appeared first on BugBuster.

Categories: Companies

Black Friday / Cyber Monday 2014 Web and Mobile Performance Live Blog

Update December 1st, 2014 at 5:05PM CyberMonday Wrap Up Wrapping up Cyber Monday with a look at the Top Retailers who prepared the most for the onslaught of mobile traffic this Cyber Monday. Sears, Costco, Office Depot, REI, Saks and NewEgg all understood that servicing their customers on mobile devices required a different set of […]

The post Black Friday / Cyber Monday 2014 Web and Mobile Performance Live Blog appeared first on Dynatrace APM Blog.

Categories: Companies

Lots of test strategy

Thoughts from The Test Eye - Thu, 11/27/2014 - 20:09

I have been doing a lot of work on test strategy over the last year. Some talks, a EuroSTAR tutorial, blog entries, a small book in Swedish, teaching at higher vocational studies, and of course test strategies in real projects.

The definition I use is more concrete than many others. I want a strategy for a project, not for a department or the future. And I must say it works. It becomes clearer to people what we are up to, and discussions get better. The conversations about it might be most important, but I also like to write a test strategy document; it clears up my thinking and gives reviewers something to go back to.

Yesterday in the EuroSTAR TestLab I created a test strategy together with other attendees, using James Bach’s Heuristic Test Strategy Model as a mental tool. The documented strategy can be downloaded here, and the content might be less interesting than the format. In this case, I used explicit chapters for Testing Missions, Information Sources and Quality Objectives because I felt those would be easiest for the product manager and developer to comment on.

I have also put my EuroSTAR Test Strategy slides on the Publications page.

Happy strategizing!

Categories: Blogs

Happy Thanksgiving from the Performance Product Team!

HP LoadRunner and Performance Center Blog - Thu, 11/27/2014 - 18:40

Hi, dear performance engineer,

Just want to say a big thanks for using our products!

To the countries that celebrate Thanksgiving: HAPPY THANKSGIVING! Enjoy Black Friday tomorrow and share your stories below.


Silvia Siqueira & HP Product team.


Categories: Companies

Software Tester, Project People Ltd, Dublin, Ireland

Software Testing Magazine - Thu, 11/27/2014 - 18:28
Experienced web tester required, in an e-commerce environment. Partner directly with the Product Manager and web production developers to provide QA coverage for product releases. At least 4 years of QA experience and 2 years of web testing experience. Demonstrable technical skills and work experience with SQL. Proven ability to develop Test Plans, Conditions, Scenarios, Scripts and Measurements. A strong understanding of (and preferably previous experience in) web testing and the e-commerce space. To get more information and to apply, visit
Categories: Communities

Example of a transform for unit testing something tricky

Rico Mariani's Performance Tidbits - Thu, 11/27/2014 - 12:02

There were some requests for an example of my unit testing strategy, so I made up this fragment and included some things that would make your testing annoying.

This is the initial fragment.  Note that it uses annoying global methods that complicate testing as well as global state and system calls that have challenging failure conditions.


HANDLE hMutex = NULL;   // global state that complicates testing

void DoWhatever(HWND hwnd)
{
    if (hMutex == NULL)
    {
        hMutex = ::CreateMutex(NULL, FALSE, L"TestSharedMutex");

        if (hMutex == NULL)
            return;
    }

    DWORD dwWaitResult = WaitForSingleObject(hMutex, 1000);

    BOOL fRelease = FALSE;

    switch (dwWaitResult)
    {
        case WAIT_OBJECT_0:
        {
            LPWSTR result = L"Some complicated result";
            ::MessageBox(hwnd, result, L"Report", MB_OK);
            fRelease = TRUE;
            break;
        }

        case WAIT_ABANDONED:
            ::MessageBox(hwnd, L"Mutex acquired via abandon", L"Report", MB_OK);
            fRelease = TRUE;
            break;

        case WAIT_FAILED:
            ::MessageBox(hwnd, L"Mutex became invalid", L"Report", MB_OK);
            fRelease = FALSE;
            break;

        case WAIT_TIMEOUT:
            ::MessageBox(hwnd, L"Mutex acquisition timeout", L"Report", MB_OK);
            fRelease = FALSE;
            break;
    }

    if (fRelease)
        ::ReleaseMutex(hMutex);
}

Now here is basically the same code after the transform I described in my last posting.  I've added a template parameter to deal with the globals and I've even made it so that the system type HWND can be changed to something simple so you don't need windows.h

template <class T, class _HWND> void DoWhateverHelper(_HWND hwnd)
{
    if (T::hMutex == NULL)
    {
        T::hMutex = T::CreateMutex(NULL, FALSE, L"TestSharedMutex");

        if (T::hMutex == NULL)
            return;
    }

    DWORD dwWaitResult = T::WaitForSingleObject(T::hMutex, 1000);

    BOOL fRelease = FALSE;

    switch (dwWaitResult)
    {
        case WAIT_OBJECT_0:
        {
            LPWSTR result = L"Some complicated result";
            T::MessageBox(hwnd, result, L"Report", MB_OK);
            fRelease = TRUE;
            break;
        }

        case WAIT_ABANDONED:
            T::MessageBox(hwnd, L"Mutex acquired via abandon", L"Report", MB_OK);
            fRelease = TRUE;
            break;

        case WAIT_FAILED:
            T::MessageBox(hwnd, L"Mutex became invalid", L"Report", MB_OK);
            fRelease = FALSE;
            break;

        case WAIT_TIMEOUT:
            T::MessageBox(hwnd, L"Mutex acquisition timeout", L"Report", MB_OK);
            fRelease = FALSE;
            break;
    }

    if (fRelease)
        T::ReleaseMutex(T::hMutex);
}

Now we make this binding struct that can be used to make the template class to do what it always did.

struct Normal
{
    static HANDLE CreateMutex(LPSECURITY_ATTRIBUTES pv, BOOL fOwn, LPCWSTR args)
    {
        return ::CreateMutex(pv, fOwn, args);
    }

    static void ReleaseMutex(HANDLE handle)
    {
        ::ReleaseMutex(handle);
    }

    static void MessageBox(HWND hwnd, LPCWSTR msg, LPCWSTR caption, UINT type)
    {
        ::MessageBox(hwnd, msg, caption, type);
    }

    static DWORD WaitForSingleObject(HANDLE handle, DWORD timeout)
    {
        return ::WaitForSingleObject(handle, timeout);
    }

    static HANDLE hMutex;
};

HANDLE Normal::hMutex;

This code now does exactly the same as the original.

void DoWhatever(HWND hwnd)
{
    DoWhateverHelper<Normal, HWND>(hwnd);
}

And now I include this very cheesy Mock version of the template which shows where you could put your test hooks.  Note that the OS types HWND and HANDLE are no longer present.  This code is OS neutral.   LPSECURITY_ATTRIBUTES could have been abstracted as well but I left it in because I'm lazy.  Note that HANDLE and HWND are now just int.  This mock could have as many validation hooks as you like.

struct Mock
{
    static int CreateMutex(LPSECURITY_ATTRIBUTES pv, BOOL fOwn, LPCWSTR args)
    {
        // validate args
        return 1;
    }

    static void ReleaseMutex(int handle)
    {
        // validate that the handle is correct
        // validate that we should be releasing it in this test case
    }

    static void MessageBox(int hwnd, LPCWSTR msg, LPCWSTR caption, UINT type)
    {
        // note the message and validate its correctness
    }

    static DWORD WaitForSingleObject(int handle, DWORD timeout)
    {
        // return whatever case you want to test
        return WAIT_TIMEOUT;
    }

    static int hMutex;
};

int Mock::hMutex;


In your test code you include calls that look like this to run your tests.  You could easily put this into whatever unit test framework you have.

void DoWhateverMock(int hwnd)
{
    DoWhateverHelper<Mock, int>(hwnd);
}

And that's it.

It wouldn't have been much different if we had used an abstract class instead of a template to do the job.  That can be easier/better, especially if the additional virtual call isn't going to cost you much.

We've boiled away as many types as we wanted to and we kept the heart of the algorithm so the unit testing is still valid.

Categories: Blogs

Debugging HTTP

Testing TV - Thu, 11/27/2014 - 11:52
In this world where we have moved beyond web pages and build ever-more asynchronous applications, often things that go wrong result in errors we can’t see. This session will give a very technical overview of HTTP and how to inspect your application’s communications, whether on the web or on a mobile device. Using Curl, Wireshark […]
Categories: Blogs

Test, Transform and Refactor

Software Testing Magazine - Thu, 11/27/2014 - 11:36
Let’s have a close look at the Red-Green-Refactor cycle and understand the subtleties of each step. When we go down the rabbit hole of Test Driven Design (TDD), we sometimes take steps that are too big, leading to many failed tests we just can’t bring back to green without writing a lot of code. We need to take a step back and take the shrinking potion of baby steps again. This presentation, full of test and code examples, will dig into each of the steps of TDD to help you understand how ...
Categories: Communities

Ranorex 5.2.1 Released

Ranorex - Thu, 11/27/2014 - 11:00
We are proud to announce that Ranorex 5.2.1 has been released and is now available for download. General changes/Features
  • Added support for Firefox 34
  • Added an overload to the RepoItemInfo.Exists method taking a timeout value which overrides the effective timeout of the repository item for that call
  • Extended the RanoreXPath Weight Rules editor to allow copy & paste of multiple rules (for import/export)
Please check out the release notes for more details about the changes in this release.

Download latest Ranorex version here.
(You can find a direct download link for the latest Ranorex version on the Ranorex Studio start page.) 

Categories: Companies


Advanced script enhancements in LoadRunner’s new TruClient – Native Mobile protocol

HP LoadRunner and Performance Center Blog - Thu, 11/27/2014 - 10:20

In my previous blog post I introduced LoadRunner’s new TruClient – Native Mobile protocol. In this post I’ll explain some advanced script enhancements. We’ll cover object identification parameterization, adding special device steps, and overcoming record-and-replay problems with ‘Analog Mode’. This post will be followed by the final post in this series on the TruClient – Native Mobile protocol, which will focus on debugging using the extended log, running a script on multiple devices, and transaction timings.


(This post was written by Yehuda Sabag from the TruClient R&D Team)

Categories: Companies

Get the latest on Application Performance Engineering at HP Discover Barcelona 2014

HP LoadRunner and Performance Center Blog - Thu, 11/27/2014 - 09:02

I can’t believe it’s already been almost a year since we left the Fira Barcelona in Spain after HP Discover Barcelona 2013. That was one of the best Discover events I have been a part of, and I’ve been to a few over the years. The numbers speak for themselves. Last year’s event was so spectacular that we couldn’t help but return for a 2nd straight year! But as amazing as last year’s conference was, I am even more excited for what’s in store this time around.


The Performance & Lifecycle Virtualization team has been working extremely hard all year to bring you the new version of HP LoadRunner and Performance Center as well as our much anticipated launch of HP StormRunner Load.



Categories: Companies

Do you care about your code? Track code coverage on new code, right now!

Sonar - Thu, 11/27/2014 - 06:40

A few weeks ago, I had a passionate debate with my old friend Nicolas Frankel about the usefulness of the code coverage metric. We started on Twitter, and then Nicolas wrote a blog entry stating that “your code coverage metric is not meaningful”, and therefore useless. Not only do I think exactly the opposite, but I would even say that not tracking code coverage on new code is almost insane nowadays.

For what I know, I haven’t found anything wrong in the code

But before talking about the importance of tracking code coverage on new code and the related paradigm shift, let’s start by mentioning something which is probably one of the root causes of the misalignment with Nicolas: static and dynamic analysis tools will never, ever manage to say “your code is clean, well-designed, bug-free and highly maintainable”. Static and dynamic analysis tools are only able to say “For what I know, I haven’t found anything wrong in the code.” By extension, this is also true for any metric or technique used to understand and analyse the source code. A high level of coverage is not a guarantee of the quality of the product, but a low level is a clear indication of insufficient testing.

For Nicolas, tracking code coverage is useless because in some cases, unit tests written to increase code coverage can be crappy. For instance, unit tests might not contain any assertions, or unit tests might cover all branches but not all possible inputs. To fix those limitations, Nicolas says that the only solution is to do some mutation testing while computing code coverage (see for instance Pitest for Java) to make sure that unit tests are robust. OK, but if you really want to touch the Grail, is it enough? Absolutely not! You can have a code coverage of 100% and some very robust but… fully unmaintainable unit tests. Mutation testing doesn’t provide any way to know, for instance, how “unit” your unit tests are, or whether there is a lot of redundancy between your unit tests.
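A tiny made-up illustration of that second limitation: the test below executes every line and branch of `absDiff` (a hypothetical function, not from either blog) and all its assertions pass, yet a bug survives because no input exposes it.

```javascript
// Hypothetical example: 100% line and branch coverage, yet a bug survives.
function absDiff(a, b) {
  if (a > b) return a - b;
  return a - b; // bug: should be b - a
}

// This "test" covers both branches and passes...
console.assert(absDiff(5, 3) === 2); // takes the if branch
console.assert(absDiff(3, 3) === 0); // takes the else branch; the bug is
                                     // invisible here because b - a === a - b

// ...but it never tries an input such as absDiff(2, 5), where the bug shows.
```

Mutation testing catches exactly this: mutating the buggy `a - b` into `b - a` leaves every assertion green, flagging the test suite as too weak despite its perfect coverage.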

To sum up, when you care about the maintainability, reliability and security of your application, you can and should invest some time and effort to reach higher maturity levels. But if you wait to find the ultimate solution before starting, it will never happen. Moreover, maturity levels should be reached progressively:

  • It doesn’t make any sense to care about code coverage if there isn’t a continuous integration environment
  • It doesn’t make any sense to care about mutation testing if only 5% of the source code is covered by unit tests
  • … etc.

And here I don’t even mention the extra effort involved in the execution of mutation testing and the analysis of the results. But don’t miss my point: mutation testing is a great technique and I encourage you to give a try to Pitest and to the SonarQube Pitest plugin by Alexandre Victoor. I’m just saying that, as a starting point, mutation testing is too advanced a technique.

Developers want to learn

There is a second root cause of misalignment with Nicolas: should we trust that developers have a will to progress? If the answer is NO, we might spend a whole life fighting with them and always making their lives more difficult. Obviously, you’ll always find some reluctant developers, pushing back and not caring at all about the quality and reliability of the source code. But I prefer targeting the vast majority of developers eager to learn and to progress. For that majority of developers, the goal is to always make life more fun instead of making it harder. So, how do you infect your “learning” developers with the desire to unit test?

When you start the development of an application from scratch, unit testing might be quite easy. But when you’re maintaining an application with 100,000 lines of code and only 5% is covered by unit tests, you can quickly feel depressed. And obviously most of us are dealing with legacy code. When you’re starting out so far behind, it can take years to reach a total unit test coverage of 90%. So for those first few years, how are you going to reinforce the practice? How are you going to make sure that in a team of 15 developers, all developers are going to play the same game?

At SonarSource we failed for many years

Indeed, we were stuck with a code coverage of 60% on the platform and were not able to progress. Thankfully, David Gageot joined the team at that time, and things were pretty simple for him: any new piece of code should have a coverage of at least 100% :-). That’s it, and that’s what he did. From there we decided to set up a quality gate with a very simple and powerful criterion: when we release a new version of any product at SonarSource, the code coverage on new or updated code can’t be less than 80%. If it is, the request for release is rejected. That’s it, that’s what we did, and we finally started to fly. One and a half years later, the code coverage on the SonarQube platform is 82%, and 84% on the overall SonarSource products (400,000 lines of code and 20,000 unit tests).

Code coverage on new/changed code is a game changer

And it’s pretty simple to understand why:

  • Whatever your application is, legacy or not, the quality gate is always the same and doesn’t evolve over time: just make the coverage on your new/changed lines of code greater than X%
  • There’s no longer a need to look at the global code coverage and legacy Technical Debt. Just forget it and stop feeling depressed!
  • As each year X% of your overall code evolves (at Google for example, each year 50% of the code evolves), having coverage on changed code means that even without paying attention to the overall code coverage, it will increase quickly just “as a side effect”.
  • If one part of the application is not covered at all by unit tests but has not evolved during the past 3 years, why should you invest the effort to increase the maintainability of this piece of code? It doesn’t make sense. With this approach, you’ll start taking care of it if and only if one day some functional changes need to be done. In other words, the cost to bootstrap this process is low. There’s no need to stop the line and make the entire team work for X months just to reimburse the old Technical Debt.
  • New developers don’t have any choice other than playing the game from day 1 because if they start injecting some uncovered piece of code, the feedback loop is just a matter of hours, and anyway their new code will never go into production.
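The gate itself is simple enough to sketch. The JavaScript below is purely illustrative (the function names and data shapes are made up, not the SonarQube API): given the set of lines covered by tests and the set of lines added or changed since the reference version, compare the ratio against the threshold.

```javascript
// Illustrative sketch of a "coverage on new code" quality gate.
// coveredLines: line numbers executed by tests; changedLines: lines added or
// modified since the reference version. (Names and shapes are hypothetical.)
function newCodeCoverage(coveredLines, changedLines) {
  if (changedLines.size === 0) return 1; // nothing new to cover
  let covered = 0;
  for (const line of changedLines) {
    if (coveredLines.has(line)) covered += 1;
  }
  return covered / changedLines.size;
}

function passesQualityGate(coveredLines, changedLines, threshold = 0.8) {
  return newCodeCoverage(coveredLines, changedLines) >= threshold;
}

// Example: 4 of the 5 changed lines are covered -> 80%, the release may proceed.
const covered = new Set([10, 11, 12, 13, 20, 21]);
const changed = new Set([10, 11, 12, 13, 30]);
console.log(passesQualityGate(covered, changed)); // true
```

Legacy lines outside the changed set never enter the ratio, which is exactly why the gate stays constant while overall coverage climbs as a side effect.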

This new approach to dealing with Technical Debt is part of the paradigm shift explained in our “Continuous Inspection” white paper. Another blog entry will follow explaining how to easily track any kind of Technical Debt with this approach, not just debt related to a lack of code coverage. And thanks to Nicolas Frankel for keeping this open debate going.

Categories: Open Source

Pulse 2.7 Released

a little madness - Thu, 11/27/2014 - 06:12

I’m dusting off the blog with a bang, announcing that Pulse 2.7 has gone gold! This release brings a broad range of new features and improvements, including:

  • New agent storage management options, including the ability to prevent builds when disk space is low.
  • Configuration system performance improvements.
  • Live logging performance improvements.
  • Xcode command updates, including a new clang output processor.
  • A new plugin for integration of XCTest reports.
  • More flexibility and feedback for manual triggering.
  • New service support, including integration with systemd and upstart.
  • Improved support for git 2.x, especially partial clones.
  • Support for Subversion 1.8.
  • Improved dependency revision handling across multiple SCMs.
  • More convenient actions for cancelling builds.
  • The ability to run post build hooks on agents.

As always we based these improvements on feedback from our customers, and we thank all those that took the time to let us know their priorities.

Pulse 2.7 packages can be downloaded from the downloads page. If you’re an existing customer with an active support contract then this is a free upgrade. If you’re new to Pulse, we also provide free licenses for evaluation, open source projects and small teams!

Categories: Companies

Meet the uTesters: Iwona Pekala

uTest - Wed, 11/26/2014 - 23:24

Iwona Pekala is a gold rated full-time tester on paid projects at uTest, and a uTester for over 3 years. Iwona is also currently serving as a uTest Forums moderator for the second consecutive quarter. She is a fan of computers and technology, and lives in Kraków, Poland.

Be sure to also follow Iwona’s profile on uTest as well so you can stay up to date with her activity in the community!

uTest: Android or iOS?

Iwona: Android. I can customize it in more ways when compared to iOS. Additionally, apps have more abilities, there is a lot of hardware to choose from, and it takes less time to accomplish basic tasks like selecting text or tapping small buttons.

uTest: What drew you into testing initially? What’s kept you at it?

Iwona: I became a tester accidentally. I was looking for a summer internship for computer science students (I was thinking about becoming a programmer). The first offer I got was for the role of tester. I was about to change that, and after some time I did transition to a developer role. It was uTest that kept me a tester, particularly the flexibility of the work and the variety of projects.

uTest: Which areas do you want to improve in as a tester? Which areas of testing do you want to explore?

Iwona: I need to be more patient and increase my attention to detail. When it comes to hard skills, I would like to gain experience in security, usability and automation testing.

uTest: QA professional or tester?

Iwona: I describe myself as a tester, but those are just words, so it doesn’t really matter what you call that role as long as you know what its responsibilities are.

uTest: What’s one trait or quality you seek in a fellow software testing colleague?

Iwona: Flexibility and the skill of coping with grey areas. As a tester, you need to accommodate changing situations, and you hit grey areas on a daily basis. It’s important to use common sense, but still stay in scope.

You can also check out all of the past entries in our Meet the uTesters series.

Categories: Companies

Integrating Ranorex Test Cases into Jira

Ranorex - Wed, 11/26/2014 - 16:07

Jira is an issue and project tracking software from Atlassian. The following article describes how to integrate Ranorex test cases with Jira, empowering Ranorex to submit or modify testing issues within Jira in an automated way.


As Jira offers a REST web service (API description available here), it becomes possible to submit issues automatically. This is achieved using the JiraRestClient and RestSharp.

These libraries are wrapped with Ranorex functionality, forming re-usable modules, available within this library. The integration of these Jira testing modules into Ranorex test automation is described below.
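To give an idea of what the modules send under the hood, here is a minimal sketch of the request body Jira’s REST API expects when creating an issue. The endpoint and field names follow Jira’s REST API v2; the helper function itself is illustrative and not part of the JiraReporter library:

```python
import json

def build_issue_payload(project_key, summary, description,
                        issue_type="Bug", labels=None):
    """Build the JSON body for Jira's REST endpoint POST /rest/api/2/issue."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": issue_type},
            "labels": labels or [],
        }
    }

payload = build_issue_payload("MYP", "Test case failed",
                              "Created automatically from a Ranorex run",
                              labels=["Mobile", "USB"])
print(json.dumps(payload, indent=2))

# This body would then be POSTed (with authentication) to
#   https://<jira-server>/rest/api/2/issue
# using any HTTP client, e.g. RestSharp in the .NET modules.
```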

The following steps need to be done:

Step 1 – Adding the Libraries to Ranorex for Jira Automation

Predefined modules (for x86 architecture and .NET 3.5) are available here. The assemblies in this zip file simply need to be added to the Ranorex project. Subsequently, the modules (as shown below) will appear in the module browser under “JiraReporter” (demonstrated on the Ranorex KeePass sample):


Step 2 – Using the Modules in the Ranorex Test Suite

Individual modules are available within the “JiraReporter” project. These modules simply need to be used within the Ranorex Test Suite, as shown below:


The modules interact with Jira based on the results of the related test cases. Except for the initialization module, it is recommended to place the modules in the test case’s teardown.

Available modules for Jira automation:

  • InitializeJiraReporter — This module establishes the connection to the Jira server. It is mandatory for the following modules to be functional.
  • AutoCreateNewIssueIfTestCaseFails — If the test case fails, an issue is automatically created on the server, which is defined in “InitializeJiraReporter”. An issue number is automatically created by the server.
    A compressed Ranorex report is uploaded automatically as well.
  • ReOpenExistingIssueIfTestCaseFails — If the test case fails, an existing and already closed issue gets re-opened.
  • ResolveIssueIfTestCaseSuccessful — If the test case is successful, an existing and already open issue is set to “resolved”.
  • UpdateExistingIssueIfTestCaseFails — If a test case fails, attributes of an existing issue are updated.


Step 3 – Configure Parameters for the Modules

The modules expose different variables for configuration. Each module accepts different parameters, but they’re all used in the same way among the modules. Which module accepts which parameters can be seen when using the modules in the Ranorex project.

  • JiraUserName: The username to connect to the Jira server.
  • JiraPassword: The password for the specified user.
  • JiraServerURL: The URL for the Jira server.
  • JiraProjectKey: The project key as specified in Jira (e.g., MYP).
  • JiraIssueType: An issue type, as available in Jira (e.g., Bug).
  • JiraSummary: Some free summary text for the issue.
  • JiraDescription: Some free description text for the issue.
  • JiraLabels: Labels for the issue, separated by “;” (e.g., Mobile; USB; Connection).
  • JiraIssueKey: The key for the respective issue (e.g., MYP-25).
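The “;”-separated JiraLabels value maps to individual Jira labels. A small hypothetical illustration of that parsing (the splitting logic here is an assumption for the example, not code from the JiraReporter modules):

```python
# Hypothetical parsing of the ";"-separated JiraLabels parameter value.
raw_labels = "Mobile; USB; Connection"
labels = [label.strip() for label in raw_labels.split(";") if label.strip()]
print(labels)  # ['Mobile', 'USB', 'Connection']
```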


The configuration of the modules is then done with common Ranorex data binding:


… and you’re done:

From then on, Ranorex will automatically interact with Jira whenever one of the modules is executed. The issues can then be processed in Jira. The following figure shows an automatically created issue together with its attached report:



Advanced usage:

The JiraReporter project offers two more modules: (i) the module “OnDemandCreateNewIssueIfTestCaseFails” and (ii) the module group “ProcessExistingIssue”. These modules offer further convenience functionality and are explained in more detail below.

Module Group – ProcessExistingIssue

This module group groups the following modules in the given order:

  • ReOpenExistingIssueIfTestCaseFails
  • UpdateExistingIssueIfTestCaseFails
  • ResolveIssueIfTestCaseSuccessful

It might be useful to process an existing issue this way, as the group reopens and updates the issue automatically in case of a failure; if the test case is successful, it resolves the issue.
Thus, it can be used to monitor an already known and fixed issue. To use this module group, the whole Ranorex “JiraReporter” project, available on GitHub, needs to be added to the solution.
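The decision logic of the three grouped modules can be sketched as pseudocode (a hypothetical illustration of the behavior described above; the function, field names, and status values are assumptions, not the actual module implementation):

```python
# Hypothetical sketch of what the "ProcessExistingIssue" module group does
# for an already known issue, based on the test case result.
def process_existing_issue(test_passed, issue):
    if not test_passed:
        if issue["status"] == "Closed":
            issue["status"] = "Reopened"   # ReOpenExistingIssueIfTestCaseFails
        issue["updated"] = True            # UpdateExistingIssueIfTestCaseFails
    else:
        issue["status"] = "Resolved"       # ResolveIssueIfTestCaseSuccessful
    return issue

print(process_existing_issue(False, {"status": "Closed"}))
# {'status': 'Reopened', 'updated': True}
```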


Module – OnDemandCreateNewIssueIfTestCaseFails

This module creates a new issue out of the Ranorex report. A new issue is only created when the link provided within the report is clicked, so the user or tester can decide whether an issue is created or not.

The compressed Ranorex report is uploaded to the newly created issue as well.


Note: This functionality relies on a batch file created by Ranorex in the output folder and the execution of the Jira Command Line interface (CLI). It does not depend on a prior initialization from “InitializeJiraReporter”.

The module exposes the same variables as the modules mentioned above. One additional parameter is essential for this module:

  • JiraCLIFileLocation: The full path to the “jira-cli-<version>.jar” file, provided by the Jira CLI.

The following requirements need to be met to use this module:

  • Remote API must be enabled in your JIRA installation
  • The mentioned batch file needs to be accessible via the same file path where it was initially created. If the file is moved to a new location, the link no longer works;
    in this case, the batch file needs to be started manually.


JiraReporter Source Code:

The whole project containing the code for the JiraReporter is available on GitHub under the following link:

Please feel free to modify the code according to individual needs and/or upload new modules.


Categories: Companies

A Faster Android 5 is Coming; Get the Most out of your Android App Performance!

A new version of Android is coming up, Lollipop, and as usual the Android team at Google is promising that it will be faster, backed by the new ART runtime, with performance improvements of up to 100%. I am lucky that I can always take a look at new releases of […]

The post A Faster Android 5 is Coming; Get the Most out of your Android App Performance! appeared first on Dynatrace APM Blog.

Categories: Companies
