
Feed aggregator

5 tips to avoid flaky tests and build a reliable continuous integration test suite

BugBuster - 13 hours 58 min ago

One of the most important things when running automated tests is to make sure the results of these tests are reliable and consistent (read: deterministic). This is especially true when your tests are part of a Continuous Integration system and are run automatically to verify each build. There is nothing worse than a test that sometimes passes and sometimes fails without any new bugs being introduced. These are what are known as “flaky tests”. Excessive flakiness introduces noise and can lead to your company discarding the results of the tests altogether.

Here at BugBuster, we work with all sorts of companies to help them build automated tests and set up proper continuous integration. We’ve compiled a few tips and guidelines that you can use to prevent creating flaky tests and maintain a healthy and reliable automated testing system.

A BugBuster test case that successfully passes (every time).

1. If a test is flaky, either fix it right away or put it into quarantine.

There is nothing worse than tests that “cry wolf.” If you can’t fix a flaky test immediately, then simply put it aside and remove it from the scheduled continuous integration test runs. You can then come back to this test and fix it later. Flaky tests are often signs of bigger testing problems, but for the short term, you’ll be much better off without these false alarms going off randomly.

2. Control your environment: use a CI and control your deployment process

A common source of flakiness is the environment changing independently of your test. For example, if the database changes between two runs of a test (sometimes because the test itself created or altered data), unexpected conditions may appear that may change the outcome of the test. By automating your deployment process as part of a continuous integration system with a tool like Jenkins, you can deploy your application and reset your database to a known snapshot easily, ensuring that your tests always run in the same environment.
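
The same idea of starting from a known state can also be enforced at the level of the tests themselves. Below is a minimal sketch in Java, assuming JUnit 4 and a hypothetical in-memory H2 database standing in for the application's data store; the table name, seed data and JDBC URL are purely illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.junit.Before;
import org.junit.Test;

public class AccountTransferTest {

    // Hypothetical in-memory database standing in for the application's real database.
    private static final String JDBC_URL = "jdbc:h2:mem:shop;DB_CLOSE_DELAY=-1";

    @Before
    public void resetDatabaseToKnownSnapshot() throws Exception {
        // Rebuild the schema and seed data before every test, so each run starts
        // from the same known state no matter what earlier tests created or altered.
        try (Connection conn = DriverManager.getConnection(JDBC_URL);
             Statement stmt = conn.createStatement()) {
            stmt.execute("DROP ALL OBJECTS"); // H2-specific "wipe everything" statement
            stmt.execute("CREATE TABLE accounts(id INT PRIMARY KEY, balance INT)");
            stmt.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)");
        }
    }

    @Test
    public void transferMovesMoneyBetweenAccounts() throws Exception {
        // ... exercise the application against the freshly reset database ...
    }
}

Whether the reset happens in the CI job (redeploy plus database snapshot restore) or in a test fixture like this one, the goal is the same: every run sees exactly the same environment.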

3. Expect your application to behave non-deterministically

Your tests must be deterministic – your application is not. Most web applications employ some sort of asynchronous mechanism, such as AJAX. Your tests must expect these behaviors and deal with them accordingly. If your app is waiting for data from a remote server via AJAX, then your test must wait as well. This can be done by programming waits explicitly in your test (see Selenium/WebDriver’s explicit wait), or you can use BugBuster to deal with these issues transparently.
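
For example, with Selenium/WebDriver in Java, an explicit wait polls for the element that the AJAX call populates instead of asserting on it immediately. A minimal sketch; the page URL and element id are placeholders:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class AjaxWaitExample {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://example.com/account"); // hypothetical page that loads data via AJAX

            // Explicit wait: poll for up to 10 seconds until the element filled by
            // the AJAX response becomes visible, instead of failing immediately.
            WebDriverWait wait = new WebDriverWait(driver, 10);
            WebElement balance = wait.until(
                    ExpectedConditions.visibilityOfElementLocated(By.id("balance")));

            System.out.println("Balance shown: " + balance.getText());
        } finally {
            driver.quit();
        }
    }
}

The synchronization has to live somewhere: either the tool handles it for you, as BugBuster does, or the test spells it out as above.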

4. Write meaningful error messages

Your tests are part of the documentation of your application. If a test fails with a message like “The result should be correct,” it really doesn’t help you debug the issue. Why is the result not correct? What result is the test expecting? And what is “result” anyway? If, on the other hand, the error message had been “The balance of the account should be positive,” the error would be instantly obvious. Writing meaningful error messages is a quick and easy win for improving your tests!
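
A small JUnit illustration of the difference; the withdraw method is just a stand-in for whatever your test exercises:

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AccountBalanceTest {

    @Test
    public void balanceStaysPositiveAfterWithdrawal() {
        int balance = withdraw(100, 30); // hypothetical method under test

        // Weak:   assertTrue(balance > 0);  -> fails with no context at all.
        // Better: the message states the expectation and the actual value.
        assertTrue("The balance of the account should be positive, but was " + balance,
                balance > 0);
    }

    private int withdraw(int balance, int amount) {
        return balance - amount;
    }
}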

5. Write short and focused tests

A test should always focus on testing just one feature. While it may be tempting to write longer tests that go through multiple features of your application, it’s just not a good idea. Dependencies between features, with so many possible conditions, make it very hard to ensure that your test handles all cases correctly. Writing short and focused tests will ensure that your tests are reliable and efficient.
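
For example, rather than one long test that walks through login, search, cart and payment in a single scenario, a focused suite keeps each feature in its own small test. A sketch, with a toy Cart class only to keep the example self-contained:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CheckoutTest {

    // One test per feature: when something breaks, the failing test name
    // already tells you which behavior regressed.

    @Test
    public void addingAnItemIncreasesCartSize() {
        Cart cart = new Cart();
        cart.add("book");
        assertEquals("Cart should contain exactly the one added item", 1, cart.size());
    }

    @Test
    public void emptyCartHasZeroTotal() {
        assertEquals("An empty cart should cost nothing", 0, new Cart().total());
    }

    // Minimal hypothetical class under test, just so the sketch compiles on its own.
    static class Cart {
        private final java.util.List<String> items = new java.util.ArrayList<String>();
        void add(String item) { items.add(item); }
        int size() { return items.size(); }
        int total() { return items.size() * 10; }
    }
}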

The post 5 tips to avoid flaky tests and build a reliable continuous integration test suite appeared first on BugBuster.

Categories: Companies

Black Friday / Cyber Monday 2014 Web and Mobile Performance Live Blog

Update November 28th, 2014 at 3:00pm Time to have a look at how the third parties are doing this Black Friday afternoon. As of 3:00PM EST, Outage Analyzer is seeing 8 open outages, 7 of which are impacting US end users. It appears that ev.ib-ibi.com and configusa.veinteractive.com are having some issues this afternoon. In this case it […]

The post Black Friday / Cyber Monday 2014 Web and Mobile Performance Live Blog appeared first on Dynatrace APM Blog.

Categories: Companies

Lots of test strategy

Thoughts from The Test Eye - Thu, 11/27/2014 - 20:09
Skills

I have been doing a lot on test strategy over the last year: some talks, a EuroSTAR tutorial, blog entries, a small book in Swedish, teaching at higher vocational studies, and of course test strategies in real projects.

The definition I use is more concrete than many others: I want a strategy for a project, not for a department or for the future. And I must say it works. It becomes clearer to people what we are up to, and discussions get better. The conversations about it might be the most important part, but I also like to write a test strategy document; it clears up my thinking and gives other reviewers something to go back to.

Yesterday in the EuroSTAR TestLab I created a test strategy together with other attendees, using James Bach’s Heuristic Test Strategy Model as a mental tool. The documented strategy can be downloaded here, and the content might be less interesting than the format. In this case, I used explicit chapters for Testing Missions, Information Sources and Quality Objectives because I felt those would be easiest for the product manager and developer to comment on.

I have also put my EuroSTAR Test Strategy slides on the Publications page.

Happy strategizing!

Categories: Blogs

Happy Thanksgiving from the Performance Product Team!

HP LoadRunner and Performance Center Blog - Thu, 11/27/2014 - 18:40

Hi dear performance engineer,

We just want to say a big thanks for using our products!

For the countries that celebrate Thanksgiving: HAPPY THANKSGIVING! Enjoy Black Friday tomorrow and share your stories below.

Silvia Siqueira & the HP Product team

Categories: Companies

Software Tester, Project People Ltd, Dublin, Ireland

Software Testing Magazine - Thu, 11/27/2014 - 18:28
Experienced Web tester required, in an E-commerce environment. Partner directly with the Product Manager and Web Production developers to provide QA coverage for product releases. At least 4 years of QA experience and 2 years of web testing experience. Demonstrable technical skills and work experience with SQL. Proven ability to develop Test Plans, Conditions, Scenarios, Scripts and Measurements. A strong understanding of (and preferably previous experience in) web testing and the E-commerce space. To get more information and to apply, visit http://www.softdevjobs.com/job/software-tester-dublin-dublin-ireland-project-people-ltd-cb1febeb82/
Categories: Communities

Example of a transform for unit testing something tricky

Rico Mariani's Performance Tidbits - Thu, 11/27/2014 - 12:02

There were some requests for an example of my unit testing strategy, so I made up this fragment and included some things that would make your testing annoying.

This is the initial fragment.  Note that it uses annoying global methods that complicate testing as well as global state and system calls that have challenging failure conditions.

HANDLE hMutex = NULL;

void DoWhatever(HWND hwnd)
{
    if (hMutex == NULL)
    {
        hMutex = ::CreateMutex(NULL, FALSE, L"TestSharedMutex");

        if (hMutex == NULL)
            return;
    }

    DWORD dwWaitResult = WaitForSingleObject(hMutex, 1000);

    BOOL fRelease = FALSE;

    switch (dwWaitResult)
    {
        case WAIT_OBJECT_0:
            {
            LPWSTR result = L"Some complicated result";
            ::MessageBox(hwnd, result, L"Report", MB_OK);
            fRelease = TRUE;
            break;
            }

        case WAIT_ABANDONED:
            ::MessageBox(hwnd, L"Mutex acquired via abandon", L"Report", MB_OK);
            fRelease = TRUE;
            break;

        case WAIT_FAILED:
            ::MessageBox(hwnd, L"Mutex became invalid", L"Report", MB_OK);
            fRelease = FALSE;
            break;

        case WAIT_TIMEOUT:
            ::MessageBox(hwnd, L"Mutex acquisition timeout", L"Report", MB_OK);
            fRelease = FALSE;
            break;
    }

    if (fRelease)
    {
        ::ReleaseMutex(hMutex);
    }
}

 

Now here is basically the same code after the transform I described in my last posting. I've added a template parameter to deal with the globals, and I've even made it so that the system type HWND can be changed to something simple so you don't need windows.h.

template <class T, class _HWND> void DoWhateverHelper(_HWND hwnd)
{
    if (T::hMutex == NULL)
    {
        T::hMutex = T::CreateMutex(NULL, FALSE, L"TestSharedMutex");

        if (T::hMutex == NULL)
            return;
    }

    DWORD dwWaitResult = T::WaitForSingleObject(T::hMutex, 1000);

    BOOL fRelease = FALSE;

    switch (dwWaitResult)
    {
        case WAIT_OBJECT_0:
            {
            LPWSTR result = L"Some complicated result";
            T::MessageBox(hwnd, result, L"Report", MB_OK);
            fRelease = TRUE;
            break;
            }

        case WAIT_ABANDONED:
            T::MessageBox(hwnd, L"Mutex acquired via abandon", L"Report", MB_OK);
            fRelease = TRUE;
            break;

        case WAIT_FAILED:
            T::MessageBox(hwnd, L"Mutex became invalid", L"Report", MB_OK);
            fRelease = FALSE;
            break;

        case WAIT_TIMEOUT:
            T::MessageBox(hwnd, L"Mutex acquisition timeout", L"Report", MB_OK);
            fRelease = FALSE;
            break;
    }

    if (fRelease)
    {
        T::ReleaseMutex(T::hMutex);
    }
}

Now we make this binding struct that can be used to make the template class do what it always did.

struct Normal
{
    static HANDLE CreateMutex(LPSECURITY_ATTRIBUTES pv, BOOL fOwn, LPCWSTR args)
    {
        return ::CreateMutex(pv, fOwn, args);
    }

    static void ReleaseMutex(HANDLE handle)
    {
        ::ReleaseMutex(handle);
    }

    static void MessageBox(HWND hwnd, LPCWSTR msg, LPCWSTR caption, UINT type)
    {
        ::MessageBox(hwnd, msg, caption, type);
    }

    static DWORD WaitForSingleObject(HANDLE handle, DWORD timeout)
    {
        return ::WaitForSingleObject(handle, timeout);
    }

    static HANDLE hMutex;
};

HANDLE Normal::hMutex;

This code now does exactly the same as the original.

void DoWhatever(HWND hwnd)
{
    DoWhateverHelper<Normal, HWND>(hwnd);
}

And now I include this very cheesy Mock version of the template which shows where you could put your test hooks.  Note that the OS types HWND and HANDLE are no longer present.  This code is OS neutral.   LPSECURITY_ATTRIBUTES could have been abstracted as well but I left it in because I'm lazy.  Note that HANDLE and HWND are now just int.  This mock could have as many validation hooks as you like.

struct Mock
{
    static int CreateMutex(LPSECURITY_ATTRIBUTES pv, BOOL fOwn, LPCWSTR args)
    {
        // validate args
        return 1;
    }

    static void ReleaseMutex(int handle)
    {
        // validate that the handle is correct
        // validate that we should be releasing it in this test case
    }

    static void MessageBox(int hwnd, LPCWSTR msg, LPCWSTR caption, UINT type)
    {
        // note the message and validate its correctness
    }

    static DWORD WaitForSingleObject(int handle, DWORD timeout)
    {
        // return whatever case you want to test
        return WAIT_TIMEOUT;
    }

    static int hMutex;
};

int Mock::hMutex;

 

In your test code you include calls that look like this to run your tests.  You could easily put this into whatever unit test framework you have.

void DoWhateverMock(int hwnd)
{
    DoWhateverHelper<Mock, int>(hwnd);
}

And that's it.

It wouldn't have been much different if we had used an abstract class instead of a template to do the job.  That can be easier/better, especially if the additional virtual call isn't going to cost you much.

We've boiled away as many types as we wanted to and we kept the heart of the algorithm so the unit testing is still valid.
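
As an aside on the abstract-class alternative mentioned above: in a language without templates you would express the same seam as an interface and hand the mock in through the constructor. A rough Java sketch of that flavour, purely for illustration (this is not the original code, and the int-as-handle convention is made up):

// The seam as an interface: production code gets a real implementation, tests get a mock.
interface SyncApi {
    int  createMutex(String name);                   // 0 means failure (made-up convention)
    int  waitForSingleObject(int handle, int timeoutMs);
    void messageBox(String message);
    void releaseMutex(int handle);
}

class Worker {
    static final int WAIT_OBJECT_0 = 0, WAIT_ABANDONED = 1, WAIT_TIMEOUT = 2, WAIT_FAILED = 3;

    private final SyncApi api;
    private int mutex; // 0 = not created yet

    Worker(SyncApi api) { this.api = api; }

    void doWhatever() {
        if (mutex == 0) {
            mutex = api.createMutex("TestSharedMutex");
            if (mutex == 0) return;
        }
        int result = api.waitForSingleObject(mutex, 1000);
        boolean release = false;
        switch (result) {
            case WAIT_OBJECT_0:  api.messageBox("Some complicated result");    release = true; break;
            case WAIT_ABANDONED: api.messageBox("Mutex acquired via abandon"); release = true; break;
            case WAIT_FAILED:    api.messageBox("Mutex became invalid");       break;
            case WAIT_TIMEOUT:   api.messageBox("Mutex acquisition timeout");  break;
        }
        if (release) api.releaseMutex(mutex);
    }

    // Minimal mock wired up in main() so the sketch runs as-is and exercises the timeout path.
    public static void main(String[] args) {
        SyncApi mock = new SyncApi() {
            public int  createMutex(String name)                     { return 1; }
            public int  waitForSingleObject(int handle, int timeout) { return WAIT_TIMEOUT; }
            public void messageBox(String message)                   { System.out.println("MessageBox: " + message); }
            public void releaseMutex(int handle)                     { throw new AssertionError("must not release on timeout"); }
        };
        new Worker(mock).doWhatever(); // prints "MessageBox: Mutex acquisition timeout"
    }
}

The trade-off is the one noted above: a virtual call per operation instead of a compile-time binding, in exchange for a seam that is a little more conventional.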

Categories: Blogs

Debugging HTTP

Testing TV - Thu, 11/27/2014 - 11:52
In this world where we have moved beyond web pages and build ever-more asynchronous applications, often things that go wrong result in errors we can’t see. This session will give a very technical overview of HTTP and how to inspect your application’s communications, whether on the web or on a mobile device. Using Curl, Wireshark […]
Categories: Blogs

Test, Transform and Refactor

Software Testing Magazine - Thu, 11/27/2014 - 11:36
Let’s have a close look at the Red-Green-Refactor cycle and understand the subtleties of each step. When we go down the rabbit hole of Test Driven Design (TDD), we sometimes take steps that are too big, leading us to many failed tests that we just can’t bring back to green without writing a lot of code. We need to take a step back and take the shrinking potion of baby steps again. This presentation, full of test and code examples, will dig into each of the steps of TDD to help you understand how ...
Categories: Communities

Advanced script enhancements in LoadRunner’s new TruClient – Native Mobile protocol

HP LoadRunner and Performance Center Blog - Thu, 11/27/2014 - 10:20

In my previous blog post I introduced LoadRunner’s new TruClient – Native Mobile protocol. In this post I’ll explain advanced script enhancements. We’ll cover object identification parameterization, adding special device steps, and overcoming record and replay problems with ‘Analog Mode’. This post will be followed by the final post in this series on the TruClient – Native Mobile protocol, which will focus on debugging using the extended log, running a script on multiple devices, and transaction timings.

 

(This post was written by Yehuda Sabag from the TruClient R&D Team)

Categories: Companies

Get the latest on Application Performance Engineering at HP Discover Barcelona 2014

HP LoadRunner and Performance Center Blog - Thu, 11/27/2014 - 09:02

I can’t believe it’s already been almost a year since we left the Fira Barcelona in Spain for HP Discover Barcelona 2013. That was one of the best Discover events I have been a part of, and I’ve been to a few over the years. The numbers speak for themselves, as shown on the right. Last year’s event was so spectacular that we couldn’t help but return for a 2nd straight year! But as amazing as last year’s conference was, I am even more excited for what’s in store this time around.

 

The Performance & Lifecycle Virtualization team has been working extremely hard all year to bring you the new version of HP LoadRunner and Performance Center as well as our much anticipated launch of HP StormRunner Load.

 

 

Categories: Companies

Do you care about your code? Track code coverage on new code, right now!

Sonar - Thu, 11/27/2014 - 06:40

A few weeks ago, I had a passionate debate with my old friend Nicolas Frankel about the usefulness of the code coverage metric. We started on Twitter and then Nicolas wrote a blog entry stating that “Your code coverage metric is not meaningful” and therefore useless. Not only do I think exactly the opposite, but I would even say that not tracking code coverage on new code is almost insane nowadays.

For what I know, I haven’t found anything in the code

But before talking about the importance of tracking code coverage on new code and the related paradigm shift, let’s start by mentioning something which is probably one of the root causes of the misalignment with Nicolas: static and dynamic analysis tools will never, ever manage to say “your code is clean, well-designed, bug free and highly maintainable”. Static and dynamic analysis tools are only able to say “For what I know, I haven’t found anything wrong in the code.” And by extension this is also true for any metric/technique used to understand/analyse the source code. A high level of coverage is not a guarantee of the quality of the product, but a low level is a clear indication of insufficient testing.

For Nicolas, tracking code coverage is useless because in some cases the unit tests written to increase coverage can be crappy. For instance, unit tests might not contain any assertions, or unit tests might cover all branches but not all possible inputs. To fix those limitations, Nicolas says that the only solution is to do some mutation testing while computing code coverage (see for instance pitest.org for Java) to make sure that unit tests are robust. OK, but if you really want to reach the Grail, is that enough? Absolutely not! You can have a code coverage of 100% and some very robust but… fully unmaintainable unit tests. Mutation testing doesn’t provide any way, for instance, to know how “unit” your unit tests are, or whether there is a lot of redundancy between your unit tests.

To sum up, when you care about the maintainability, reliability and security of your application, you can and should invest some time and effort to reach higher maturity levels. But if you wait until you find the ultimate solution to start, that will never happen. Moreover, maturity levels should be reached progressively:

  • It doesn’t make any sense to care about code coverage if there isn’t a continuous integration environment
  • It doesn’t make any sense to care about mutation testing if only 5% of the source code is covered by unit tests
  • … etc.

And here I don’t even mention the extra effort involved in the execution of mutation testing and the analysis of the results. But don’t miss my point: mutation testing is a great technique and I encourage you to give http://pitest.org/ and the SonarQube Pitest plugin by Alexandre Victoor a try. I’m just saying that, as a starting point, mutation testing is already too advanced a technique.

Developers want to learn

There is a second root cause of misalignment with Nicolas: should we trust that developers have a will to progress? If the answer is NO, we might spend a whole lifetime fighting with them and always making their lives more difficult. Obviously, you’ll always find some reluctant developers, pushing back and not caring at all about the quality and reliability of the source code. But I prefer targeting the vast majority of developers, who are eager to learn and to progress. For that majority of developers, the goal is to always make life more fun instead of making it harder. So, how do you infect your “learning” developers with the desire to unit test?

When you start the development of an application from scratch, unit testing might be quite easy. But when you’re maintaining an application with 100,000 lines of code and only 5% is covered by unit tests, you can quickly feel depressed. And obviously most of us are dealing with legacy code. When you’re starting out so far behind, it can take years to reach a total unit test coverage of 90%. So for those first few years, how are you going to reinforce the practice? How are you going to make sure that in a team of 15 developers, all developers are going to play the same game?

At SonarSource, we failed for many years

Indeed, we were stuck with a code coverage of 60% on the platform and were not able to progress. Thankfully, David Gageot joined the team at that time, and things were pretty simple for him: any new piece of code should have a coverage of at least 100% :-). That’s it, and that’s what he did. From there we decided to set up a quality gate with a very simple and powerful criterion: when we release a new version of any product at SonarSource, the code coverage on new or updated code can’t be less than 80%. If it is, the request for release is rejected. That’s it, that’s what we did, and we finally started to fly. One and a half years later, the code coverage on the SonarQube platform is 82%, and 84% across all SonarSource products (400,000 lines of code and 20,000 unit tests).
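
To make the metric concrete: “coverage on new code” simply asks, of the lines added or changed in this version, what share is executed by at least one test? The toy Java sketch below only illustrates the idea; SonarQube computes it from SCM and coverage data, not like this:

import java.util.Map;
import java.util.Set;

public class NewCodeCoverage {

    // Toy version of the metric: percentage of changed lines hit by at least one test.
    static double coverageOnNewCode(Set<Integer> changedLines, Map<Integer, Integer> hitsPerLine) {
        if (changedLines.isEmpty()) return 100.0;
        long covered = changedLines.stream()
                .filter(line -> hitsPerLine.getOrDefault(line, 0) > 0)
                .count();
        return 100.0 * covered / changedLines.size();
    }

    public static void main(String[] args) {
        Set<Integer> changed = Set.of(10, 11, 12, 40, 41);        // lines touched by this change
        Map<Integer, Integer> hits = Map.of(10, 3, 11, 1, 40, 2); // line -> times executed by tests
        System.out.printf("Coverage on new code: %.0f%%%n", coverageOnNewCode(changed, hits));
        // 3 of the 5 changed lines are covered -> 60%, so an 80% quality gate would reject this release.
    }
}

The important property is that the denominator is only what you just wrote, so the gate is equally demanding for a greenfield project and for a 100,000-line legacy application.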

Code coverage on new/changed code is a game changer

And it’s pretty simple to understand why:

  • Whatever your application is, legacy or not, the quality gate is always the same and doesn’t evolve over time: just make the coverage on your new/changed lines of code greater than X%.
  • There’s no longer a need to look at the global code coverage and the legacy Technical Debt. Just forget them and stop feeling depressed!
  • Since X% of your overall code evolves each year (at Google, for example, 50% of the code evolves every year), covering changed code means that even without paying attention to the overall code coverage, it will increase quickly, just “as a side effect”.
  • If one part of the application is not covered at all by unit tests but has not evolved during the past 3 years, why should you invest the effort to increase the maintainability of that piece of code? It doesn’t make sense. With this approach, you’ll start taking care of it if and only if some functional changes need to be made one day. In other words, the cost to bootstrap this process is low. There’s no need to stop the line and make the entire team work for X months just to pay back the old Technical Debt.
  • New developers have no choice but to play the game from day 1, because if they start injecting uncovered code, the feedback loop is just a matter of hours, and their new code will never go into production anyway.

This new approach to dealing with Technical Debt is part of the paradigm shift explained in our “Continuous Inspection” white paper. Another blog entry will follow explaining how to easily track any kind of Technical Debt with this approach, not just debt related to a lack of code coverage. And thanks to Nicolas Frankel for keeping this open debate alive.

Categories: Open Source

Pulse 2.7 Released

a little madness - Thu, 11/27/2014 - 06:12

I’m dusting off the blog with a bang, announcing that Pulse 2.7 has gone gold! This release brings a broad range of new features and improvements, including:

  • New agent storage management options, including the ability to prevent builds when disk space is low.
  • Configuration system performance improvements.
  • Live logging performance improvements.
  • Xcode command updates, including a new clang output processor.
  • A new plugin for integration of XCTest reports.
  • More flexibility and feedback for manual triggering.
  • New service support, including integration with systemd and upstart.
  • Improved support for git 2.x, especially partial clones.
  • Support for Subversion 1.8.
  • Improved dependency revision handling across multiple SCMs.
  • More convenient actions for cancelling builds.
  • The ability to run post build hooks on agents.

As always we based these improvements on feedback from our customers, and we thank all those that took the time to let us know their priorities.

Pulse 2.7 packages can be downloaded from the downloads page. If you’re an existing customer with an active support contract then this is a free upgrade. If you’re new to Pulse, we also provide free licenses for evaluation, open source projects and small teams!

Categories: Companies

Meet the uTesters: Iwona Pekala

uTest - Wed, 11/26/2014 - 23:24

Iwona Pekala is a gold rated full-time tester on paid projects at uTest, and a uTester for over 3 years. Iwona is also currently serving as a uTest Forums moderator for the second consecutive quarter. She is a fan of computers and technology, and lives in Kraków, Poland.

Be sure to also follow Iwona’s profile on uTest as well so you can stay up to date with her activity in the community!

uTest: Android or iOS?

Iwona: Android. I can customize it in more ways when compared to iOS. Additionally, apps have more abilities, there is a lot of hardware to choose from, and it takes less time to accomplish basic tasks like selecting text or tapping small buttons.

uTest: What drew you into testing initially? What’s kept you at it?

Iwona: I became a tester accidentally. I was looking for a summer internship for computer science students (I was thinking about becoming a programmer), and the first offer I got was for the role of tester. I intended to change that, and after some time I did transition to a developer role. It was uTest that kept me testing, particularly the flexibility of the work and the variety of projects.

uTest: Which areas do you want to improve in as a tester? Which areas of testing do you want to explore?

Iwona: I need to be more patient and increase my attention to detail. When it comes to hard skills, I would like to gain experience in security, usability and automation testing.

uTest: QA professional or tester?

Iwona: I describe myself as a tester, but those are just words, so it doesn’t really matter what you call that role as long as you know what its responsibilities are.

uTest: What’s one trait or quality you seek in a fellow software testing colleague?

Iwona: Flexibility and the skill of coping with grey areas. As a tester, you need to accommodate changing situations, and you hit grey areas on a daily basis. It’s important to use common sense, but still stay in scope.

You can also check out all of the past entries in our Meet the uTesters series.

Categories: Companies

Integrating Ranorex Test Cases into Jira

Ranorex - Wed, 11/26/2014 - 16:07

Jira is an issue and project tracking tool from Atlassian. The following article describes how to integrate Ranorex test cases into Jira, empowering Ranorex to submit or modify testing issues within Jira in an automated way.


As Jira offers a REST web service (API description available here), it becomes possible to submit issues automatically. This is achieved using the JiraRestClient and RestSharp libraries.

These libraries are wrapped with Ranorex functionality, forming re-usable modules, available within this library. The integration of these Jira testing modules into Ranorex test automation is described below.
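
For orientation, submitting an issue to Jira ultimately boils down to an authenticated POST against Jira's REST API. The stand-alone Java sketch below is not the JiraReporter code itself; the server URL, credentials and field values are placeholders that correspond to the JiraServerURL, JiraUserName/JiraPassword, JiraProjectKey and JiraIssueType parameters described later in this article:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JiraIssueSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder values; the Ranorex modules take these from their module variables.
        String serverUrl = "https://jira.example.com";
        String auth = Base64.getEncoder()
                .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));

        String payload = "{ \"fields\": {"
                + " \"project\": { \"key\": \"MYP\" },"
                + " \"summary\": \"Automated test failed\","
                + " \"description\": \"Created from a failed Ranorex test case\","
                + " \"issuetype\": { \"name\": \"Bug\" } } }";

        HttpURLConnection conn =
                (HttpURLConnection) new URL(serverUrl + "/rest/api/2/issue").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // 201 Created means Jira accepted the issue; the response body contains its key.
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}

The modules described below wrap exactly this kind of interaction (plus attaching the compressed Ranorex report) so that no HTTP code has to be written by hand.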

The following steps need to be done:

Step 1 – Adding the Libraries to Ranorex for Jira Automation:

Predefined modules (for x86 architecture and .NET 3.5) are available here. The assemblies in this zip file simply need to be added to the Ranorex project. The modules (as shown below) will then appear in the module browser under “JiraReporter” (demonstrated on the Ranorex KeePass sample):


Step 2 – Using the Modules in the Ranorex Test Suite

Individual modules are available within the “JiraReporter” project. These modules simply need to be used within the Ranorex Test Suite, as shown below:


The modules interact with Jira based on the results of the related test cases. Except for the initialization module, it is recommended to place the modules in the test case’s teardown.

Available modules for Jira automation:

  • InitializeJiraReporter — This module establishes the connection to the Jira server. It is mandatory for the following modules to be functional.
  • AutoCreateNewIssueIfTestCaseFails — If the test case fails, an issue is automatically created on the server, which is defined in “InitializeJiraReporter”. An issue number is automatically created by the server.
    A compressed Ranorex report is uploaded automatically as well.
  • ReOpenExistingIssueIfTestCaseFails — If the test case fails, an existing and already closed issue gets re-opened.
  • ResolveIssueIfTestCaseSuccessful — If the test case is successful, an existing and already open issue is set to “resolved”.
  • UpdateExistingIssueIfTestCaseFails — If a test case fails, attributes of an existing issue are updated.

 

Step 3 – Configure Parameters for the Modules

The modules expose different variables for configuration. Each module accepts different parameters, but they’re all used in the same way among the modules. Which module accepts which parameters can be seen when using the modules in the Ranorex project.

  • JiraUserName: The username to connect to the Jira server.
  • JiraPassword: The password for the specified user.
  • JiraServerURL: The URL for the Jira server.
  • JiraProjectKey: The project key as specified in Jira (e.g. MYP).
  • JiraIssueType: An issue type, as available in Jira (e.g., Bug)
  • JiraSummary: Some free summary text for the issue.
  • JiraDescription: Some free description text for the issue.
  • JiraLabels: Labels for the issue separated by “;” (e.g., Mobile; USB; Connection)
  • JiraIssueKey: The key for the respective issue (e.g., MYP-25).

 

The configuration of the modules is then done with common Ranorex data binding:


… and you’re done:

From then on, Ranorex will automatically interact with Jira whenever one of the modules is executed. The issues can then be processed in Jira. The following figure shows an automatically created issue together with its attached report:


 

Advanced usage:

The JiraReporter project offers two more modules: (i) the module “OnDemandCreateNewIssueIfTestCaseFails” and (ii) the module group “ProcessExistingIssue”. These modules offer further convenience functionality and are explained in more detail below.

Module Group – ProcessExistingIssue

This module group groups the following modules in the given order:

  • ReOpenExistingIssueIfTestCaseFails
  • UpdateExistingIssueIfTestCaseFails
  • ResolveIssueIfTestCaseSuccessful

It might be useful to process an existing issue, as it reopens and updates the issue automatically in case of a failure. Otherwise, if the test case is successful, it closes the issue.
Thus, it can be used to monitor an already known and fixed issue. To use this module group, the whole Ranorex “JiraReporter” project, available on GitHub, needs to be added to the solution.

OnDemandCreateNewIssueIfTestCaseFails

This module creates a new issue from the Ranorex report, but only on demand: a new issue is only created when the link provided within the report is clicked, so the user or tester can decide whether an issue is created or not.

The compressed Ranorex report is uploaded to the newly created issue as well.


Note: This functionality relies on a batch file created by Ranorex in the output folder and on the Jira Command Line Interface (CLI). It does not depend on a prior initialization by “InitializeJiraReporter”.

The module exposes the same variables as the modules mentioned above. One additional parameter is essential for this module:

  • JiraCLIFileLocation: The full path to the “jira-cli-<version>.jar” file, provided by the Jira CLI.

The following requirements need to be met to use this module:

  • Remote API must be enabled in your JIRA installation
  • The mentioned batch file needs to be accessible via the same file path where it was initially created. If the file is moved to a new location, the link will no longer work;
    in this case the batch file needs to be started manually.

 

JiraReporter Source Code:

The whole project which contains the code for the JiraReporter is available on GitHub under the following link:

https://github.com/ranorex/Ranorex-Jira-Integration

Please feel free to modify the code according to individual needs and/or upload new modules.

 

Categories: Companies

A Faster Android 5 is Coming; Get the Most out of your Android App Performance!

A new version of Android, Lollipop, is coming up, and as usual the Android team at Google is promising that it will be faster, backed by the new ART runtime, with promised performance improvements of up to 100%. I am lucky that I can always take a look at new releases of […]

The post A Faster Android 5 is Coming; Get the Most out of your Android App Performance! appeared first on Dynatrace APM Blog.

Categories: Companies

Seapine’s 2014 Holiday Shopping Guide

The Seapine View - Wed, 11/26/2014 - 12:00

Trying to come up with gift ideas for those hard-to-shop-for people on your holiday list? Maybe some of Seapine’s customers can help!

For the Gamers

Borderlands the Pre-Sequel The newest Borderlands game from 2K Games and Gearbox is a favorite around the Seapine office. ($60)

G302 Daedalus Prime MOBA Gaming Mouse If the gamers on your list rock it PC style, they’ll love this gaming mouse from Logitech. Designed by pro gamers, the G302 is a precision gaming tool. ($50)

Kingdom Hearts 2.5 HD ReMIX Fans of Square Enix‘s Kingdom Hearts series will love this new HD compilation for the PlayStation 3. It releases December 2, just in time for the holidays. ($40)

Super Smash Bros. Namco Bandai’s new game for the Nintendo DS and Wii U has been called “completely insane and absolutely amazing.” ($40)

Epic Gamer T-Shirt Who doesn’t want to be an Epic Gamer? Every gamer on your list will love this shirt from Epic Games. ($25)

For the Artists

Intuos Tablet Wacom is the leading manufacturer of pen tablets, stylus, and interactive pen displays that let the artists on your list express their creativity as fluently in the digital world as they would with ink on paper or paint on canvas. ($99)

Lunar LF 18-55mm For photographers, Hasselblad is the top of the line. Our favorite is the LF 18-55mm with the olive wood accents. It’s a gorgeous camera that takes gorgeous photos. ($7,000)

Momentum Headphones For the music lovers on your list, nothing sounds better than Sennheiser. They’ll rock out in style with a pair of Momentum headphones, which WIRED called “more than just good-looking, they’re downright sexy.” ($270)

Music Gear from TC Electronic Shopping for a rocker? Get great guitar and bass gear from TC Electronic! TC makes high-quality pedals, amps, and more. (prices vary)

For the Kids

Frozen There’s no denying it: Frozen is still hot. Don’t own it? Pick up a copy on DVD or Blu-Ray from Walt Disney Motion Pictures and “Let It Go.” ($25 on Blu-ray)

The Ghost From the hit new Star Wars Rebels, LEGO’s version of the Ghost spaceship will have the Jedi on your list itching to fight the Empire. ($90)

LeapTV Get young minds and bodies moving with this educational, active video game system from LeapFrog. ($150)

Olaf’s in Trouble Olaf’s in Trouble is a Frozen version of the classic Trouble game by Hasbro. If your kids love Frozen, they’ll have a blast playing this game as their favorite Frozen character, traveling around Arendelle to save Olaf. ($15)

Nerf N-Strike Elite Demolisher 2-in-1 Blaster For the bigger kids on your list, get them the newest in Nerf firepower. They’ll dominate their next Nerf war with motorized dart firing and missiles. ($40)

For the Ones with Everything

[+] Trip Universal Air Vent Mount Logitech makes fantastic gadgets, and one of our favorites is the [+] Trip smartphone mount for the car. It’s perfect for gadget lovers, no matter what phone they have. ($30)

Bathfitter Has someone been hinting about upgrading the master bathroom? Surprise them with a makeover from Bathfitter. (prices vary)

Barcelona Got someone on your list who says they don’t want “stuff”? Give the experience of a lifetime with a trip to Barcelona! BCN Travel makes the planning easy. (prices vary)

Bourdon Messenger Bag For the sophisticated traveler, this messenger bag from Alfred Dunhill is sure to please. ($1,040)

Braun Series 7 790cc Shaver “Movember” is coming to an end, so the bearded ones on your list might soon need a new shaver. Braun makes the best. ($270)

LOVE Bracelet Add a little sparkle to the holidays for that special woman in your life with this gorgeous white gold and diamond bracelet from Cartier. ($11,100)

For Stocking Stuffers

Davids Tea Festive Collection Our sales and support teams enjoy Davids Tea so much, they’ve taken to holding “high tea” every day. If you’ve got a tea lover on your list, they’re sure to enjoy the Festive Collection. ($50)

A Christmas Story This classic holiday comedy from Warner Brothers is a favorite at Seapine. If you know anyone who hasn’t seen it, they’ll thank you for putting this in their stocking. ($18)

Sonic Drive-In Gift Card Sonic Drive-In has great food and desserts, so everyone will appreciate a Sonic gift card! (prices vary)

Need more ideas?

If you still haven’t checked off everyone on your list, visit Conn’s Home Plus for ideas and great deals on electronics, computers, appliances, furniture, and more.

Happy shopping and Happy Holidays from Seapine Software!


Categories: Companies

TenKod Launches New Solution For Mobile Testing

Software Testing Magazine - Tue, 11/25/2014 - 21:23
Featuring ingenious technology, TenKod EZ TestApp offers clients a cost-effective yet efficient solution for mobile application testing. Generated test projects are compatible with all leading Continuous Integration (CI) systems, such as Jenkins, Atlassian Bamboo and JetBrains TeamCity, and do not require device jailbreaking, rooting or application instrumentation. The EZ TestApp approach helps development organizations shift mobile application testing from a late, post-development phase to the early stages of the development process. EZ TestApp provides the conveniences associated with an Open Source and Proprietary Supported Platform, meaning that it ...
Categories: Communities

Being a Better Test Leader or Test Manager

Software Testing Magazine - Tue, 11/25/2014 - 18:31
It is not always easy to take on management responsibilities in software development when you come from a technical position. This is also true in software testing. In this article, Mark Garzone shares some tips on how to be a better test leader or test manager. Author: Mark Garzone, https://www.smashwords.com/books/view/485652 Are you a test leader, or do you aspire to become the test leader of a team of testers? Follow these tips to lead your team of testers to stellar results. Invest in your test team’s education by buying testing books, paying for tester certification programs, ...
Categories: Communities

Talking Turkey in Texas: Open Source Governance Lags

Sonatype Blog - Tue, 11/25/2014 - 16:56
Deep in the heart of Texas, I was leading a panel discussion at the Lone Star Application Security Conference (LASCON) a few weeks ago. The panel was “talking turkey” about the importance of application security and open source software development, when the conversation led to a discussion about...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

New uTest Platform Features Emphasize Quality

uTest - Tue, 11/25/2014 - 16:56

Last week, uTest launched two new Platform features for uTesters on paid projects which continue to move the needle in our continuous pursuit of quality (plus a very useful change to existing tester dashboard functionality). Here’s a recap of what is included in the latest uTest Platform release.

Bug Report Integrity

Most testers understand that the role of a bug report is to provide information. However, a “good” or valuable bug report takes that a step further and provides useful and actionable information in an efficient way. As such, in addition to approving tester issues, Test Team Leads (TTLs) and Project Managers (PMs) now have the ability to rate the integrity of a tester’s bug report by setting it to High, Unrated or Low. By default, all bugs are set to Unrated.


The Bug Report Integrity feature will reward testers who meet a high report integrity standard by providing a positive rating impact to the quality sub-rating. Conversely, we will also seek to educate testers who may be missing the mark by negating any positive impact that may have occurred based on the value of the bug itself.

For more information, please review the Bug Report Integrity uTest University course.

Tester Scorecard

When navigating into a test cycle, you will see a new tab called “Tester Scorecard.” Clicking this tab will bring up a ranked list of testers based on their bug submissions and the final decisions on these bugs — i.e. approvals and rejections.

Points are awarded according to the explanation at the top of the Scorecard and result in a score that is used to rank testers by their performance. The table can be sorted by any of the columns. If two testers have identical scores (i.e. the same number of bugs approved at the same value tiers), the tester who started reporting bugs first ranks higher.

Our hope is that this Scorecard will spark some additional competition among top performers and will also be useful for PMs and TTLs to choose testers for participation bonuses. Of course, it is still at the discretion of the TTL or PM to decide who won any bug battles or is eligible for any bonus payments.

Note: Scores indicated on the scorecard do not impact the tester’s rating.


Feature Change: Payout Card

Additionally, there was an improvement to existing functionality within the tester dashboard. Pending payouts are now included so that testers can easily see how much they have earned:


If you like what you see, feel free to leave your comments below, or share your ideas on these and other recent platform updates by visiting the uTest Forums. We’d love to hear your suggestions, and frequently share this valuable feedback with our development team for future platform iterations!

Categories: Companies
