
Feed aggregator

Lots of test strategy

Thoughts from The Test Eye - 2 hours 10 min ago
Skills

I have been doing a lot of work on test strategy over the last year: some talks, a EuroSTAR tutorial, blog entries, a small book in Swedish, teaching at higher vocational studies, and of course test strategies in real projects.

The definition I use is more concrete than many others: I want a strategy for a project, not for a department or for the future. And I must say it works. It becomes clearer to people what we are up to, and discussions get better. The conversations about it might be the most important part, but I also like to write a test strategy document; it clears up my thinking and gives other reviewers something to go back to.

Yesterday in the EuroSTAR TestLab I created a test strategy together with other attendees, using James Bach’s Heuristic Test Strategy Model as a mental tool. The documented strategy can be downloaded here, and the content might be less interesting than the format. In this case, I used explicit chapters for Testing Missions, Information Sources and Quality Objectives because I felt those would be easiest for the product manager and developer to comment on.

I have also put my EuroSTAR Test Strategy slides on the Publications page.

Happy strategizing!

Categories: Blogs

Example of a transform for unit testing something tricky

Rico Mariani's Performance Tidbits - 10 hours 17 min ago

There were some requests for an example of my unit testing strategy, so I made up this fragment and included some things that would make your testing annoying.

This is the initial fragment. Note that it uses annoying global methods that complicate testing, as well as global state and system calls with challenging failure conditions.

HANDLE hMutex = NULL;

void DoWhatever(HWND hwnd)
{
    if (hMutex == NULL)
    {
        hMutex = ::CreateMutex(NULL, FALSE, L"TestSharedMutex");

        if (hMutex == NULL)
            return;
    }

    DWORD dwWaitResult = ::WaitForSingleObject(hMutex, 1000);

    BOOL fRelease = FALSE;

    switch (dwWaitResult)
    {
        case WAIT_OBJECT_0:
            {
            LPCWSTR result = L"Some complicated result";
            ::MessageBox(hwnd, result, L"Report", MB_OK);
            fRelease = TRUE;
            break;
            }

        case WAIT_ABANDONED:
            ::MessageBox(hwnd, L"MutexAquired via Abandon", L"Report", MB_OK);
            fRelease = TRUE;
            break;

        case WAIT_FAILED:
            ::MessageBox(hwnd, L"Mutex became invalid", L"Report", MB_OK);
            fRelease = FALSE;
            break;

        case WAIT_TIMEOUT:
            ::MessageBox(hwnd, L"Mutex acquisition timeout", L"Report", MB_OK);
            fRelease = FALSE;
            break;
    }

    if (fRelease)
    {
        ::ReleaseMutex(hMutex);
    }
}

 

Now here is basically the same code after the transform I described in my last post. I've added a template parameter to deal with the globals, and I've even made it so that the system type HWND can be changed to something simple, so you don't need windows.h.

template <class T, class _HWND> void DoWhateverHelper(_HWND hwnd)
{
    if (T::hMutex == NULL)
    {
        T::hMutex = T::CreateMutex(NULL, FALSE, L"TestSharedMutex");

        if (T::hMutex == NULL)
            return;
    }

    DWORD dwWaitResult = T::WaitForSingleObject(T::hMutex, 1000);

    BOOL fRelease = FALSE;

    switch (dwWaitResult)
    {
        case WAIT_OBJECT_0:
            {
            LPCWSTR result = L"Some complicated result";
            T::MessageBox(hwnd, result, L"Report", MB_OK);
            fRelease = TRUE;
            break;
            }

        case WAIT_ABANDONED:
            T::MessageBox(hwnd, L"MutexAquired via Abandon", L"Report", MB_OK);
            fRelease = TRUE;
            break;

        case WAIT_FAILED:
            T::MessageBox(hwnd, L"Mutex became invalid", L"Report", MB_OK);
            fRelease = FALSE;
            break;

        case WAIT_TIMEOUT:
            T::MessageBox(hwnd, L"Mutex acquisition timeout", L"Report", MB_OK);
            fRelease = FALSE;
            break;
    }

    if (fRelease)
    {
        T::ReleaseMutex(T::hMutex);
    }
}

Now we write this binding struct, which makes the templated helper do exactly what the original code did.

struct Normal
{
    static HANDLE CreateMutex(LPSECURITY_ATTRIBUTES pv, BOOL fOwn, LPCWSTR args)
    {
        return ::CreateMutex(pv, fOwn, args);
    }

    static void ReleaseMutex(HANDLE handle)
    {
        ::ReleaseMutex(handle);
    }

    static void MessageBox(HWND hwnd, LPCWSTR msg, LPCWSTR caption, UINT type)
    {
        ::MessageBox(hwnd, msg, caption, type);
    }

    static DWORD WaitForSingleObject(HANDLE handle, DWORD timeout)
    {
        return ::WaitForSingleObject(handle, timeout);
    }

    static HANDLE hMutex;
};

HANDLE Normal::hMutex;

This code now does exactly the same thing as the original:

void DoWhatever(HWND hwnd)
{
    DoWhateverHelper<Normal, HWND>(hwnd);
}

And now I include this very cheesy Mock version of the binding, which shows where you could put your test hooks. Note that the OS types HWND and HANDLE are no longer present; both are now just int, so this code is OS neutral. LPSECURITY_ATTRIBUTES could have been abstracted away as well, but I left it in because I'm lazy. This mock could have as many validation hooks as you like.

struct Mock
{
    static int CreateMutex(LPSECURITY_ATTRIBUTES pv, BOOL fOwn, LPCWSTR args)
    {
        // validate args
        return 1;
    }

    static void ReleaseMutex(int handle)
    {
        // validate that the handle is correct
        // validate that we should be releasing it in this test case
    }

    static void MessageBox(int hwnd, LPCWSTR msg, LPCWSTR caption, UINT type)
    {
        // note the message and validate its correctness
    }

    static DWORD WaitForSingleObject(int handle, DWORD timeout)
    {
        // return whatever case you want to test
        return WAIT_TIMEOUT;
    }

    static int hMutex;
};

int Mock::hMutex;

 

In your test code you make calls like the following to run your tests. You could easily put this into whatever unit test framework you have.

void DoWhateverMock(int hwnd)
{
    DoWhateverHelper<Mock, int>(hwnd);
}
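
For illustration, here is a minimal sketch of one such test, assuming no particular framework. The RecordingMock and TestTimeoutPath names are mine, not from the original code, and the remaining Windows typedefs and constants (DWORD, LPCWSTR, WAIT_TIMEOUT and so on) are assumed to be in scope, whether from windows.h or from lightweight stand-in definitions.

#include <assert.h>
#include <wchar.h>

// A Mock variant that records what the code under test did, so the test
// can assert on it afterward.
struct RecordingMock
{
    static LPCWSTR lastMessage;   // last text passed to MessageBox
    static bool fReleaseCalled;   // was ReleaseMutex called?

    static int CreateMutex(LPSECURITY_ATTRIBUTES, BOOL, LPCWSTR) { return 1; }
    static void ReleaseMutex(int) { fReleaseCalled = true; }
    static void MessageBox(int, LPCWSTR msg, LPCWSTR, UINT) { lastMessage = msg; }
    static DWORD WaitForSingleObject(int, DWORD) { return WAIT_TIMEOUT; }

    static int hMutex;
};

LPCWSTR RecordingMock::lastMessage = NULL;
bool RecordingMock::fReleaseCalled = false;
int RecordingMock::hMutex = 0;

void TestTimeoutPath()
{
    DoWhateverHelper<RecordingMock, int>(0);

    // The timeout path must report the timeout and must not release the mutex.
    assert(0 == wcscmp(RecordingMock::lastMessage, L"Mutex acquisition timeout"));
    assert(!RecordingMock::fReleaseCalled);
}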

And that's it.

It wouldn't have been much different if we had used an abstract class instead of a template to do the job.  That can be easier/better, especially if the additional virtual call isn't going to cost you much.
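
To make the comparison concrete, here is a rough sketch of what that interface-based variant might look like; the ISystem name is mine, not from the original post. The shared hMutex state would move into the implementation, and unless you also typedef HWND and HANDLE away you keep the real OS types, which is exactly the substitution the template gave us for free.

struct ISystem
{
    virtual HANDLE CreateMutex(LPSECURITY_ATTRIBUTES pv, BOOL fOwn, LPCWSTR name) = 0;
    virtual void ReleaseMutex(HANDLE handle) = 0;
    virtual void MessageBox(HWND hwnd, LPCWSTR msg, LPCWSTR caption, UINT type) = 0;
    virtual DWORD WaitForSingleObject(HANDLE handle, DWORD timeout) = 0;
    virtual ~ISystem() {}
};

// The helper takes the environment as an argument instead of a template
// parameter; production code passes an implementation forwarding to the
// real Win32 calls, while tests pass a mock implementation.
void DoWhateverHelper(ISystem& sys, HWND hwnd);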

We've boiled away as many types as we wanted to, and we kept the heart of the algorithm, so the unit testing is still valid.

Categories: Blogs

Debugging HTTP

Testing TV - 10 hours 26 min ago
In this world where we have moved beyond web pages and build ever-more asynchronous applications, often things that go wrong result in errors we can’t see. This session will give a very technical overview of HTTP and how to inspect your application’s communications, whether on the web or on a mobile device. Using Curl, Wireshark […]
Categories: Blogs

Test, Transform and Refactor

Software Testing Magazine - 10 hours 43 min ago
Let’s have a close look at the Red-Green-Refactor cycle and understand the subtleties of each step. When we go down the rabbit hole of Test-Driven Development (TDD), we sometimes take steps that are too big, leaving us with many failed tests that we just can’t bring back to green without writing a lot of code. We need to take a step back and take the shrinking potion of baby steps again. This presentation, full of test and code examples, will dig into each of the steps of TDD to help you understand how ...
Categories: Communities

Advanced script enhancements in LoadRunner’s new TruClient – Native Mobile protocol

In my previous blog post I introduced LoadRunner’s new TruClient – Native Mobile protocol. In this post I’ll explain advanced script enhancements. We’ll cover parameterization of object identification, adding special device steps, and overcoming record and replay problems with ‘Analog Mode’. This post will be followed by the final post in this series on the TruClient – Native Mobile protocol, which will focus on debugging using the extended log, running a script on multiple devices, and transaction timings.

 

(This post was written by Yehuda Sabag from the TruClient R&D Team)

Categories: Companies

Get the latest on Application Performance Engineering at HP Discover Barcelona 2014

I can’t believe it’s already been almost a year since we left the Fira Barcelona in Spain after HP Discover Barcelona 2013. That was one of the best Discover events I have been a part of, and I’ve been to a few over the years. The numbers speak for themselves. Last year’s event was so spectacular that we couldn’t help but return for a 2nd straight year! But as amazing as last year’s conference was, I am even more excited for what’s in store this time around.

 

The Performance & Lifecycle Virtualization team has been working extremely hard all year to bring you the new version of HP LoadRunner and Performance Center as well as our much anticipated launch of HP StormRunner Load.

 

 

Categories: Companies

Do you care about your code? Track code coverage on new code, right now!

Sonar - Thu, 11/27/2014 - 06:40

A few weeks ago, I had a passionate debate with my old friend Nicolas Frankel about the usefulness of the code coverage metric. We started on Twitter, and then Nicolas wrote a blog entry stating that “your code coverage metric is not meaningful”, and therefore useless. Not only do I think exactly the opposite, I would even say that not tracking code coverage on new code is almost insane nowadays.

As far as I know, I haven’t found anything wrong in the code

But before talking about the importance of tracking code coverage on new code and the related paradigm shift, let’s start by mentioning what is probably one of the root causes of the misalignment with Nicolas: static and dynamic analysis tools will never, ever manage to say “your code is clean, well-designed, bug-free and highly maintainable”. Static and dynamic analysis tools are only able to say “as far as I know, I haven’t found anything wrong in the code”. By extension, this is also true for any metric or technique used to understand and analyse source code. A high level of coverage is not a guarantee of the quality of the product, but a low level is a clear indication of insufficient testing.

For Nicolas, tracking code coverage is useless because in some cases the unit tests that increase coverage can be crappy. For instance, unit tests might not contain any assertions, or they might cover all branches but not all possible inputs. To fix those limitations, Nicolas says that the only solution is to do some mutation testing while computing code coverage (see for instance pitest.org for Java) to make sure that unit tests are robust. OK, but if you really want to reach the Grail, is that enough? Absolutely not! You can have a code coverage of 100% and some very robust but… fully unmaintainable unit tests. Mutation testing doesn’t provide any way to know, for instance, how “unit” your unit tests are, or whether there is a lot of redundancy between them.

To sum up: when you care about the maintainability, reliability and security of your application, you can and should invest some time and effort to reach higher maturity levels. But if you wait until you’ve found the ultimate solution to start, you will never start. Moreover, maturity levels should be reached progressively:

  • It doesn’t make any sense to care about code coverage if there isn’t a continuous integration environment
  • It doesn’t make any sense to care about mutation testing if only 5% of the source code is covered by unit tests
  • … etc.

And here I don’t even mention the extra effort involved in executing mutation testing and analysing the results. But don’t miss my point: mutation testing is a great technique, and I encourage you to give http://pitest.org/ and the SonarQube Pitest plugin by Alexandre Victoor a try. I’m just saying that as a starting point, mutation testing is too advanced a technique.

Developers want to learn

There is a second root cause of misalignment with Nicolas: should we trust that developers have the will to progress? If the answer is NO, we might spend a whole lifetime fighting with them and always making their lives more difficult. Obviously, you’ll always find some reluctant developers, pushing back and not caring at all about the quality and reliability of the source code. But I prefer targeting the vast majority of developers, who are eager to learn and to progress. For that majority, the goal is to always make life more fun instead of making it harder. So, how do you infect your “learning” developers with the desire to unit test?

When you start the development of an application from scratch, unit testing might be quite easy. But when you’re maintaining an application with 100,000 lines of code of which only 5% is covered by unit tests, you can quickly feel depressed. And obviously most of us are dealing with legacy code. When you’re starting out so far behind, it can take years to reach a total unit test coverage of 90%. So for those first few years, how are you going to reinforce the practice? How are you going to make sure that in a team of 15 developers, everyone plays the same game?

At SonarSource, we failed for many years

Indeed, we were stuck at a code coverage of 60% on the platform and were not able to progress. Thankfully, David Gageot joined the team at that time, and things were pretty simple for him: any new piece of code should have a coverage of at least 100% :-). That’s it, and that’s what he did. From there we decided to set up a quality gate with a very simple and powerful criterion: when we release a new version of any product at SonarSource, the code coverage on new or updated code can’t be less than 80%. If it is, the request for release is rejected. That’s it, that’s what we did, and we finally started to fly. A year and a half later, the code coverage on the SonarQube platform is 82%, and 84% across all SonarSource products (400,000 lines of code and 20,000 unit tests).

Code coverage on new/changed code is a game changer

And it’s pretty simple to understand why:

  • Whether your application is a legacy one or not, the quality gate is always the same and doesn’t evolve over time: just keep the coverage on your new/changed lines of code greater than X%
  • There’s no longer a need to look at the global code coverage and legacy Technical Debt. Just forget it and stop feeling depressed!
  • As each year X% of your overall code evolves (at Google, for example, 50% of the code evolves each year), having coverage on changed code means that even without paying attention to the overall code coverage, it will increase quickly just “as a side effect”. As a rough illustration: a codebase at 5% overall coverage, with 50% yearly churn and new/changed code covered at 80%, ends the year at roughly 0.5 × 5% + 0.5 × 80% ≈ 42% overall coverage.
  • If one part of the application is not covered at all by unit tests but has not evolved during the past 3 years, why should you invest the effort to increase the maintainability of this piece of code? It doesn’t make sense. With this approach, you’ll start taking care of it if and only if some functional changes need to be made one day. In other words, the cost to bootstrap this process is low. There’s no need to stop the line and make the entire team work for X months just to pay back the old Technical Debt.
  • New developers don’t have any choice other than playing the game from day 1, because if they inject some uncovered piece of code, the feedback loop is just a matter of hours, and their new code will never go into production anyway.

This new approach to dealing with Technical Debt is part of the paradigm shift explained in our “Continuous Inspection” white paper. Another blog entry will follow explaining how to easily track any kind of Technical Debt with such an approach, not just debt related to a lack of code coverage. And thanks to Nicolas Frankel for keeping this open debate alive.

Categories: Open Source

Meet the uTesters: Iwona Pekala

uTest - Wed, 11/26/2014 - 23:24

Iwona Pekala is a gold-rated, full-time tester on paid projects at uTest, and has been a uTester for over 3 years. Iwona is also currently serving as a uTest Forums moderator for the second consecutive quarter. She is a fan of computers and technology, and lives in Kraków, Poland.

Be sure to also follow Iwona’s profile on uTest as well so you can stay up to date with her activity in the community!

uTest: Android or iOS?

Iwona: Android. I can customize it in more ways when compared to iOS. Additionally, apps have more abilities, there is a lot of hardware to choose from, and it takes less time to accomplish basic tasks like selecting text or tapping small buttons.

uTest: What drew you into testing initially? What’s kept you at it?

Iwona: I became a tester accidentally. I was looking for a summer internship for computer science students (I was thinking about becoming a programmer), and the first offer I got was for the role of tester. I was about to change it, and after some time I was transitioned to a developer role. It was uTest that kept me a tester, particularly the flexibility of the work and the variety of projects.

uTest: Which areas do you want to improve in as a tester? Which areas of testing do you want to explore?

Iwona: I need to be more patient and increase my attention to detail. When it comes to hard skills, I would like to gain experience in security, usability and automation testing.

uTest: QA professional or tester?

Iwona: I describe myself as a tester, but those are just words, so it doesn’t really matter what you call that role as long as you know what its responsibilities are.

uTest: What’s one trait or quality you seek in a fellow software testing colleague?

Iwona: Flexibility and the skill of coping with grey areas. As a tester, you need to adapt to changing situations, and you hit grey areas on a daily basis. It’s important to use common sense, but still stay in scope.

You can also check out all of the past entries in our Meet the uTesters series.

Categories: Companies

Integrating Ranorex Test Cases into Jira

Ranorex - Wed, 11/26/2014 - 16:07

Jira is issue and project tracking software from Atlassian. The following article describes how to integrate Ranorex test cases into Jira, empowering Ranorex to submit or modify testing issues within Jira in an automated way.

Jira Integration

As Jira offers a REST web service (API description available here), issues can be submitted automatically. This is achieved using the JiraRestClient and RestSharp libraries.

These libraries are wrapped with Ranorex functionality, forming re-usable modules available within this library. The integration of these Jira testing modules into Ranorex test automation is described below.
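
To give a feel for what happens under the hood, creating an issue through Jira’s REST API essentially boils down to a POST of a JSON payload along these lines (an illustrative sketch, not part of the Ranorex modules; the field values are examples only, and the exact payload depends on your Jira project configuration):

POST /rest/api/2/issue
{
    "fields": {
        "project":     { "key": "MYP" },
        "summary":     "Test case failed",
        "description": "Created automatically from a Ranorex test run.",
        "issuetype":   { "name": "Bug" },
        "labels":      ["Mobile", "USB", "Connection"]
    }
}

The JiraRestClient and RestSharp libraries take care of building and sending such requests, so you never have to assemble this payload by hand.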

The following steps need to be done:

Step 1 – Adding the Libraries to Ranorex for Jira Automation:

Predefined modules (for x86 architecture and .NET 3.5) are available here. The assemblies in this zip file just need to be added to the Ranorex project. The modules (as shown below) will then appear in the module browser under “JiraReporter” (demonstrated on the Ranorex KeePass sample):

[Screenshot: AddReference]

Step 2 – Using the Modules in the Ranorex Test Suite

Individual modules are available within the “JiraReporter” project. These modules merely need to be used within the Ranorex test suite, as shown below:

[Screenshot: Modules_TestSuite]

The modules interact with Jira based on the results of the related test cases. Except for the initialization module, it is recommended to place the modules in the test case’s teardown.

Available modules for Jira automation:

  • InitializeJiraReporter — This module establishes the connection to the Jira server. It is mandatory for the following modules to be functional.
  • AutoCreateNewIssueIfTestCaseFails — If the test case fails, an issue is automatically created on the server, which is defined in “InitializeJiraReporter”. An issue number is automatically created by the server.
    A compressed Ranorex report is uploaded automatically as well.
  • ReOpenExistingIssueIfTestCaseFails — If the test case fails, an existing and already closed issue gets re-opened.
  • ResolveIssueIfTestCaseSuccessful — If the test case is successful, an existing and already open issue is set to “resolved”.
  • UpdateExistingIssueIfTestCaseFails — If a test case fails, attributes of an existing issue are updated.

 

Step 3 – Configure Parameters for the Modules

The modules expose different variables for configuration. Each module accepts different parameters, but they are all used the same way across the modules. Which module accepts which parameters can be seen when using the modules in the Ranorex project.

  • JiraUserName: The username to connect to the Jira server.
  • JiraPassword: The password for the specified user.
  • JiraServerURL: The URL of the Jira server.
  • JiraProjectKey: The project key as specified in Jira (e.g., MYP).
  • JiraIssueType: An issue type, as available in Jira (e.g., Bug).
  • JiraSummary: Free summary text for the issue.
  • JiraDescription: Free description text for the issue.
  • JiraLabels: Labels for the issue, separated by “;” (e.g., Mobile; USB; Connection).
  • JiraIssueKey: The key of the respective issue (e.g., MYP-25).

 

The configuration of the modules is then done with common Ranorex data binding:

[Screenshot: DataBinding]

… and you’re done:

From then on, Ranorex will automatically interact with Jira whenever one of the modules is executed. The issues can then be processed in Jira. The following figure shows an automatically created issue together with its attached report:

[Screenshot: JiraPic]

 

Advanced usage:

The JiraReporter project offers two more modules: (i) the module “OnDemandCreateNewIssueIfTestCaseFails” and (ii) the module group “ProcessExistingIssue”. These modules offer further convenience functionality and are explained in more detail below.

Module Group – ProcessExistingIssue

This module group combines the following modules, in the given order:

  • ReOpenExistingIssueIfTestCaseFails
  • UpdateExistingIssueIfTestCaseFails
  • ResolveIssueIfTestCaseSuccessful

It can be useful to process an existing issue this way: the group reopens and updates the issue automatically in case of a failure, and closes the issue if the test case is successful. Thus, it can be used to monitor an already known and fixed issue. To use this module group, the whole Ranorex “JiraReporter” project, available on GitHub, needs to be added to the solution.

OnDemandCreateNewIssueIfTestCaseFails

This module provides functionality for creating a new issue out of the Ranorex report. A new issue only gets created when the link provided within the report is clicked, so the user or tester can decide whether or not an issue is created.

The compressed Ranorex report is uploaded to the newly created issue as well.

[Screenshot: rxReport]

Note: This functionality relies on a batch file created by Ranorex in the output folder and on the Jira command line interface (CLI). It does not depend on a prior initialization by “InitializeJiraReporter”.

The module exposes the same variables as the modules mentioned above. One additional parameter is essential for this module:

  • JiraCLIFileLocation: The full path to the “jira-cli-<version>.jar” file, provided by the Jira CLI.

The following requirements need to be met to use this module:

  • Remote API must be enabled in your JIRA installation
  • The mentioned batch file needs to remain accessible at the same file path where it was initially created. If the file is moved to a new location, the link no longer works; in that case, the batch file needs to be started manually.

 

JiraReporter Source Code:

The whole project containing the code for the JiraReporter is available on GitHub under the following link:

https://github.com/ranorex/Ranorex-Jira-Integration

Please feel free to modify the code according to individual needs and/or upload new modules.

 

Categories: Companies

A Faster Android 5 is Coming; Get the Most out of your Android App Performance!

A new version of Android, Lollipop, is coming up, and as usual the Android team at Google is promising that it will be faster, backed by the new ART runtime, with promised performance improvements of up to 100%. I am lucky that I can always take a look at new releases of […]

The post A Faster Android 5 is Coming; Get the Most out of your Android App Performance! appeared first on Dynatrace APM Blog.

Categories: Companies

Seapine’s 2014 Holiday Shopping Guide

The Seapine View - Wed, 11/26/2014 - 12:00

Trying to come up with gift ideas for those hard-to-shop-for people on your holiday list? Maybe some of Seapine’s customers can help!

For the Gamers

Borderlands the Pre-Sequel The newest Borderlands game from 2K Games and Gearbox is a favorite around the Seapine office. ($60)

G302 Daedalus Prime MOBA Gaming Mouse If the gamers on your list rock it PC style, they’ll love this gaming mouse from Logitech. Designed by pro gamers, the G302 is a precision gaming tool. ($50)

Kingdom Hearts 2.5 HD ReMIX Fans of Square Enix’s Kingdom Hearts series will love this new HD compilation for the PlayStation 3. It releases December 2, just in time for the holidays. ($40)

Super Smash Bros. Namco Bandai’s new game for the Nintendo 3DS and Wii U has been called “completely insane and absolutely amazing.” ($40)

Epic Gamer T-Shirt Who doesn’t want to be an Epic Gamer? Every gamer on your list will love this shirt from Epic Games. ($25)

For the Artists

Intuos Tablet Wacom is the leading manufacturer of pen tablets, styluses, and interactive pen displays that let the artists on your list express their creativity as fluently in the digital world as they would with ink on paper or paint on canvas. ($99)

Lunar LF 18-55mm For photographers, Hasselblad is the top of the line. Our favorite is the LF 18-55mm with the olive wood accents. It’s a gorgeous camera that takes gorgeous photos. ($7,000)

Momentum Headphones For the music lovers on your list, nothing sounds better than Sennheiser. They’ll rock out in style with a pair of Momentum headphones, which WIRED called “more than just good-looking, they’re downright sexy.” ($270)

Music Gear from TC Electronic Shopping for a rocker? Get great guitar and bass gear from TC Electronic! TC makes high-quality pedals, amps, and more. (prices vary)

For the Kids

Frozen There’s no denying it: Frozen is still hot. Don’t own it? Pick up a copy on DVD or Blu-Ray from Walt Disney Motion Pictures and “Let It Go.” ($25 on Blu-ray)

The Ghost From the hit new Star Wars Rebels, LEGO’s version of the Ghost spaceship will have the Jedi on your list itching to fight the Empire. ($90)

LeapTV Get young minds and bodies moving with this educational, active video game system from LeapFrog. ($150)

Olaf’s in Trouble Olaf’s in Trouble is a Frozen version of the classic Trouble game by Hasbro. If your kids love Frozen, they’ll have a blast playing this game as their favorite Frozen character, traveling around Arendelle to save Olaf. ($15)

Nerf N-Strike Elite Demolisher 2-in-1 Blaster For the bigger kids on your list, get them the newest in Nerf firepower. They’ll dominate their next Nerf war with motorized dart firing and missiles. ($40)

For the Ones with Everything

[+] Trip Universal Air Vent Mount Logitech makes fantastic gadgets, and one of our favorites is the [+] Trip smartphone mount for the car. It’s perfect for gadget lovers, no matter what phone they have. ($30)

Bathfitter Has someone been hinting about upgrading the master bathroom? Surprise them with a makeover from Bathfitter. (prices vary)

Barcelona Got someone on your list who says they don’t want “stuff”? Give the experience of a lifetime with a trip to Barcelona! BCN Travel makes the planning easy. (prices vary)

Bourdon Messenger Bag For the sophisticated traveler, this messenger bag from Alfred Dunhill is sure to please. ($1,040)

Braun Series 7 790cc Shaver “Movember” is coming to an end, so the bearded ones on your list might soon need a new shaver. Braun makes the best. ($270)

LOVE Bracelet Add a little sparkle to the holidays for that special woman in your life with this gorgeous white gold and diamond bracelet from Cartier. ($11,100)

For Stocking Stuffers

Davids Tea Festive Collection Our sales and support teams enjoy Davids Tea so much, they’ve taken to holding “high tea” every day. If you’ve got a tea lover on your list, they’re sure to enjoy the Festive Collection. ($50)

A Christmas Story This classic holiday comedy from Warner Brothers is a favorite at Seapine. If you know anyone who hasn’t seen it, they’ll thank you for putting this in their stocking. ($18)

Sonic Drive-In Gift Card Sonic Drive-In has great food and desserts, so everyone will appreciate a Sonic gift card! (prices vary)

Need more ideas?

If you still haven’t checked off everyone on your list, visit Conn’s Home Plus for ideas and great deals on electronics, computers, appliances, furniture, and more.

Happy shopping and Happy Holidays from Seapine Software!


Categories: Companies

TenKod Launches New Solution For Mobile Testing

Software Testing Magazine - Tue, 11/25/2014 - 21:23
Featuring ingenious technology, TenKod EZ TestApp offers clients a cost-effective yet efficient solution within the world of mobile application testing. Generated test projects are compatible with all leading Continuous Integration (CI) systems, such as Jenkins, Atlassian Bamboo and JetBrains TeamCity, and do not require device jailbreaking, rooting or application instrumentation. The EZ TestApp approach helps development organizations shift mobile application testing from a late, post-development phase to the early stages of the development process. EZ TestApp provides the conveniences associated with an open source and proprietary supported platform, meaning that it ...
Categories: Communities

Being a Better Test Leader or Test Manager

Software Testing Magazine - Tue, 11/25/2014 - 18:31
It is not always easy to take on management responsibilities in software development when you come from a technical position. This is also true in software testing. In this article, Mark Garzone shares some tips on how to be a better test leader or test manager. Author: Mark Garzone, https://www.smashwords.com/books/view/485652 Are you a test leader, or do you aspire to become the test leader of a team of testers? Follow these tips on leading your team of testers to stellar results. Invest in your test team’s education by buying testing books, paying for tester certification programs, ...
Categories: Communities

Talking Turkey in Texas: Open Source Governance Lags

Sonatype Blog - Tue, 11/25/2014 - 16:56
Deep in the heart of Texas, I was leading a panel discussion at the Lone Star Application Security Conference (LASCON) a few weeks ago. The panel was “talking turkey” about the importance of application security and open source software development, when the conversation led to a discussion about...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

New uTest Platform Features Emphasize Quality

uTest - Tue, 11/25/2014 - 16:56

Last week, uTest launched two new platform features for uTesters on paid projects that continue to move the needle in our continuous pursuit of quality (plus a very useful change to existing tester dashboard functionality). Here’s a recap of what is included in the latest uTest Platform release.

Bug Report Integrity

Most testers understand that the role of a bug report is to provide information. However, a “good” or valuable bug report takes that a step further and provides useful and actionable information in an efficient way. As such, in addition to approving tester issues, Test Team Leads (TTLs) and Project Managers (PMs) now have the ability to rate the integrity of a tester’s bug report by setting it to High, Unrated or Low. By default, all bugs are set to Unrated.

[Screenshot: bug-report-integrity]

The Bug Report Integrity feature will reward testers who meet a high report integrity standard by providing a positive rating impact to the quality sub-rating. Conversely, we will also seek to educate testers who may be missing the mark by negating any positive impact that may have occurred based on the value of the bug itself.

For more information, please review the Bug Report Integrity uTest University course.

Tester Scorecard

When navigating into a test cycle, you will see a new tab called “Tester Scorecard.” Clicking this tab will bring up a ranked list of testers based on their bug submissions and the final decisions on these bugs — i.e. approvals and rejections.

Points are awarded according to the explanation at the top of the Scorecard and result in a score that is used to rank testers by performance. The table can be sorted by any of the columns. If two testers have identical scores (i.e., the same number of bugs approved at the same value tiers), the tester who started reporting bugs first ranks higher.

Our hope is that this Scorecard will spark some additional competition among top performers and will also be useful for PMs and TTLs to choose testers for participation bonuses. Of course, it is still at the discretion of the TTL or PM to decide who won any bug battles or is eligible for any bonus payments.

Note: Scores indicated on the scorecard do not impact the tester’s rating.

[Screenshot: Score Card]

Feature Change: Payout Card

Additionally, there was an improvement to existing functionality within the tester dashboard. Pending payouts are now included so that testers can easily see how much they have earned:

[Screenshot: Payout Card]

If you like what you see, feel free to leave your comments below, or share your ideas on these and other recent platform updates by visiting the uTest Forums. We’d love to hear your suggestions, and frequently share this valuable feedback with our development team for future platform iterations!

Categories: Companies

Finding and Fixing Memory Leaks in Tibco Business Works

We work with a lot of performance engineers who have Tibco Business Works (BW) in the mix of technologies they are responsible for. This particular story comes from A. Alam, a performance engineer responsible for a large enterprise application that uses Tibco to connect its different system components. Alam and his […]

The post Finding and Fixing Memory Leaks in Tibco Business Works appeared first on Dynatrace APM Blog.

Categories: Companies

Report from the field on TDD in embedded development

James Grenning’s Blog - Mon, 11/24/2014 - 23:11

Thanks James.

Upper management actually asked me to share my TDD experience as well, so I just published an article internally in our Embedded Software newsletter describing how TDD helped my project. Here’s the summary from that article (I think the dates really say it all):

My doubts that TDD could be used for an embedded application with an emphasis on external peripherals have been eliminated, and I have found the time invested in writing tests and mocks to be well worth it.

I find it compelling that

  1. I required only 4 days of actual hardware testing before achieving my integration goal, and that goal came essentially 2 months ahead of schedule.
  2. For the past 5 months, since May, I have not used the in-system debugger at all, and instead rely on TDD to minimize the introduction of bugs in the first place.

Based on my experience, I found TDD to be a positive feedback exercise: passing my first tests and catching bugs immediately encouraged me to write more tests, which led to more successful results, until I now have a high level of code coverage and a handy set of regression tests. (And since I wasn’t frantically debugging in the lab, I had enough time to write this article!)

Thanks, Name Withheld

Categories: Blogs

Advanced Usage of py.test Fixtures

Testing TV - Mon, 11/24/2014 - 18:47
One unique and powerful feature of py.test is the dependency injection of test fixtures using function arguments. This talk presents py.test’s fixture mechanism, gradually introducing more complex uses and features. This should lead to an understanding of the power of the fixture system and of how to build complex but easily-managed test suites using fixtures. Video […]
Categories: Blogs

Will You Get a Job in 2024 Without TDD?

Software Testing Magazine - Mon, 11/24/2014 - 18:32
This presentation looks at the chasm-crossing potential of Test-Driven Development (TDD) and some related technologies. The aim is that you will still be able to get a good job in 2024. Geoffrey Moore’s book “Crossing the Chasm” outlines the difficulties faced by a new, disruptive technology when adoption moves from innovators and visionaries into the mainstream. Test-Driven Development is clearly a disruptive technology that changes the way you approach software design and testing. It hasn’t yet been embraced by everyone, but is it just a matter of time? Ten years ...
Categories: Communities

Meet the uTesters: David Oreol

uTest - Mon, 11/24/2014 - 16:15

David Oreol has been a uTester since the very beginning, and is a full-time Test Team Lead Premier and gold-rated tester on paid projects at uTest. Before joining the community, David earned a B.S. in Computer Science from California State University, Fresno and worked in IT and as a software engineer.

Be sure to also follow David’s profile on uTest as well so you can stay up to date with his activity in the community!

uTest: Android or iOS?

David: For work, both. I like testing on both environments, but for personal use, it is iOS and Mac all the way. I like the ease of use and integration between the mobile and desktop platforms. I don’t like having to constantly tweak my phone or computer to get it to work. I used to be a die-hard Windows fan, but I switched to Mac a few years ago and haven’t looked back.

uTest: What drew you into testing initially? What’s kept you at it? 

David: I’ve always been one to sign up for beta testing of apps I use, so it was a natural fit. I have a degree in Software Engineering as well, so that certainly helps out. What’s kept me going is the variety of products. I’ve tested everything from hardware devices to websites to Mac and PC apps to iOS and Android apps. Many of the products I have gotten to test weren’t available to the public yet. Seeing something that I tested out in the wild is a big thrill for me, even if I can’t tell anyone that I worked on it.

uTest: What’s your go-to gadget?

David: For work, my new iPhone 6 Plus. I’m finding some interesting bugs with it since it has the larger screen and the new wider landscape layout. For relaxing, I love my Kindle Paperwhite. The e-ink screen is so much easier on my eyes than a traditional backlit screen. I think that everyone that reads a lot should own an e-ink reader.

uTest: What is the one tool you use as a tester that you couldn’t live without?

David: My 27” iMac. The large screen really helps with big spreadsheets for work. Additionally, OS X has built-in virtual desktops that are super easy to use. I normally run 7 desktops with different browsers and tools on each one. It’s almost like having multiple monitors, but without taking up all my desk space.

uTest: What keeps you busy outside testing?

David: Lately, I’ve been running and walking a lot. I enjoy the time away from the computer. Otherwise, I spend most of my time with my wife and playing with our ferrets. We also really enjoy hiking and tent camping.

You can also check out all of the past entries in our Meet the uTesters series.

Categories: Companies
