
Feed aggregator

NServiceBus 5.0 behaviors in action: routing slips

Jimmy Bogard - Thu, 10/02/2014 - 14:57

I’ve written in the past about how routing slips can provide a nice alternative to NServiceBus sagas, using a stateless, up-front approach. In NServiceBus 4.x, it was quite clunky to actually implement them. I had to plug in to two interfaces that didn’t really apply to routing slips, only because those were the important points in the pipeline to get the correct behavior.

In NServiceBus 5, these behaviors are much easier to build because of the new behavior pipeline features. Behaviors in NServiceBus are similar to HttpHandlers or koa.js callbacks, in that they form a series of nested wrappers around inner behaviors in a sort of Russian doll model. It’s an extremely popular model, and most modern web frameworks include some form of it (Web API filters, node, FubuMVC behaviors, etc.).

Behaviors in NServiceBus are applied to two distinct contexts: incoming messages and outgoing messages. Each context is represented by a context object, giving you access to information about the current message-handling context without resorting to tricks like dependency injection to get at it.

In converting the route supervisor in my routing slips implementation, I greatly simplified the whole thing, and got rid of quite a bit of cruft.

Creating the behavior

To first create my behavior, I need to create an implementation of an IBehavior interface with the context I’m interested in:

public class RouteSupervisor
    : IBehavior<IncomingContext> {
    public void Invoke(IncomingContext context, Action next) {

Next, I need to fill in the behavior of my invocation. I need to detect if the current request has a routing slip, and if so, perform the operation of routing to the next step. I’ve already built a component to manage this logic, so I just need to add it as a dependency:

private readonly IRouter _router;

public RouteSupervisor(IRouter router) {
    _router = router;
}

Then in my Invoke call:

public void Invoke(IncomingContext context, Action next) {
    string routingSlipJson;

    if (context.IncomingLogicalMessage.Headers.TryGetValue(Router.RoutingSlipHeaderKey, out routingSlipJson)) {
        var routingSlip = JsonConvert.DeserializeObject<RoutingSlip>(routingSlipJson);

        context.Set(routingSlip);

        next();

        _router.SendToNextStep(context);
    }
    else {
        next();
    }
}

I first pull the routing slip out of the headers. This time, I can just use the context to do so; NServiceBus manages everything related to the context of handling a message in that object.

If I don’t find the header for the routing slip, I can just call the next behavior. Otherwise, I deserialize the routing slip from JSON, and set this value in the context. I do this so that a handler can access the routing slip and attach additional contextual values.

Next, I call the next action (next()), and finally, I send the current message to the next step.

With my behavior created, I now need to register my step.

Registering the new behavior

Since I now have a pipeline of behaviors, I need to tell NServiceBus when to invoke my behavior. I do so by first creating a class that represents the information on how to register this step:

public class Registration : RegisterStep {
    public Registration()
        : base(
            "RoutingSlipBehavior", typeof (RouteSupervisor),
            "Unpacks routing slip and forwards message to next destination") {
        InsertBefore(WellKnownStep.LoadHandlers);
    }
}

I tell NServiceBus to insert this step before a well-known step: loading handlers. I (actually Andreas) picked this point in the pipeline because, in doing so, I can modify the services injected into my step. The last piece is configuring and turning on my behavior:

public static BusConfiguration RoutingSlips(this BusConfiguration configure) {
    configure.RegisterComponents(cfg => {
        cfg.ConfigureComponent<Router>(DependencyLifecycle.SingleInstance);
        cfg.ConfigureComponent(b => b.Build<PipelineExecutor>().CurrentContext.Get<RoutingSlip>(), DependencyLifecycle.InstancePerCall);
    });
    configure.Pipeline.Register<Registration>();

    return configure;
}

I register the Router component, and next the current routing slip. The routing slip instance is pulled from the current context’s routing slip – what I inserted into the context in the previous step.

Finally, I register the route supervisor into the pipeline. With the current routing slip registered as a component, handlers can access the routing slip and add attachments for subsequent steps:

public RoutingSlip RoutingSlip { get; set; }

public void Handle(SequentialProcess message) {
    // Do other work

    RoutingSlip.Attachments["Foo"] = "Bar";
}

With the new pipeline behaviors in place, I was able to remove quite a few hacks to get routing slips to work. Building and registering this new behavior was simple and straightforward, a testament to the design benefits of a behavior pipeline.


Categories: Blogs

andagon GmbH - New European Service Partner

Ranorex - Thu, 10/02/2014 - 10:30
As we have rapidly grown our presence and customer base in Europe, we have also seen an increasing demand for Ranorex consulting and implementation services. To help meet this demand, we have the pleasure of announcing that we have partnered with andagon GmbH.

andagon GmbH is a well-established player in the DACH region for software quality assurance, test automation and test management. andagon offers a highly competitive product and service portfolio, including consulting services, professional services, its own ALM solution named aqua, and innovative cloud-based testing services.

Since 2009, andagon has worked closely with Ranorex, providing test automation and training services to customers ranging from enterprises to startups. All of its 60 test specialists have deep expertise in using Ranorex and have delivered solutions to customers in many different projects. For more information about andagon, please visit
Categories: Companies

Why Attend the DevOps Enterprise Summit?

Sonatype Blog - Thu, 10/02/2014 - 07:54
Major enterprises are embracing DevOps. The DevOps Enterprise Summit is bringing together top practitioners who are leading DevOps transformations in large, complex organizations. It is a three-day conference on October 21-23, where leaders share their lessons learned, spanning culture,...

To read more, visit our blog at
Categories: Companies

More Agile Testing!

Agile Testing with Lisa Crispin - Wed, 10/01/2014 - 19:04



More (Agile Testing)

(More Agile) Testing

Janet Gregory and I, along with more than 40 contributors and many helpful reviewers (it takes a village), have finished our new book. We delve into many areas that are new or more important since our first book Agile Testing was published. Please see our book website to see the giant mind map of the entire book, plus bios of all our contributors, and one of the chapters, Using Models to Help Plan!

The post More Agile Testing! appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Better than nothing

The Kalistick Blog - Wed, 10/01/2014 - 18:19

Here’s a question I get occasionally:

public class C<T> where T : class
{
    public void M<U>(C<U> cu) where U : T
    { ... }
}

This gives an error stating that U must be a reference type to use it in C<U>. But the outer constraint says that T is a reference type, and the inner constraint says that U is a T, so why doesn’t the compiler know that U is a reference type? Is this a bug in the compiler?

Let me answer that question with a classic “dad joke”, delivered on the occasion of setting fire to dinner at the barbecue:

  • An overcooked hamburger is better than nothing.
  • Nothing is better than a delicious steak.
  • Therefore: an overcooked hamburger is better than a delicious steak.

A similarly bad syllogism I like:

  • “Santa Claus” is a proper noun.
  • “A proper noun” is three words long.
  • Therefore: “Santa Claus” is three words long.

And finally:

  • T is a reference type.
  • A U is a T.
  • Therefore: U is a reference type.

None of these syllogisms are valid and all of them lead to nonsense results — though the exact way in which each goes wrong is slightly different, each of them somehow abuses the word “is”. In our last example, the problem is that the word “is” has been used to mean two completely different things. Let’s rewrite that syllogism and we’ll see where it goes wrong.

  • The type argument provided for T must be a reference type.
  • The type argument provided for U must be implicitly convertible to that provided for T.
  • Therefore: U is a reference type.

Now it is clear that the conclusion does not follow from either premise. U must be convertible to T, and it is possible that it is convertible via a boxing conversion, which implies that U is not a reference type. T could be object and U could be int, for example. So no, the bug is not in the compiler. The error message is correct, and U needs to have a reference type constraint placed on it.

The post Better than nothing appeared first on Software Testing Blog.

Categories: Companies

uTest Announces the Grand Prize Winner of the Ideal Tool Contest

uTest - Wed, 10/01/2014 - 16:59

Last month, we asked the uTest Community to submit their ideas for the ideal testing tool – one with a unique feature or that combines some favorite features and functions into one tool. The Ideal Tool Contest was a competition for testers to design a testing tool targeted at the manual functional tester. We also offered one of the largest prize packages in recent history, with over $1,000 in prize money as well as uTest t-shirts.

Voting for the Ideal Tool Contest just wrapped up yesterday and we are happy to announce the Grand Prize winner and the four runners-up. Be sure to leave a comment to congratulate these folks! You can also take a moment to click through and read each of the winning entries.

Grand Prize Winner

The 30-second Recorder by Amir Horesh

A testing tool that helps the tester reproduce a recently found defect by recording the last 30 seconds of their activity and producing an activity log as well. This tool saves all of the user’s activities (clicks on screen, typing, etc.) in a log file and records the screen and the user’s activities for the last 30 seconds (it always keeps 30 seconds of recording), helping the tester provide clear steps to reproduce the defect for the developer. Read the complete entry (PDF).


The Swiss Knife QA Tool by Anand Reddy Pesaladinne

A one-stop solution across mobile OSes, covering Android, iOS, and Windows Phone devices. The Swiss Knife QA tool is a mobile application analyzer that connects Android, iOS, and Windows Phone devices via USB cable to a Windows or Mac machine and helps you capture log files, take screenshots, and record a screencast of the screen. Read the complete entry (PDF).

The Complete Mobile Bug Report by Georgios Boletis

A complete mobile bug report (GUI, functional, or technical) for both environments (iOS and Android) that contains screenshots (preferably with markups), videos, logs, and crash reports. In order to get all this info, currently you need several separate apps for each environment and, of course before that, the installation of the testing app is needed. This ideal testing tool would combine all these features into one. Read the complete entry (PDF).

The Bug Recommender and Custom Template Tool by Ronny Sugianto

A single desktop app with supporting mobile and web add-ons – all completely synchronized with one another. The Bug Recommender System automatically scans each basic function of the product (e.g. finding broken links, broken images, unplayable videos, basic form-validation issues, etc.). For each issue found, the system directly captures a screenshot annotated with the location of the issue and converts it into a custom template report that is ready to submit along with all the attachments. Read the complete entry (PDF).

The Website Analyzer by Vanessa Yaginuma

A tool that analyzes a website based on configurable criteria and provides a visual report, like a heat map, of the application. The goal of the tool is to provide additional information for manual testing execution by highlighting areas of the page where the tester can focus first. Read the complete entry (PDF).


Again, congratulations to all the winners. uTesters, be sure to keep an eye out for future uTest contests. We have some fun competitions in store for our community!

Categories: Companies

Ruby Selenium-Webdriver - Quick Start

Yet another bloody blog - Mark Crowther - Wed, 10/01/2014 - 16:19
Guess how old Selenium is? If you didn't know, it's now (over) 10 years old... no really! How about Selenium 2? Well that was released in July of 2011, so it's not 'new' by any means. 

If you've not had a look at it yet, now's the time! Selenium-Webdriver will allow you to execute web tests using a range of browsers more easily than before. You can also use your favourite programming or scripting language and a range of other tools to enhance your testing.

As always, I'll be using Ruby on Windows for this demo and assume you have Firefox browser available - let's get going!


1. Install Ruby
To do that either read the blog post here or watch the video on YouTube:

Set-up and install Ruby
2. Check your Gems
We're going to need the selenium-webdriver gem. To install that, open a CMD window (start > run > 'cmd') and type gem install selenium-webdriver. You can check installed Gems by typing gem list which shows what's available and their version.

3. Start Interactive Ruby (IRB)
For this demo we'll just run commands straight from IRB. Using a CMD window type irb to start IRB.

In IRB type require 'selenium-webdriver' to load the WebDriver library, so we can create an instance and pass it commands to execute.

4.  Open the browser!
Yes, we're ready to start using Webdriver. Now type the following to invoke an instance of Firefox with the reference of browser.

browser = Selenium::WebDriver.for(:firefox)

If all is OK then Firefox will open. If you get an 'access' warning, just click OK.

5.  Run some tests
Now work through the following commands to run a basic test using Google.

Type: browser.navigate.to "http://www.google.com"
Google will now load in the blank browser instance.

Type: browser.find_element(:name, "q").send_keys("Hello")
This will type 'Hello' in the query text field, but not submit it.

Type: browser.find_element(:name, "btnG").click
We'll now see search results returned.


Watch the video!

References for DYOR:

Liked this post?
Say thanks by Following the blog or subscribing to the YouTube Channel!

Categories: Blogs

Playbook for Performance at Velocity New York 2014

Last week at Velocity Conference - New York, I had the opportunity to sit in on a keynote address by Mikey Dickerson on the topic “One Year After Where Are We Now?” Mikey Dickerson is the Administrator/Deputy CIO of the USDS. In October 2013 he took a leave of absence from Google to join what became known as the “ad […]

The post Playbook for Performance at Velocity New York 2014 appeared first on Compuware APM Blog.

Categories: Companies

Latest open source vulnerability further highlights importance of security

Kloctalk - Klocwork - Wed, 10/01/2014 - 15:00

Earlier this year, the discovery of the Heartbleed vulnerability caused a tremendous amount of discussion and worry around open source security. While some argued that this incident revealed that open source is not as reliable as originally thought, most experts believed that Heartbleed was essentially an anomaly, one that would drive companies and IT personnel to improve their security efforts.

However, now another open source vulnerability has appeared, one that is quite possibly even more dangerous than Heartbleed. This bug, known as Shellshock, is a serious flaw that could cause major problems. As was the case with Heartbleed, this discovery should serve as motivation for companies relying on open source to shore up their security efforts.

As Harvard Business Review contributor Karim Lakhani explained, Shellshock is a software flaw in the Bash shell. It allows hackers to potentially gain control over Linux and Unix computer systems, running whatever commands they so desire.

Shellshock has major implications for the Internet of Things, according to Lakhani. A huge number of machines are now connected to the Internet of Things and may be vulnerable to cybercriminals exploiting this flaw. This, the writer explained, makes Shellshock a much more significant threat to companies’ cybersecurity efforts than Heartbleed ever was. Heartbleed threatened personal information, while Shellshock threatens actual operations.

Response time
Fortunately, as Lakhani noted, affected organizations, including open source communities, have quickly initiated efforts to mitigate the damage and deliver solutions that can counter Shellshock. Some of these are already available.

However, this is only a short-term solution, and it does not address the greater issue. According to Lakhani, Shellshock will not be the last major vulnerability to appear, and firms need to take steps to prepare for more discoveries of this sort.

Infoworld contributor Roger Grimes seconded this notion. Grimes argued that this latest vulnerability should be seen as evidence that simply having more eyes viewing open source code does not automatically ensure its security. Put simply, he explained that while open source presents the opportunity for many people and organizations to view and evaluate a given piece of code’s security, the fact of the matter is that most will not take this step. And if that’s the case, then the potential inherent security advantages of open source will remain theoretical.

Proactive efforts
All of this does not mean that companies should abandon open source. Instead, businesses need to realize that it is dangerous to rely on others to verify the security of these solutions. This mindset has led many companies to embrace open source without taking the appropriate defensive measures, thereby putting their assets at risk.

Instead, organizations should invest in high-quality open source security tools that can protect these resources. Specifically, open source scanning tools can reveal precisely how open source is being used throughout the organization. This insight is key for ensuring that the company is following best practices with its open source adoption, limiting the risk of exposure or data loss.

Categories: Companies

Ruby Basics » Part 15 | Hashes - A Quick Intro

Yet another bloody blog - Mark Crowther - Wed, 10/01/2014 - 12:32
Welcome to the first post of the second part, in our series on Ruby Basics. To see what's coming up, check out the Trello board:


When we looked at Arrays, we saw that collections of data were stored under a given Array name. These were accessed by knowing the integer value of the data item’s location in the array. If you recall in the Basics 1 Wrap Up, we had the following Array:
        rolesArray = ["Developer", "Tester"]
To access these we need to use [0] and [1], as in
        print rolesArray[0]
Later on we assigned David or Alan to one of these roles and this worked fine. But what if we now wanted to assign them individual salaries, periods of employment, holidays allocated or other relevant data? We could create Arrays and put the data in the same sequence as the employee array. For example we might set up:
rolesArray = ["Developer", "Tester"]
rolesHolder = ["Dave", "Alan"]
rolesSalary = [40000, 35000]
rolesHoliday = [25, 25]
I’m guessing you can see that’s all well and good if everything stays in order. For those with a little database knowledge, the problems with the above approach scream even louder. What we need is a way to explicitly pair the data above with a key bit of data that won’t change. In this case that key bit of data is the employee name. How can we label the various bits of data with the employee name?
What we need is a key --> value pairing of data, so no matter what order they are stored, we can find, edit, update, and delete the correct one. As luck (Ruby) would have it, what we need is a Hash.
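To see the difference concretely, here is a small sketch (the salaryByName name is my own, not from the series) contrasting positional lookup in parallel arrays with key-based lookup:

```ruby
# Parallel arrays: the pairing is only implied by the ordering,
# so a single out-of-order insert silently corrupts the data
rolesHolder = ["Dave", "Alan"]
rolesSalary = [40000, 35000]
puts rolesSalary[rolesHolder.index("Alan")]  # => 35000, but only while the order holds

# A key-value pairing makes the link explicit: each value is stored against a key
salaryByName = { "Dave" => 40000, "Alan" => 35000 }
puts salaryByName["Alan"]                    # => 35000, no matter what order we added them in
```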


A Hash is a collection of Key-Value pairs. Hashes are also referred to as associative arrays, maps and dictionaries. They’re like Arrays in that they’re still variables storing variables, however unlike Arrays the data isn’t stored in a particular order. Also unlike Arrays, we don’t push/pop data into and out of the Hash, we simply insert and delete Hash values. Let’s look at making a Hash for some of the above data.
We can make a new empty Hash in a similar way to a new empty Array;
        rolesHash = Hash.new
If we print the above, of course nothing will be returned. As we then acquire data to add to it, we can insert the data by giving the key-value pairs:
        rolesHash["David"] = "Developer"
Try running the entire snippet below:
rolesHash = Hash.new
puts rolesHash
rolesHash["David"] = "Developer"
rolesHash["Alan"] = "Tester"
puts rolesHash
Here we add two key-value pairs to our newly created Hash and print the entire Hash out, which looks something like this:
        {"David"=>"Developer", "Alan"=>"Tester"}
If we wanted to find out what role David was currently in we could look it up using the key:
                    puts rolesHash["David"]

If you try this with a name that is not in the Hash, then the result will be nil which isn’t very informative. A better way is to define a default value, for example:
                    rolesHash = Hash.new("No Role Assigned")

Try it again and watch the default message get printed.
Adding to the Hash is good, but we also need to delete items too. To do this we simply call the delete method on the hash and specify which key we want deleting.
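A quick sketch of delete in action, reusing the rolesHash built up above (assuming the default value has been set as just described):

```ruby
rolesHash = Hash.new("No Role Assigned")
rolesHash["David"] = "Developer"
rolesHash["Alan"] = "Tester"

# delete takes the key whose key-value pair we want removed
rolesHash.delete("Alan")

puts rolesHash["Alan"]  # the key is gone, so we get the default: No Role Assigned
puts rolesHash          # only David's pair remains
```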
If you’d prefer to just build out your Hash from the start, you can do that too.
salaryHash = Hash["David" => 30000, "Alan" => 35000]
puts salaryHash

We’ll leave the basics of Hashes there, as always have a look at the Ruby docs to see more of the methods available. Later on, we’ll look at some of the more complex aspects of Hashes, but for now we have what we need!

Read More

Liked this post?
Say thanks by Following the blog or subscribing to the YouTube Channel!

Categories: Blogs

Announcing the GTAC 2014 Agenda

Google Testing Blog - Wed, 10/01/2014 - 02:37
by Anthony Vallone on behalf of the GTAC Committee

We have completed selection and confirmation of all speakers and attendees for GTAC 2014. You can find the detailed agenda at:

Thank you to all who submitted proposals! It was very hard to make selections from so many fantastic submissions.

There was a tremendous amount of interest in GTAC this year with over 1,500 applicants (up from 533 last year) and 194 of those for speaking (up from 88 last year). Unfortunately, our venue only seats 250. However, don’t despair if you did not receive an invitation. Just like last year, anyone can join us via YouTube live streaming. We’ll also be setting up Google Moderator, so remote attendees can get involved in Q&A after each talk. Information about live streaming, Moderator, and other details will be posted on the GTAC site soon and announced here.

Categories: Blogs

NASDAQ OMX and a Year of Resiliency

The Kalistick Blog - Tue, 09/30/2014 - 22:29

Best Practices in Software Testing for Financial Services

On September 25th we hosted a networking event at the Breslin’s Liberty Hall at the Ace Hotel in New York, where we featured one of our customers, Ann Neidenbach, SVP, Global Technology Services, NASDAQ OMX (NASDAQ: NDAQ), a leading provider of trading, exchange technology, information and public company services across six continents. As the creator of the world’s first electronic stock market, its technology powers more than 70 marketplaces in 50 countries, and 1 in 10 of the world’s securities transactions. NASDAQ OMX is home to more than 3,400 listed companies with a market value of over $8.5 trillion and more than 10,000 corporate clients. NASDAQ OMX is no doubt one of our largest customers.

At our event, Ann described the importance of self-regulating and reporting, best practices in improving software resiliency, how to drive accountability throughout your organization, how to improve customer satisfaction and anticipate regulatory requirements. Given how these markets, and the technology they utilize, are under increased scrutiny today by regulators and customers alike and because of disturbances caused by electronic trading, concerns over software quality of algorithms and trading systems, and high profile security breaches, it’s no wonder that highly-recognized organizations like NASDAQ OMX are adopting best practices in software testing.

“It’s difficult dealing with all 15 of the US equity markets along with the complexities of over 50 venues if you’re an electronic trader,” Ann stated. “Every year you’re working to keep up with the changes, your team is undergoing change, and don’t forget you need to be generating revenue.” In order to overcome these hurdles, Ann’s team has approached their testing by reducing risk through best practices in testing, and over the course of this year, coined “The Year of Resiliency,” NASDAQ OMX has achieved this by focusing on core principles of software testing via reliable and resilient systems, preventing errors through robust system design, deployment and operation, as well as improving information security internal protections.

To many of you this may sound familiar.

Back in April of 2013 the Financial Services Sector Coordinating Council (FSSCC) for Critical Infrastructure Protection and Homeland Security released the “Research & Development Agenda for the Finance Services Sector.” The agenda outlines core objectives for the financial services industry, an overview of the threat landscape as well as proposed actions for everyone involved. Among this list “Software Technology Assurance,” and “Testing Financial Applications,” are top-of-mind. “The financial services sector is fighting an asymmetric battle against its adversaries; we create static defenses against every possible attack, while our adversaries create targeted attacks on only the weakest points of our systems. In light of this, we need better defensive tools, tactics and processes that enable us to respond with more agility,” the agenda states. It goes on to say, “Testing should occur not just at one point in time but over the entire lifecycle. It should include the supply chain, starting where the software and hardware are first manufactured and shipped… Testing needs to be more automated so it can simulate and test greater conditions, attacks and situations.” Not only this, under the new approach organizations must now prioritize training for improved architecture and the newer approach of “defensive coding.”

Industry-wide, Ann and her peers are focused on looking at their architecture, and a big part of this has been a “bend; don’t break” approach. “What can we do to extend our testing and planning so we have a seamless backup or a seamless fail?” Ann goes on to say, “It’s a very interesting time – this is a prevalent ask from our government and not just the SEC – from businessmen and congressmen. The House Finance Committee just spoke about what we need to do to improve investor confidence. And, we never want to lose sight of why we have markets in the first place: it’s about job creation. The roots of why markets exist and why we trade are not just about influence and money.”

Given the complexity and the importance of these issues, NASDAQ OMX has chosen Coverity to help overcome and improve this daunting task. Traditionally, in most organizations developers are writing their own unit scripts; meanwhile QA teams are “doing things by hand – whatever that means nowadays,” but you can only go so far.

How Coverity can help.

“We’ve been focusing on functional and flows but really most on integration testing,” Ann states. “Our Market Operations team was throwing everything at it, and we reached a point where they said we need to get you to think and automate. With Coverity, we’re coming at the code sideways and taking the data to run through every single state the data can transform to – as well as seeing what error or problem it can create.” Coverity helps developers find and fix defects and vulnerabilities at the earliest stages of development, in an automated and highly reportable fashion – a perfect fit for the needs of NASDAQ.

Today, Ann and team are leveraging not only Coverity Test Advisor for Development, but also Coverity Policy Manager. “When you’re talking to a regulator or a board, it’s great when you can say ‘I have 70,000 test cases and this here is a bug that was put in a year ago.’ We still have our scripts. We run Coverity every night. The new twist on this is focusing on state transition testing.” From Coverity Policy Manager, Ann can see at-a-glance the state of her software quality and security, so she knows what has been tested, what needs to be fixed and when her team is truly ready to launch their code.

With the Coverity platform being used across Development, Security and QA teams, it has proven to be a successful combination at NASDAQ OMX and according to Ann, is making the release process that much more reliable and robust. We are thrilled to be working with such a large organization as NASDAQ OMX and are proud they’re able to rely on Coverity to help improve test efficiency and deliver high quality, secure code.

To learn more about how you can improve your resiliency and test efficiency checkout:

  • Test Automation During Development: A Paradigm Shift
  • Managing Quality with Developer Desktop Analysis
  • Development Testing for Java Applications
  • PCI Compliance Starts at the Source

The post NASDAQ OMX and a Year of Resiliency appeared first on Software Testing Blog.

Categories: Companies

Test Automation in the Age of Continuous Delivery - Tue, 09/30/2014 - 19:57

I spend a lot of my time with clients figuring out the minutiae of how to implement optimal test automation strategies that support their transition to Continuous Delivery. The goal is typically to be able to release software after each 2-week iteration (Sprint). When faced with such a compressed development and testing schedule, most […]

Categories: Communities

Open Source Android Testing Tools

Software Testing Magazine - Tue, 09/30/2014 - 17:45

The shift towards mobile platforms is a strong trend currently, and Android is the most widely adopted mobile OS, with an estimated market share above 80% in 2014. You should naturally test all the apps developed for Android, and a large number of open source testing tools have been developed to achieve this goal. This article presents a list of open source Android testing tools. For each tool you will get a small description of its features and pointers to additional resources that discuss the tool in more detail. Feel ...

Categories: Communities

Xamarin Test Cloud Launched

Software Testing Magazine - Tue, 09/30/2014 - 16:13

Xamarin has announced the public launch of Xamarin Test Cloud, with over 1,000 real devices available to help build better apps. Xamarin ran a survey that found that nearly 80% of mobile developers are relying primarily on manual testing in their attempts to deliver great app experiences. And yet, more than 75% say that the quality of their apps is either “very important” or “mission critical.” Xamarin Test Cloud provides continuous testing, as testing is not something you should do only at the end of the development cycle. Developers should be ...

Categories: Communities

Continuous Delivery - The Real Deal

CD Summit

Continuous delivery (CD), a methodology that allows you to deliver software faster and with lower risk, is a topic that is gaining a foothold in startups like Choose Digital and Neustar, and in enterprises like Cisco and Thomson Reuters. CD enables companies to accelerate innovation, move faster than the competition and finally allow IT to quickly meet the application needs of the business.

We have just completed our second set of CD Summits in London and Paris to packed houses, including a standing-room-only event in London. The word is getting out that our summits are the place to learn about CD: they provide a full day of education for executives and technologists that covers the people, process and technology aspects of continuous delivery. Additionally, our partner sponsors provide a unique view on how various tools for testing, infrastructure provisioning and application deployment fit together to create a toolchain in support of the CD pipeline.

Next up in our CD Summit series are Chicago on Oct. 15th, San Francisco on Oct. 22nd and Washington D.C. on Nov. 19th. Please consider joining us for a day that you will find well worth your time.
    Here is what people have had to say about past CD Summits:
    “These summits are very impressive. The scope of presentations covers all of the important aspects, and the technology presentations cover much of the pipeline.” - Kurt Bittner, Forrester Research
    “This summit was fantastic. Thanks very much.” - New York City attendee
    "The London summit was full, so I traveled to Paris, and I'm very glad I did." - Paris attendee           
    “I need the slides to show my boss.” – London attendee
    We start off the morning of each summit with an executive-level presentation discussing the business benefits that can be realized by CD. We then have presentations covering the people, process and technology impacts of CD. You’ll hear about real world examples of CD in action by enterprises that are actually transforming their practices. For example, at the upcoming Summits, Choose Digital will present in Chicago, Cisco in San Francisco and both Thomson Reuters and Neustar in Washington D.C.
    Here’s an example agenda:
    8:00  Registration (includes continental breakfast)
    9:00  The Business of Continuous Delivery - Kurt Bittner, Forrester Research
    9:45  Orchestrating the Continuous Delivery Process - Steve Harris, CloudBees
    10:30 Break
    11:00 Three Pillars of Continuous Delivery: Culture, Tooling & Practices - Andrew Phillips, XebiaLabs
    11:45 Achieving "Fast IT" With Continuous Delivery - Nick Pace, Cisco Systems
    12:30 Lunch (provided)
    14:00 Jenkins for Continuous Delivery - Kohsuke Kawaguchi, CloudBees
    14:30 Accelerating Application Delivery with Continuous Testing - Peter Galvin, SOASTA
    15:00 Break
    15:30 Automating Infrastructure - Gabriel Schuyler, Puppet Labs
    16:00 Successfully Implementing Continuous Delivery - MomentumSI
    16:30 Continuous Delivery in the Real World: From Jenkins to Production - Mario Cruz, Choose Digital
    17:00 Panel: Ask the Experts
    17:30 Reception: Continuous Beer Delivery

    During lunch, attendees have the opportunity to speak with other attendees and our expert presenters on a topic of their choice. After lunch, we kick off the afternoon session with Jenkins founder Kohsuke Kawaguchi discussing the use of Jenkins for CD. After the Summit, during the social hour, our partners, including XebiaLabs, SOASTA and Puppet Labs, will discuss how to automate the software delivery pipeline. You’ll also be treated to breakfast, lunch and an evening reception.
    Join us in Chicago, San Francisco or Washington D.C. for an event that is not to be missed.
    See you there!
    André Pino

    André Pino is vice president of marketing at CloudBees. 

    Categories: Companies

    Six Ways Testers Can Get in Touch with Their Inner Programmer

    uTest - Tue, 09/30/2014 - 15:40

    This piece was originally posted by our good friends over at SmartBear Software. If you haven’t read it already, for some context to this article check out Part I in this series, “Don’t Fear the Code: How Basic Coding Can Boost Your Testing Career.”

    Michael Larsen will also be joining us for our next Testing the Limits interview, so be sure to stay tuned to the uTest Blog.

    Start Small, and Start Local

    My first recommendation to anyone who wants to take a bigger step into programming is to “start with the shell.” If you use a PC, you have PowerShell. If you are using Mac or Linux, you have a number of shells to use (I do most of my shell scripting using bash).

    The point is, get in and see how you interact with the files and the data on your system that can inform your testing. Accessing files, looking for text patterns, moving things around or performing search and replace operations are things that the shell does exceptionally well.

    Learning how to use the various command-line options, and how to “batch” commands together, is important. From there, many of the variable, conditional, looping and branching constructs that more dedicated programming languages use are also available in the shell. The biggest benefit of shell programming is that there are many avenues to explore, and a user can accomplish something by many different means. It’s kind of like a Choose Your Own Adventure book!
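To make the "batching commands" idea concrete, here is a minimal bash sketch. The file names, directory, and the ERROR pattern are illustrative assumptions, not from the article; the point is only how a pipe chains small commands into a test-informing answer:

```shell
#!/usr/bin/env bash
# Create two sample log files to work against (illustrative data).
mkdir -p /tmp/shell-demo
printf 'ok\nERROR one\nERROR two\n' > /tmp/shell-demo/app.log
printf 'ok\nERROR three\n'          > /tmp/shell-demo/db.log

# Batch commands with a pipe: count ERROR lines per file,
# then sort numerically on the count so the noisiest file comes first.
grep -c 'ERROR' /tmp/shell-demo/*.log | sort -t: -k2 -nr
# → /tmp/shell-demo/app.log:2
#   /tmp/shell-demo/db.log:1
```

Swapping `grep -c` for `grep -n` or feeding the result into a loop is exactly the kind of exploration the shell rewards.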

    When in Rome, Do What Your Programmers Do

    It’s not mandatory that we learn the same languages and use the same languages our programmers and engineers are using, but there are a number of benefits if we do. First, we have expertise that we can call on if we find ourselves stuck with questions. We can utilize work that has already been done and is stored in shared libraries. We can leverage existing infrastructure and take advantage of unit and integration tests that already exist to help inform additional tests that may be needed. All of this comes as a side benefit when we learn the languages our team actively uses.

    There is a down side to this, too. If our testing infrastructure uses libraries from the development code to create our tests, we might get false positives, or we may have tests pass that really shouldn’t, because bugs in the underlying code mask errors. If you are just starting out, or are part of a small team, using the development infrastructure as a basis for your programming efforts makes sense. If you need to have a completely independent and isolated code base for testing purposes, then yes, having a different language and technology stack for testing might be a smart move.

    Look For and Try to Solve Authentic Problems

    Programming courses and books are optimized to teach syntax. They are not written to solve our unique problems. This is why simple examples in books often do not help us when we try to apply them to our own issues and circumstances. To this end I say, “Make every problem about you.” Ask yourself, “How can I take this statement or idea and apply it to what I am working on right now?” Try to think about what you are learning and apply it immediately in your everyday work.

    If you have an output file that has a bunch of date and time stamps that you want to remove, start working with some ideas in your programming language of choice (or go old school and use sed or awk with regular expressions), but see what it takes to physically remove them reliably and get the output you want to keep. Not only will this be more applicable and usable, I’ll dare say it will make the learning process more enjoyable, too.
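As a sketch of the timestamp example above (the log file path and the `YYYY-MM-DD HH:MM:SS` stamp format are assumptions for illustration), the old-school sed route might look like this:

```shell
# Sample output file with leading date/time stamps (illustrative data).
printf '2014-09-30 12:00:01 service started\n2014-09-30 12:00:05 ready\n' > /tmp/run.log

# Strip the timestamp prefix with a regular expression so the remaining
# text can be kept, compared, or diffed between runs.
sed -E 's/^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} //' /tmp/run.log
# → service started
#   ready
```

Getting a pattern like this to remove the stamps reliably, and nothing else, is precisely the authentic-problem practice the article recommends.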

    Carve Out Time Every Day for Consistent Practice

    Most of us “less-than-expert” programmers can point to one reason: we have not put in the time on a consistent basis for it to become a regular habit. When I lift weights, I often stop training once I meet a goal or begin the activity I was training for. Much of the time, during long periods of downtime, I would lose the gains I had made. But I didn’t worry too much, because I could bring myself back to where I was quickly once I resumed my workouts. That phenomenon is called “muscle memory,” and for muscle memory to be a factor, you first have to build some muscle.

    Likewise, many languages I’ve used for various reasons over the years for programming (C, C++, Java, Tcl/Tk, Perl, Ruby, etc.) have a similar issue. If I do a bunch of work with one for a while, and then don’t touch it again for a few months, it’s almost like starting back at ground zero. But after a little time, I see the connections and my programming equivalent of “muscle memory” comes back.

    Find a Partner and Work Together Where You Can

    Donald Fagen of Steely Dan fame once said that “Walter [Becker, his songwriting partner] can’t start a song, and I can’t finish one. Therefore, we work great together!” I have a similar problem. I’m very much like Walter Becker when it comes to writing code. I can offer ideas and make additions to stuff that’s already underway, but put me in front of a blank text editor and say, “OK, write something” and we will be in for a struggle.

    Therefore, I try to take advantage of opportunities when I can work with people who can balance out my own abilities, or get some ideas so I can go in different directions based on where they have started. Two sets of eyes considering the same problem is always helpful. The debates and questions spawned from those interactions open up avenues neither of us alone would have considered. Also, this model allows both parties to swap the roles of programmer and tester, and communicate to both “mindsets.”

    For Added Fun, Make It “Do or Die”

    I believe that situations that really put you in a pressure cooker, where failure is not an option, can be powerful drivers for making programming much more interesting. I had this experience recently by taking on the role of build master and release owner for a week. When it was my turn, I came into work to find the build was red. The answer? Fix it! Even if I couldn’t do so myself, I was responsible for making sure I found someone who could, whoever that person might be.

    I had to get acquainted (and fast) with what was being checked in, which branches were being committed to, whether there were any conflicts, why tests failed, what I could isolate, and how I could get as specific as possible so I could either find the programmer most likely to be able to fix it, or do the work myself.

    Sounds scary, huh? It was. It was also FUN! Knowing that I couldn’t slink into the shadows and hope someone else would take care of it, and that it was “do or die” time, I had to figure it out, communicate with the other programmers, and make the build green so we could push. It was an awesome experience. It showed how much I could learn in a short time, and how I could help the build and release process with the programming skills I already have.

    To use a snowboarding metaphor, I was standing on the lip of a twenty-foot cornice, deciding if I should drop in or not. In this situation, I was pushed off the lip. I had two choices… crash and burn, or stick the landing. I decided I was going to stick the landing.

    My point with these articles is to tell anyone out there who feels on the fence about their programming skills that you are not alone. I want to make sure you understand the foundation that you may already have in place. Most of us already program; we just don’t consider what we do, or how we do it, on par with what we consider “real programming.” If you are one who thinks that way, I’m asking you to stop it. Seriously. Learning how to program, and doing it in a meaningful way that will enhance your career immensely, has never been easier. No matter the language, platform, or problem, you will have to work at it…and regularly. The good news is, if you do, you will have a skill that can take you in many different directions, as a software tester and beyond.

    Michael Larsen is a software tester based out of San Francisco, California. Michael started his pursuit of software testing full-time at Cisco Systems in 1992. After a decade at Cisco, he’s worked with a broad array of technologies and industries, including virtual machine software, capacitance touch devices, video game development, and distributed database and web applications.

    Michael is a member of the Board of Directors for the Association for Software Testing, the producer of and a regular commentator for the podcast “This Week in Software Testing,” and a founding member of the “Americas” Chapter of “Weekend Testing.” Michael also blogs at TESTHEAD and can be reached on Twitter at @mkltesthead.

    Categories: Companies

    6 key challenges of mobile app testing

    Testlio - Community of testers - Tue, 09/30/2014 - 13:34

    Testing is a fast-paced industry that is constantly changing. The movement towards mobile devices has brought a whole different set of challenges to the testing world. Not only have consumer-targeted apps set the trend, but enterprise apps have also made the move to mobile. Mobile users are not forgiving, and an issue found out in the wild might mean they leave the application for good. Mobile apps and websites need to be rock solid before they are released to the market. Testlio is focused on mobile app testing, and we’ve identified 6 key challenges that app developers and testers are facing.

    1. Screen sizes. The Android world is not simple. The variety of different aspect ratios and pixel densities can be overwhelming. With the launch of the iPhone 6, Apple brings new screen sizes to the iOS world as well. Though iOS developers are used to pixel-perfect screen design, they now need to shift their mindset to adaptive screen design instead. For testing, this means checking on various devices that all the necessary screen elements are accessible with different screen sizes and aspect ratios.
    2. Connection types. There are several standards for mobile data connections (EDGE, UMTS, 3G, 4G) as well as for Wi-Fi (802.11b/g/n). Sometimes there is no connection available at all, or the device is in flight mode. As users move around, the connection type might change. Unfortunately, some carriers filter web traffic at will, which can leave a device connected but unable to reach a specific service (such as messaging or calling through apps). Even though the connection APIs on mobile platforms have been developed with those challenges in mind, real-world conditions vary widely and interesting issues can occur. It’s also important to test bandwidth usage, as not all carriers support unlimited data volumes.
    3. Different OS versions. iOS users are known to upgrade quickly to new versions (iOS 8.0 uptake was around 50% during the first two weeks). Android uptake, on the contrary, has historically been very slow and the fragmentation is wide. This means that app developers need to support older OS versions and older APIs, and testers need to test for those.
    4. Power consumption and battery life. Innovation in battery storage capacity hasn’t kept pace with the growth in app consumption. We run lots of apps during the day, and several processes run in the background without us even noticing. All of this requires CPU cycles, which in turn require power, so batteries tend to drain quickly. When testing mobile apps we need to make sure that power consumption is kept minimal and that the app is developed with best practices in mind.
    5. Usability. Mobile device screens are relatively small, and there is always more data we would like to present than can fit on the screen. It’s challenging to keep the interaction clean and simple for the user while still displaying all the necessary information. Font size and readability are other challenging aspects of usability. When testing mobile apps it’s important to pay attention to the size of tap areas and to make sure that all text is readable without magnification.
    6. Internationalisation. Most apps are designed to be used in international markets. Testing translations is only one piece of internationalisation testing. Testers should also take into account regional traits (locale settings, timezones) and the target audience. Changing the time while the app is running might cause some interesting artefacts. Also, some designs that work in the western world might not work in the east, and vice versa. Right-to-left languages have always kept developers puzzled.
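The timezone traits in the list above are easy to probe from the command line. This is a sketch using standard zoneinfo names; the specific zones chosen are illustrative:

```shell
# The same moment rendered under different timezones -- the kind of
# regional variation an app under test must handle correctly.
TZ=UTC date +%Z          # prints the zone abbreviation: UTC
TZ=Asia/Tokyo date +%z   # prints the numeric offset: +0900 (Japan observes no DST)
```

Running an app's time-handling paths under several `TZ` values like this is a cheap first check before testing on devices in those regions.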

    Together with our community of expert testers and tailored tools, Testlio helps to overcome those challenges. Through the community we have access to an enormous pool of different devices, and clients can have their apps tested on exactly the devices they specify. With automation scripts we can check the main workflows on various emulated devices.

    Testlio’s strength comes from testing in a real-world environment with real users on real devices. This means that testers catch issues that are not common in laboratory conditions. For example, we have experienced situations where some screen elements were not understandable in bright sunlight and users had no idea which button they should click. Our testers have internet connections that differ in type and bandwidth and change over time, which is exactly what happens with real users. Testlio helps you discover those issues before your customers do.

    The Testlio community is spread all over the world. We have testers in every corner of the globe, and they are the ones who know their local traits best and can make sure that apps under test take those details into account. It’s always a good idea to run your translations past a native speaker who also understands the local market.


    Categories: Companies

    Best Practices for Creating and Using Home Page Widgets

    The Seapine View - Tue, 09/30/2014 - 12:00

    I wrote a previous blog post about how to create a widget. In this post, I’ll provide some best practices to help you make the most of TestTrack widgets. I’ll cover setting up security and sharing permissions, and provide recommendations on using colors to better call attention to key performance indicators (KPIs). Once you’ve set up a few widgets and users have started applying them to the Home page, you’ll likely get feedback on what is and isn’t working. In the coming weeks, I’ll be providing a variety of sample widgets that you can pick and choose from based on the needs of your team. Home page widgets are still relatively new, so be sure to check back often to make sure you’re making the most of them.

    Setting Up Security Permissions

    There are three permissions that impact creating and using widgets.

    Create and edit widgets

    In Security Groups, you can set/unset the Administration > Configure Home Widgets option to control who can create and edit widgets. If you’re upgrading from an older version of TestTrack, this option is turned off by default. Make sure you set that option for at least yourself to ensure someone can create widgets.

    Filter sharing

    The first step in creating a widget is to create a new filter or select an existing one. When someone clicks the widget on their Home page, they’ll be taken to a list window with that filter applied. Make sure the filter is shared with everyone or matches the widget’s share permissions; otherwise, your users will see an error message when they try to view the details of a widget.
    Error on widget drill-down

    Widget sharing

    Just like filters, widgets can be shared with one or more security groups. If you share an existing widget with a new group, be sure to review the associated filter to ensure it’s also shared with that group.

    Configure Color Mappings

    There are a few ways to use color with widgets. Here are some ways we’ve seen used successfully internally and by customers.

    Single color mapping

    Use a single color to identify item types or to highlight critical pieces of information, no matter what they’re showing. If you want to show urgency with one color, use the scaling capability by selecting the Scale color to show transitions between mappings checkbox when setting up the widget. This will maintain the single color but provide some context by scaling the color lighter or darker based on the KPI value. For example:

    • Red for blocked test runs, whether there’s 0 or 100 of them
    • Purple for metrics associated with requirements or user stories
    • Dark blue for “my” items showing requirements to review, tests to run, or defects to fix
    2-color mapping

    Use 2 colors for “binary” metrics, where things are either “good” or “bad.” For KPIs where anything greater than 0 is bad, use 2 colors to immediately call attention to them.

    • Security holes/defects, where 0 is green and anything greater than 0 shows red
    • “My” requirements for review, where 0 is white and anything greater than 0 shows green
    • P1 defects in the current sprint, where 0 is green and anything greater than 0 shows red
    3-color mapping

    Use 3 colors to create a classic “stoplight” KPI, where things can go from “good” to “concerned” to “not good.”

    Multi-color mapping

    TestTrack supports up to 10 different color bands in a single widget, but using more than three colors is challenging and rarely works well. Users typically struggle to remember what each color means, and in practice I haven’t seen many situations where interpreting the data is complicated enough to need more than three colors. If you think you need more, consider trying the scaling option between three colors first. This will render each of the three colors lighter or darker depending on how close the KPI value is to the color band.


    Categories: Companies

    Ranorex 5.1.3 Released

    Ranorex - Tue, 09/30/2014 - 10:59
    We are proud to announce that Ranorex 5.1.3 has been released and is now available for download.
    General changes/Features
    • Added support for Firefox 33
    Please check out the release notes for more details about the changes in this release.

    Download latest Ranorex version here.
    (You can find a direct download link for the latest Ranorex version on the Ranorex Studio start page.) 

    Categories: Companies

    Knowledge Sharing

    SpiraTest is the most powerful and affordable test management solution on the market today