Feed aggregator

Continuous Innovation with Dynatrace AppMon & UEM 6.5

Customer-centric innovation, more frequent deployments, the adoption of new stack and PaaS (Platform as a Service), as well as breaking up the monolith into micro-services, are several of the trends we can’t escape when following the hot topics at conferences or when talking with enterprise Innovation and Transformation Teams. What you typically don’t hear are the […]

The post Continuous Innovation with Dynatrace AppMon & UEM 6.5 appeared first on about:performance.

Categories: Companies

Software Testing Tools for Your QA Team

Sauce Labs - Tue, 09/27/2016 - 15:00

Ashley Hunsberger, Greg Sypolt and Chris Riley contributed to this post.

Software testing tools are a vital resource for every successful QA team. But with so many tools and testing frameworks out there – from Selenium and Protractor to Espresso and Xcode – how do you choose which are best? How should your toolset vary depending on whether you do desktop testing, mobile testing, or both? And how do you make the most of software testing tools?

Below are answers to these questions from the panelists of a recent Sauce Labs webinar focused on software testing and QA. The webinar was hosted by Chris Riley, with Ashley Hunsberger and Greg Sypolt serving as panelists. You can also find their recommendations on software testing tools below.

Which tools has Greg used for test automation at Gannett?

Greg: Here’s an inventory of testing frameworks and tooling used across Gannett products (technology alignment):

  • Ruby+Cucumber+Capybara
  • NightwatchJS
  • Behave Python
  • Mocha
  • Jasmine
  • Protractor
  • Polymer
  • Junit
  • Espresso for Android
  • EarlGrey for iOS
  • Minitest
  • Ruby+Rspec+Capybara
  • Selenium
  • Appium
  • Jenkins
  • TeamCity
  • Sauce Labs
  • Drone
  • Chef
  • Datadog KPI Dashboards

What are some tools used for automation across the industry? I’ve heard of Selenium, but is there anything else?

Greg: Selenium WebDriver is the industry standard for browser testing. However, I like to align the testing framework technology stack with the application stack. For example, web applications developed in AngularJS would align with Protractor as the testing framework. Selenium WebDriver comes in many flavors: Java, Python, Ruby, JavaScript, etc.

Here are some testing frameworks to know:

  • Browsers: Capybara/Cucumber, Capybara/RSpec, NightwatchJS, Behave, and Protractor
  • API: NodeJS, Jasmine, and Mocha
  • Mobile: Android Espresso, iOS EarlGrey, Appium, KIF, and Xcode 7
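
To make the framework discussion concrete, here is a minimal sketch of a browser test using Selenium WebDriver with NUnit in C# (the URL and title check are illustrative placeholders, not something from the webinar):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class SmokeTest
{
    [Test]
    public void HomePageHasExpectedTitle()
    {
        // A local Chrome session; a real suite might target a Selenium Grid
        // or a cloud service such as Sauce Labs via RemoteWebDriver instead.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://www.example.com/");
            Assert.That(driver.Title, Does.Contain("Example"));
        }
    }
}

The same pattern carries over to the other WebDriver language bindings; only the test-runner idioms change.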

I often feel like the DevOps infrastructure problems have to be solved before I can do test automation. Is that true?

Greg: Check out this post about infrastructure planning. It discusses how QA and Dev should share responsibilities for infrastructure. The team also should share responsibilities for DevOps tasks. The modern QA position has become a technical role, the gatekeeper of quality, and QA engineers may continue to take on more DevOps responsibilities and tasks.

Ashley: Every company is different (such as mine and Greg’s), but I do work closely with our DevOps team more and more as we transition. We definitely still have our kinks, but we are doing test automation. We are still working toward being in the CI pipeline, but that doesn’t prohibit you from having meaningful tests.

How do you get developers to use GUI testing tools? It seems like most devs instinctively dislike GUI tests.

Ashley: Once you have the right technology alignment, demonstrate how and why GUI tests are used. We want fewer GUI tests, but we have roughly 40 tests covering critical workflows that we always want passing, so we are able to quickly identify when something breaks in the UI. Show that you have deterministic results. For example, we were able to quickly identify which commit broke our tests and discuss it with the developer. Without these tests, this bug would not have been caught for two more weeks. Since this was still during the development period, overhead was low and we got a fix in within a few hours.

Greg: I agree with Ashley. The best buy-in from developers for me has been technology alignment. Now the developers can help write and review test code. The key to automated GUI testing is reliable processes for developing automated GUI tests. Work as a team to determine the right GUI tests needed, best practices for test code, and continue to focus on ways to eliminate flaky tests and build confidence in the test results.

Which analytics tools do you use?

Greg: We use Jenkins, Datadog, and CloudWatch to measure the health of the Android project, to determine whether it is on track to succeed, and to identify where improvements need to be made to meet our goals and deadlines. It’s on our roadmap to explore the open source Capital One Hygieia project and New Relic Synthetic Monitoring.

Conclusion

There’s no shortage of software testing tools out there for both automated and manual testing. Selenium WebDriver remains a staple, but depending on your particular needs, you may want to take advantage of other testing tools, too. A major goal should be to seek technology alignment. That helps to assure that your testing strategy is as efficient as possible, while also facilitating better communication between QA and Development.

Chris Riley (@HoardingInfo) is a technologist who has spent 12 years helping organizations transition from traditional development practices to a modern set of culture, processes and tooling. In addition to being a research analyst, he is an O’Reilly author, regular speaker, and subject matter expert in the areas of DevOps strategy and culture. Chris believes the biggest challenges faced in the tech market are not tools, but rather people and planning.

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices. Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen. In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.

Greg Sypolt (@gregsypolt) is a senior engineer at Gannett and co-founder of Quality Element. The last 5 years focused on creation and deployment of automated test strategies, frameworks, tools, and platforms.

Categories: Companies

The Forgotten Agile Role – the Customer


Many Agile implementations tend to focus on the roles inside an organization – the Scrum Master, Product Owner, Business Owner, Agile Team, Development Team, etc.  These are certainly important roles in identifying and creating a valuable product or service.  However, what has happened to the Customer role?  I contend the Customer is the most important role in the Agile world.  Does it seem to be missing from many of the discussions?
While not always obvious, the Customer role should be front-and-center in all Agile methods and when working in an Agile context.  You must embrace them as your business partner with the goal of building strong customer relationships and gathering their valuable feedback.  Within an Agile enterprise, while customers should be invited to Sprint Reviews or demonstrations and provide feedback, they should really be asked to provide feedback all along the product development journey from identification of an idea to delivery of customer value.
Let's remind ourselves of the importance of the customer.  A customer is someone who has a choice about what to buy and where to buy it.  By purchasing your product, a customer pays you money that helps your company stay in business.  For these reasons, engaging the customer is of the utmost importance.  Customers are external to the company and can provide the initial ideas, and the feedback that validates those ideas into working products.  Or if your customer is internal, are you treating them as part of your team, and are you collecting their feedback regularly?
As you look across your Agile context, are customers one of your major Agile roles within your organization?  Are they front and center?  Are customers an integral part of your Agile practice?  Are you collecting their valuable feedback regularly?  If not, it may be time to do so.  
Categories: Blogs

StormRunner Load 2.1 release simplifies internal app testing with Docker Integration

HP LoadRunner and Performance Center Blog - Tue, 09/27/2016 - 02:05

The new HPE StormRunner Load version 2.1 has just been released. Keep reading to find out more about the new capabilities that are available with this new release.

Categories: Companies

Jenkins World 2016 Wrap-up - Scaling

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. One of the great features of Jenkins is how far it can scale, not only from a software perspective, but also from an organizational one. From a single Jenkins master with one or two agents to multiple masters with thousands of agents, and from a team of only a few people to a whole company with multiple disparate departments and organizations, you’ll find Jenkins in use. Like any software or organization, there are common challenges for increasing scale with Jenkins and some common best practices, but there are also some unique solutions. A big...
Categories: Open Source

Free Web Load Testing Tools & Services

Software Testing Magazine - Mon, 09/26/2016 - 10:00
The software development trend that shifts the target platform from the desktop to web, cloud and mobile applications has fostered the development of load testing services on the web. Using web-based load testing tools is an obvious option for applications that web users can access. This article presents the free offers from commercial web load testing service providers.

We have considered only the tools that provide a load testing service, which we define as the ability to simulate access by multiple users over a defined time period. We will not mention tools that provide just a one-time assessment of your application performance, giving information such as the time needed to reach the server or to load a JavaScript library. We also list only the providers of free long-term load testing services, and not the vendors that offer only a limited-time trial account. If you know a tool that is currently missing from this list, please use the contact form to let us know about it and we will update this article.

Each free service comes with its limits. In the case of web load testing, they focus on the following criteria: the number of virtual users, the duration of the tests, the number of tests, and the ability to test from multiple geographical locations. Some vendors put explicit values on each of these items, but others work with a credit system that you can apply to multiple configurations. [...]
Categories: Communities

TestExpo, London, UK, October 12 2016

Software Testing Magazine - Mon, 09/26/2016 - 09:00
TestExpo is a one-day conference, taking place in London, that is structured around a series of software testing themes. TestExpo focuses on the ambitions, plans and goals of the software testing community in the UK for learning and improving software testing practices. In the agenda of the TestExpo conference, you can find topics like “Mis-Adventures in Test Automation”, “A Test Team’s disrupted journey to finding their middle earth in DevOps”, “Delivering successfully a ticketing system using BDD”, “User Acceptance Testing a blessing or a pain”, “Best Practices for Building Your Mobile Test Lab”, “How to Fit Performance Testing with Agile and DevOps”, “Daydreams versus Nightmares in testing”, “Automated UI Testing for iOS and Android Mobile Apps”, “Testing without boundaries – Dream or Reality”, “The Need For Speed: Tools to increase responsiveness, efficiency and performance”, “Test Rich but Cash Poor”, “Managing Crowdsourced App Testing”, and “Balancing Quality and Speed: Moving to CD without Breaking Your Code”. Web site: http://www.testexpo.co.uk/ Location for TestExpo: Emirates Stadium, Hornsey Rd, London N7 7AJ, UK
Categories: Communities

EuroSTAR Conference, Stockholm, Sweden, October 31-November 3 2016

Software Testing Magazine - Mon, 09/26/2016 - 05:30
The EuroSTAR Conference is a four-day conference focused on software testing and software quality. Global and European software testing experts propose a program full of tutorials and presentations. In the agenda of the EuroSTAR Conference you can find topics like “Understanding Cultural & Linguistic Dimensions in Testing”, “Root Cause Analysis for Testers”, “Tips for Introvert and Extrovert Testers in Today’s Testing World”, “Testing Machine Learning; Learning Machine Testing”, “Testing within Large Scale Projects”, “How this Tester Learned to Write Code”, “Testing in the World of Startups”, “Testing the New Web – Tackling HTML5 with Selenium”, “Adapting Automation to the Available Workforce”, “Beacons of the Test Organisation”, “How We Transformed the Traditional Software QA by Getting Rid of the Central QA Group”, “Leading the Transition to Effective Testing in Your Agile Team”. Web site: http://www.eurostarconferences.com/ Location for the EuroSTAR conference: Stockholm, Sweden
Categories: Communities

Jenkins World 2016 Wrap-up - Pipeline

This is a guest post by Liam Newman, Technical Evangelist at CloudBees. As someone who has managed Jenkins for years and manually managed jobs, I think pipeline is fantastic. I spent much of the conference manning the Ask the Experts desk of the "Open Source Hub" and was glad to find I was not alone in that sentiment. The questions were not "Why should I use Pipeline?", but "How do I do this in Pipeline?" Everyone was interested in showing what they have been able to accomplish, learning about best practices, and seeing what new features were on the horizon. The sessions and demos on Pipeline that I saw were...
Categories: Open Source

Get the secrets to optimizing your Microservices-based Commerce Platform

HP LoadRunner and Performance Center Blog - Fri, 09/23/2016 - 21:46

In commerce, every millisecond of additional response time costs money due to lost sales. Users will walk away from transactions if they take too long to process.  Keep reading to learn how to improve your microservices-based platform.

Categories: Companies

Magic Buttons and Code Coverage

Sustainable Test-Driven Development - Fri, 09/23/2016 - 19:29
This will be a quickie.  But sometimes good things come in small packages. This idea came to us from Amir's good friend Eran Pe'er, when he was visiting Net Objectives from his home in Israel. I'd like you to imagine something, then I'm going to ask you a question.  Once I ask the question you'll see a horizontal line of dashes.  Stop reading at that point and really try to answer the question.
Categories: Blogs

Monitor UrbanCode Deploy with New Relic

IBM UrbanCode - Release And Deploy - Fri, 09/23/2016 - 16:30

We’re often asked how operational teams can monitor their UrbanCode Deploy servers. The proliferation of cloud-based monitoring solutions makes this easier than ever. Most service providers supply Java application monitoring agents that are both powerful and easy to set up.

With New Relic Application Monitoring, you can see how the UrbanCode Deploy server uses resources and responds to requests over time. The charts and statistics that New Relic Application Monitoring provides show you how CPU usage, memory, server response time, throughput, database activity, and errors evolve over time as users and scheduled deployments exercise the server.

You can see details about your UrbanCode Deploy server, including:

  • Application Performance Index (Apdex) score
  • Processor usage
  • Memory usage
  • Garbage collection
  • Response time
  • Throughput
  • Error rate

We will take you through the steps to get started monitoring UrbanCode Deploy with New Relic.

  1. Open the New Relic website and sign up to create an account and gain access to a time-limited demonstration version of the New Relic Application Management agent.
  2. Get your New Relic license key:
    • On the New Relic home page, from the menu at the upper-right corner, open the Account Settings page.
    • Find the “License key” in the “Account information” column, and copy it.
  3. Download and install the New Relic agent from https://download.newrelic.com/newrelic/java-agent/newrelic-agent/current/newrelic-java.zip
  4. Extract the agent into your UrbanCode Deploy installation directory, which is /opt/ibm-ucd/server by default.
  5. Configure the agent:
    • Edit the New Relic configuration file (/opt/ibm-ucd/server/newrelic/newrelic.yml by default), being careful not to alter the indentation. Substitute your license key for the <%= license_key %> string, and be careful to keep the single quotation marks (‘) that surround your key. Replace the My Application default application name with UCD Server or some other name relevant to your environment. A sketch of the result appears after this list.
    • Save the file.
    • Edit the /opt/ibm-ucd/server/bin/set_env.sh (or set_env.cmd) file.
      Add the following line to JAVA_OPTS:
      -javaagent:/opt/ibm-ucd/server/newrelic/newrelic.jar.
      Be sure that the path to the newrelic.jar file is correct. Do not include a space after -javaagent:.
    • Save the file.
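
As a rough sketch, the edited portion of newrelic.yml might end up looking like this (the layout follows the New Relic Java agent’s standard configuration file; the key shown is a placeholder):

common: &default_settings
  # Your 40-character license key, kept inside the single quotation marks
  license_key: '<your-license-key>'
  # The name that will appear in the New Relic UI
  app_name: UCD Server

And the corresponding addition in set_env.sh, assuming the script builds JAVA_OPTS incrementally:

# Attach the New Relic agent to the UrbanCode Deploy server JVM
JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/ibm-ucd/server/newrelic/newrelic.jar"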

Restart your UrbanCode Deploy server, and start monitoring! These two New Relic messages are included in the newrelic_agent.log file and indicate that the agent is installed:

  • com.newrelic INFO: New Relic Agent: Loading configuration file "/opt/ibm-ucd/server/newrelic/./newrelic.yml"
  • com.newrelic INFO: New Relic Agent: Writing to log file: /opt/ibm-ucd/server/newrelic/logs/newrelic_agent.log

With many great monitoring options available in the marketplace, get started today!
Let us know your favorite tools and metrics by commenting on this blog entry, or publish your own blog about your favorite tool.

UrbanCode Deploy now includes MBeans: Introducing UrbanCode MBeans!

Categories: Companies

Enabling Performance Monitoring for UrbanCode Deploy with IBM Performance Management on Cloud 8.1.3

IBM UrbanCode - Release And Deploy - Fri, 09/23/2016 - 16:28

You want automated deployments. You also want to know when and where problems like bottlenecks and performance issues occur. You have to keep tabs on your deployment servers. Even more, you want predictive insights and earlier warnings about application problems before users are affected. By using IBM Performance Management on Cloud with IBM® UrbanCode Deploy™, you can monitor your deployment environment, react to and resolve emerging issues, and get back to other priorities.

To take advantage of this monitoring software, get a 30-day trial subscription of IBM Performance Management on Cloud (https://www.ibm.com/marketplace/cloud/application-performance-management/us/en-us) or use your paid subscription to it. IBM UrbanCode Deploy is a full-featured software application deployment solution. From an architecture standpoint, it uses a Tomcat web application server and a choice of relational databases, and it can run on a number of popular operating systems, including Linux and Windows.

APM Dashboard monitoring IBM UrbanCode

To monitor UrbanCode Deploy’s availability and performance with IBM Performance Management on Cloud, complete these steps:

  1. Download the appropriate monitors. For purposes of this article, IBM UrbanCode Deploy is installed on x64 Linux.
  2. From the IBM Marketplace website, sign in with your IBM ID, and then select products and services from the menu that’s associated with your user profile. Your custom “Products and services” page opens and lists your IBM Performance Management subscription.
  3. Click the small black arrow on the subscription card to open an additional set of links that are associated with the subscription. One of these choices is Agent Installation Instructions, which is selected by default.
  4. Select Linux from the list of available Platforms and Packages.
  5. Select the IBM Application Performance Management on Cloud Agents radio button, and then click Download to begin the download of the IAPM_Agent_Install.tar file to your local system.
  6. After the download is complete, copy the .tar file to the UrbanCode Deploy server, and follow the installation instructions, which are listed in the Agent Installation Instructions. You must run the agent installation as root or from a sudo context.
  7. At the prompt for which agents to install, enter the numbers that correspond to the Linux OS and Tomcat monitor choices. Let the installation finish. The Linux OS agent starts by default at the end of the installation. The Tomcat agent requires a bit of configuration before it can be started, which is covered next. Don’t worry about connecting the agents to your subscription. They are configured at download time with the knowledge of where your specific subscription is located and with credentials for securely connecting to it, so that as soon as they start, they can begin sending monitoring data.
  8. From the Agent Installation Instructions in the section labeled “Configure,” select Tomcat from the list of monitors. Look for this sentence: Follow the steps in the IBM Knowledge Center here. The word “here” is a link. Select it to open the Tomcat monitor configuration instructions. For most installations, all you have to do is ensure that Tomcat is configured without JMX authorization and that it’s using port 8686.
  9. Edit the /opt/ibm-ucd/server/bin/set_env.sh (or set_env.cmd) script, adding the following definitions to JAVA_OPTS: -Djava.rmi.server.hostname=localhost -Dcom.sun.management.jmxremote.port=8686 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false (see the sketch after this list).
  10. Then call the default configuration script (in a sudo context) to connect the monitor to Tomcat. After the connection is made, you are ready to view the data in the IBM Performance Management console.
  11. Return to the “Products and services” page, and click the Launch push button that’s associated with your IBM Performance Management subscription. The Getting Started page of your subscription opens.
  12. Select the Gauge icon from the menu bar (second one down), and then select Application Performance Dashboard. Select My Components from the Application Group, and then open the Components twistie from the Components group of the navigation tree. Entries for Linux OS and Tomcat components are displayed. By selecting either of these, you will see the specific instances of these monitors in the instances group at the bottom of the navigation tree. You can see data in the context of the application (in this case, simply My Components), in the context of all the components of a type (for example, all TOMCAT monitors for that application), or in the context of a specific instance. At the application and component levels, you will see summary-level data and events. At the instance level, you will see deep-dive statistics.
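
As a sketch, the JAVA_OPTS addition from step 9 might look like this in set_env.sh (assuming the script builds JAVA_OPTS incrementally; adapt the syntax for set_env.cmd on Windows):

# Expose Tomcat JMX data on port 8686 without authentication or SSL,
# so that the Tomcat monitoring agent can connect locally
JAVA_OPTS="$JAVA_OPTS -Djava.rmi.server.hostname=localhost \
  -Dcom.sun.management.jmxremote.port=8686 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"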

To create an application specifically to represent UrbanCode Deploy, click the plus sign (+) that’s above the All My Applications twistie. This opens the Application Composer. Give the application a name and a description, such as UrbanCode Deploy, and then click the large plus sign. From the list of monitor types, select Linux OS, select the instance that corresponds to the OS that the UrbanCode Deploy server is running on, and then click Add. Next, select Tomcat from the list, choose the instance of the Tomcat server that UrbanCode Deploy uses, and select Add. Click Close and Save to exit. An application called UrbanCode Deploy is included under All My Applications. By making this addition, you can focus on the health and performance of UrbanCode Deploy.

UrbanCode Deploy now includes MBeans: Introducing UrbanCode MBeans!

Categories: Companies

Probing for Limits of Jenkins

At the end of my Jenkins World 2016 talk - “So you want to build the world’s largest Jenkins cluster” - I gave a brief demonstration of a Jenkins cluster with 100,000 concurrent builds to give people an idea of just how far Jenkins clusters can scale.

Sacha's Keynote Cluster

My talk did not have anywhere near the budget of Sacha’s keynote… where they were able to fire up a PSE cluster of over 2000 masters with 8000+ concurrent builds. The idle reader might be wondering how exactly I was able to achieve 100,000 concurrent builds and what exactly were the tricks I was playing to get there.

OK, let’s get this over with… I did cheat a little, but only in the places where it was safe to do so.

If you want to have a Jenkins cluster with 100,000 concurrent builds, you need to ask yourself “what is it exactly that we want to show?”

I can think of two answers to that question:

  1. We are really good at burning money;

  2. The Jenkins masters can handle that level of workload.

Given my constrained budget, I can only really try to answer the second question:

Can a Jenkins cluster handle the workload of 100,000 concurrent builds?

Most of the work that a Jenkins master has to do when a build is running on the agent can be broken down as follows:

  • Streaming the console log from the agent over the remoting channel and writing that log to disk
  • Copying any archived artifacts over the remoting channel onto the master’s disk when the build is completed
  • Fingerprinting files on the remote agent
  • Copying any test reports over the remoting channel onto the master’s disk when the build is completed

A well integrated Jenkins cluster might also include

  • Copying artifacts from upstream jobs into the build agent’s workspace (potentially from a different master in the cluster’s disk)
  • Triggering any downstream jobs (potentially on a different master in the cluster)

The rest of the workload of the build is actually compiling and running tests, etc. These all take place on the build agent and do not have any effect on the master.

So as long as:

  • the agent streams back a console log (at more than 60 lines per minute - based on my survey of typical builds)… potentially with I/O flushes for every line output
  • there are new files (with random content to defeat remoting stream compression) on the agent workspace to be archived and fingerprinted
  • there are new test results with different content each build written to the agent workspace

Then we don’t actually have to do a real build.

So in April 2014 I created the Mock Load Builder plugin. This plugin allows you to define a build step that will appear to the Jenkins master just like a regular build… but without generating nearly as much of a CPU requirement on the build agent.

However, when you are aiming for 100,000 concurrent builds, even the Mock Load Builder plugin is not enough, as each build will fork a JVM to perform the “mock” build. Now, OK, we don’t need lots of memory in that JVM, but it’s still at least 128MB… and that will add up to quite a lot of RAM when we have 100,000 of them running at the same time.

So I added another layer of mocking to the Mock Load plugin - fakeMockLoad. With this system property set, the mock load is generated directly on the agent JVM instead of in a JVM forked from the agent JVM.

We are still generating all the same console logs, build artifacts, test reports, etc. Only now we are not paying the cost of forking another JVM. Phew, that was 13TB of RAM saved.
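
For the curious, switching that mode on is just a matter of setting the system property when the agent JVM starts. A hypothetical launch line (the property name comes from the plugin; the URL and secret are placeholders):

# Start a JNLP agent with mock builds generated in-process rather than forked
java -DfakeMockLoad=true -jar remoting.jar \
  -jnlpUrl http://master:8080/computer/agent-1/slave-agent.jnlp -secret <token>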

But hang on a second… each build agent is going to use at least 512MB of RAM… that’s over 50TB of RAM… or 25 x1.32xlarge AWS instances… almost $350/hr for On Demand instances just for the agents… plus these are not exactly doing real work… we won’t have much to show other than a headline number.

Well, as part of my load testing for the JNLP4 protocol, I wrote a test client that can set up at least 4,000 JNLP connections from the same JVM. Maybe we could use a modified version of that to multi-tenant the JNLP build agents on the same JVM… The workload on the master is a function of how many remoting channels there are and how much data is being sent over those channels…

It turns out that with a special multi-tenant remoting.jar I can run nearly 10,000 build agents using fakeMockLoad per c4.8xlarge. At $1.675/hr, that is a much more reasonable $16/hr… plus, even better, we have fewer machines to set up.

Everything else in my cluster is real: 500 real masters (running in Docker containers divided between an x1.32xlarge and a pair of c4.8xlarge) and a CloudBees Jenkins Operations Center (running naked on a dedicated c4.xlarge).

I was somewhat constrained by disk space when packing all those masters into a small space. If I had divided the masters across a larger number of physical machines rather than trying to cram 400 of them onto the same x1.32xlarge, I could probably have had the cluster run for more than 90 minutes.

There is a video I remembered to capture while spinning up the cluster just before my talk. Two of the build agent machines were running out of disk space at the time, which is why the masters I checked were running only about 160 concurrent builds each.

TL;DR: I had (for all of 90 minutes) a Jenkins cluster of 500 masters, each with 200 build agents, for a combined total of 100,000 concurrent builds. Yes, there were issues keeping that cluster running within the budget I had available. Yes, there are challenges maintaining a system with that number of concurrent builds. Yes, I did make some cheats to get there. But Jenkins masters and Jenkins clusters can handle that workload - provided you have the hardware to actually support the workload in the first place!

Blog Categories: Jenkins
Categories: Companies

TEST Magazine Again Names Seapine as a Leading Testing Provider

The Seapine View - Fri, 09/23/2016 - 15:30

Once again, TEST Magazine has listed Seapine as one of its 20 Leading Testing Providers.

As in previous years, TEST’s 2016 “20 Leading Testing Providers” guide outlines a selection of software testing and quality assurance products and services. The annual update on the marketplace serves as a good starting place for companies considering their testing solution options. It was published in the September 2016 issue.

This year, Seapine marked TestTrack’s 20th year as a Champion of Quality. Being included once again on TEST’s list proves that we’re continuing to meet our goal of providing development and testing teams with the tools, technologies, and support they need to deliver quality software on time and on budget.

We’re grateful and honored to be included on TEST’s list of leading testing providers for 2016, and we look forward to even greater things in 2017!

Categories: Companies

Integrate Automated Testing into Jenkins

Ranorex - Fri, 09/23/2016 - 12:00

In software engineering, continuous integration means the continuous application of quality control processes — small units of effort, applied frequently.

In this blog we’ll show you how to set up a CI job with Hudson/Jenkins that automatically builds and executes your Ranorex automation, and automatically sends out the generated test reports, for every change committed to a Subversion repository.

Advantages of Continuous Integration Testing

Continuous integration has many advantages:

  • When tests fail or bugs emerge, developers can revert the codebase to a bug-free state without wasting time on debugging
  • Developers detect and fix integration problems continuously – and thus avoid last-minute chaos at release dates
  • Early warning of broken/incompatible code
  • Early warning of conflicting changes
  • Immediate testing of all changes
  • Constant availability of a “current” build for testing, demo, or release purposes
  • Immediate feedback to developers on the quality, functionality, or system-wide impact of their written code
  • Frequent code check-ins push developers to create modular, less complex code

Infrastructure: Continuous Integration Tool

You can find download links and installation descriptions for Hudson and Jenkins on their project websites.

In this blog post we are going to use Jenkins as CI tool. There shouldn’t be much of a difference when using Hudson.

As Jenkins or the nodes executing the CI jobs are normally started as Windows services, they do not have sufficient rights to start UI applications.

Please make sure that the Jenkins master and any slave nodes where the Ranorex automation should be triggered are not started as a service.

For the Jenkins master, open the “Services” tool (which is part of the “Administrative Tools” in the control panel), choose “Jenkins” service, stop the service, and set the “Startup type” to disabled:

disable start as service

Use the following command to start Jenkins manually from the installation folder:

java -jar jenkins.war

manually start jenkins

After starting Jenkins, use this address to access the web interface:

http://localhost:8080/

To configure your Jenkins server, navigate to the Jenkins menu and select “Manage Jenkins” -> “Configure System”:

Configure System

Note: It is necessary to have the Ranorex main components – and a valid Ranorex license – installed on each machine on which you want to build and execute Ranorex code.

Source Code Management

As mentioned before, we are going to use a Subversion repository as base of our continuous integration process.

In this sample, we have two solutions in our repository: the application under test and the automated Ranorex tests.

Repository

To start the application under test from your test project, simply add a new “Run Application” action to your action table in Ranorex Studio, which starts the application under test, using a relative path to the repository root:

Run Application Action

Plugins

As we want to build our code for each committed change within our SVN repository, we need a Subversion plugin as well as an MSBuild plugin for Jenkins. An additional email plugin will make sure that an email is sent with each build.

Install Plugins

Open the “Manage Plugins” section (“Manage Jenkins” -> “Manage Plugins”), choose the following plugins from the list of available plugins and install them if they are not installed already:

  • MSBuild Plugin
  • Email Extension Plugin
  • Subversion Plugin

Configure Plugins

The installed plugins also need to be configured. To do so

  • open the “Configure System” page and configure the “Extended E-Mail Notification” plugin: set the recipients and alter the subject and content (adding the environment variable $BUILD_LOG to the content will add the whole console output of the build and the test to the sent email),
    Configure Mails
  • configure the “E-mail Notification” plugin by setting the SMTP server.
  • and navigate to “Global Tool Configuration” and configure your “MSBuild” plugin by choosing the “msbuild.exe” installed on your machine.
    Configure MSBuild

Add New Job

Now, as the system is configured, we can add a new Jenkins job, which will update the checked-out files from an SVN repository, build both the application under test and the Ranorex automation project, execute the application under test as well as the automation code, and send an email with the report file attached.

Start by creating a new item. Choose “Build free-style software project” as job type and enter a job name:

Add New Item

Configure Source Code Management

Next, we have to check out the source of both the application under test and our test automation project. Start by choosing Subversion as the source code management tool. Then, enter the repository holding your application under test as well as your test automation project. Finally, choose “Use ‘svn update’ as much as possible” as the check-out strategy:

Configure SVN

With this configuration, the application under test as well as the test automation project will be checked out and updated locally.

Add Build Steps

Now, as the source code management is configured, we can start with processing the updated files.
First of all, let’s add MSBuild steps for both projects:

Add MSBuild Buildstep

Choose your configured MSBuild version and enter the path of the solution file relative to the repository root (which is the workspace folder of the Jenkins job) for both the automated and the automating project:

Added MSBuild Buildsteps

By adding these two build steps, the executables will be built automatically. Now the newly built application should be tested.
This can be accomplished by adding a new “Windows batch command” build step that starts the test suite executable:

Add Batch Buildstep

Added Batch Buildstep

As you can see, some command line arguments are passed to the test suite executable.

In this sample, the command line arguments “/zr”, which triggers the test suite executable to generate a zipped report file, and “/zrf:.\Reports\Report-Build-%BUILD_NUMBER%.rxzlog”, which defines the name and the location of the generated zipped report file, are used.

You can find a list of all available command line arguments in the section “Running Tests without Ranorex Studio” in our user guide.
The test suite executable returns “0” on success and “-1” on failure. Based on this return value, Jenkins will mark the build as successful or failed.
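
Putting it together, the batch build step might look like the following (the executable name and folder layout are examples; substitute the output path of your own test suite project):

REM Run the Ranorex test suite executable built in the previous MSBuild step
cd AutomatedTests\bin\Debug
AutomatedTests.exe /zr /zrf:.\Reports\Report-Build-%BUILD_NUMBER%.rxzlog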

Add Post-Build Action

After building and executing the application under test and the Ranorex test script, we will send an email that informs us about the outcome of the triggered build.
This email should include the zipped report file mentioned before as an attachment.
To do so, add the new post-build action “Editable Email Notification”, choose the report file location defined before as attachment, and add triggers for each job status you want to be informed about. In this sample, an email will be sent if a job has failed or succeeded.

Added Mail Action

Run Job

Once you’ve completed these steps and saved your changes, check if everything works as expected by clicking “Build now”:

Build Now

After running the generated job, you will see all finished builds within the build hierarchy. Icons indicate the status of the individual builds.
You can view the zipped report files of all builds by opening them in the local workspace (“Workspace/Reports”):

Build History

As configured before, an email will be sent to the specified email address(es), including the console output in the email text as well as the generated zipped report file as attachment.

Add Repository Hook

So far, we can only trigger a build manually. As we are working with Subversion, it would be beneficial to trigger the build for each commit.
To do so, you can add a server-side repository hook, which automatically triggers Jenkins to start a new build for each committed change, as described in the Subversion plugin documentation.

Alternatively, you can activate polling of the source code management system as a build trigger in your Jenkins job configuration.

As shown in the following picture, you can define the schedule on which the source code management system will be polled (e.g. 5 minutes after every full hour):

Added Build Trigger
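
For reference, Jenkins polling schedules use cron syntax, so the example above (5 minutes after every full hour) would be entered as:

# minute hour day-of-month month day-of-week
5 * * * *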

Conclusion

Following the steps above, you will be able to easily set up a continuous integration process that performs automated testing of the application you develop. Each commit will now trigger an automated test run. Once the test run has finished, you’ll instantly receive an email with the Ranorex test report.

Note: This blog was originally published in July 2012 and has been revised to reflect recent technical developments.

The post Integrate Automated Testing into Jenkins appeared first on Ranorex Blog.

Categories: Companies

Pair Testing

Agile Testing with Lisa Crispin - Fri, 09/23/2016 - 04:52
Ernest and Chester, strong-style pairing

I’ve been meaning to write about pair testing for ages. It’s something I still don’t do enough of. Today I listened to an Agile Amped podcast about strong style pairing with Maaret Pyhäjärvi & Llewellyn Falco. I’ve learned about strong style pairing from Maaret and Llewellyn before, and even tried mob programming with them at various conferences. The podcast motivated me to try strong style pairing at work.

I’m fortunate that the other tester in our office, Chad Wagner, has amazing exploratory testing skills. We pair test a lot. Chad says that pair testing is like getting to ride shotgun versus having to drive the car. You have so much more chance to look around. He readily agreed to experiment with strong style pairing.

I’m going to oversimplify, I am sure, but in strong style pairing, if you have an idea, you give the keyboard to your pair and explain what you want to do. Chad and I worked from an exploratory testing charter using a template style from Elisabeth Hendrickson’s Explore It! We used a pairing station that has two monitors, two keyboards and two mice. It took a lot of conscious effort to not just take control, start typing and testing with our idea. Rather, if I had an idea, I would explain it to Chad and ask him to try it, and vice versa.

Since Chad is pretty new to our team, when we pair, I have a tendency to just take control and do stuff. But he has the better testing ideas. Strong style pairing was much more engaging than what we had been doing. Chad would tell me his great idea for something to try and I’d do it. An idea would spring to my head and I’d explain it to him.

One interesting outcome is we discovered we had different ways of hard refreshing a page, and neither of us knew the other way. I use shortcut keys, and Chad uses a menu that reveals itself only when you have developer tools open in Chrome. That in itself made the strong style pairing worthwhile to me!

We ended up finding four issues worthy of showing to the developers and product owner, and writing up as stories. Not a bad outcome for a couple of hours of pairing. More fun and more bugs than I would have found on my own.

Now, if only I could get my team to mob program…

The post Pair Testing appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

NUnit-Summary Becoming an “Official” NUnit Application

NUnit.org - Thu, 09/22/2016 - 23:39

NUnit-Summary is an “extra” that I’ve maintained personally for some time. It uses built-in or user-supplied transforms to produce summary reports based on the results of NUnit tests.

I have contributed it to the NUnit project and we’re working on updating it to recognize NUnit 3 test results. The program has never had a 1.0 release, but we expect to produce one soon.

This old post talks about the original nunit-summary program.

Categories: Open Source

An Engine Extension for Running Failed Tests – Part 1: Creating the Extension

NUnit.org - Thu, 09/22/2016 - 20:47

In a recent online discussion, one of our users talked about needing to re-run the NUnit console runner, executing just the failed tests from the previous run. This isn’t a feature in NUnit but it could be useful to some people. So… can we do this by creating an Engine Extension? Let’s give it a try!

The NUnit Test Engine supports extensions. In this case, we’re talking about a Result Writer extension: one that takes the output of a test run from NUnit and creates an output file in a particular format. Here, we want the output to be a text file with each line holding the full name of a failed test case. Why that format? Because it’s exactly the format that the console runner already recognizes for the --testlist option. We can use the file that is created as input to a subsequent test run.

Information about how to write an extension can be found on the Writing Engine Extensions page of the NUnit documentation. Details of creating a ResultWriter extension can be found on the Result Writers page.

To get started, I created a new class library project called failed-tests-writer. I made sure that it targeted .NET 2.0, because that allows it to be run under the widest range of runtime versions, and I added a package reference to the NUnit.Engine.Api package. That package will be published on nuget.org with the release of NUnit 3.5. Since that’s not out yet, I used the latest pre-release version from the NUnit project MyGet feed by adding https://www.myget.org/F/nunit/api/v2 to my NuGet package sources.

Next, I created a class to implement the extension. I called it FailedTestsWriter. I added using statements for NUnit.Engine and NUnit.Engine.Extensibility and implemented the IResultWriter interface. I gave my class Extension and ExtensionProperty attributes. Here is what it looked like when I was done.

using System;
using System.IO;
using System.Text;
using System.Xml;
using NUnit.Engine;
using NUnit.Engine.Extensibility;

namespace EngineExtensions
{
    [Extension, ExtensionProperty("Format", "failedtests")]
    public class FailedTestsWriter : IResultWriter
    {
        public void CheckWritability(string outputPath)
        {
            // Opening the file for writing throws if the path is not writeable;
            // the empty using block just ensures the writer is closed again.
            using (new StreamWriter(outputPath, false, Encoding.UTF8)) { }
        }

        public void WriteResultFile(XmlNode resultNode, string outputPath)
        {
            using (var writer = new StreamWriter(outputPath, false, Encoding.UTF8))
            {
                WriteResultFile(resultNode, writer);
            }
        }

        public void WriteResultFile(XmlNode resultNode, TextWriter writer)
        {
            // Write the full name of each failed test case, one per line
            foreach (XmlNode node in resultNode.SelectNodes("//test-case[@result='Failed']"))
                writer.WriteLine(node.Attributes["fullname"].Value);
        }
    }
}

The ExtensionAttribute marks the class as an extension. In this case, as in most cases, it’s not necessary to add any arguments. The Engine can deduce how the extension should be used from the fact that it implements IResultWriter.

As explained on the Result Writers page, this type of extension requires use of the ExtensionPropertyAttribute so that NUnit knows the name of the format it implements. In this case, I chose to use “failedtests” as the format name.

The CheckWritability method is required to throw an exception if the provided output path is not writeable. We do that very simply by trying to create a StreamWriter. The empty using statement is merely an easy way to ensure that the writer is closed.

The main point of the extension is accomplished in the second WriteResultFile method. A foreach statement selects each failing test, which is then written to the output file.

Testing the Extension

That explains how to write the extension. In Part 2, I’ll explain how to deploy it. Meanwhile, I’ll tell you how I tested my extension in its own solution, using nunit3-console.

First, I installed the package NUnit.ConsoleRunner from nuget.org. I used version 3.4.1. Next, I created a fake package subdirectory in my packages folder, so it ended up looking like this:

packages
    NUnit.ConsoleRunner.3.4.1
    NUnit.Engine.Api.3.5.0-dev-03211
    NUnit.Extension.FailedTestsWriter
        tools
            failed-tests-writer.dll

Note that the new extension “package” directory name must start with “NUnit.Extension.” in order to trick the console-runner and engine into using it.

With this structure in place, I was able to run the console with the --list-extensions option to see that my extension was installed and I could use a command like

nunit3-console mytests.dll --result:FailedTests.lst;format=failedtests

to actually produce the required output.
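
With the failed-test list written, a follow-up run can feed that file straight back to the console runner through the --testlist option mentioned earlier:

nunit3-console mytests.dll --testlist=FailedTests.lst

Only the tests named in FailedTests.lst are executed, giving us the re-run of failures we set out to build.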

Categories: Open Source

UrbanCode Deploy Canary Sample Application

IBM UrbanCode - Release And Deploy - Thu, 09/22/2016 - 20:25

The canary sample is a simple application that deploys a file to your target environment. Use it when you are setting up new environments to verify end-to-end component flow, or as a one-time diagnostic tool to check that an existing environment is still healthy. You can schedule the canary to deploy the file daily for a recurring check of your end-to-end IBM UrbanCode environment. Use it as a way to detect inconsistencies in your scheduled deployments by reviewing its history.

The canary sample is available from GitHub at https://github.com/IBM-UrbanCode/ucd-canary. Clone it, and try it in your environment. Submit pull requests to enhance its function further.

Categories: Companies
