
Feed aggregator

A journey to Kubernetes on Azure

With the ongoing migration to Azure, I would like to share my thoughts on one of the biggest challenges we have faced thus far: orchestrating container infrastructure. Many of the Jenkins project’s applications run as Docker containers, making Kubernetes a logical choice for running them, but it presents its own set of challenges. For example, what would the workflow from development to production look like? Before going deeper into the challenges, let’s review the requirements we started with: Git We found it mandatory to keep track of all infrastructure changes in Git repositories, including secrets, in order to facilitate review, validation, and rollback of all infra changes. Tests Infrastructure contributors...
Categories: Open Source

The Trump White House Takes Aim at Cybersecurity

Sonatype Blog - Fri, 05/12/2017 - 20:24
“The executive branch has for too long accepted antiquated and difficult-to-defend IT,” declared President Donald Trump in a new Executive Order released on Thursday, May 11, 2017.

The Magnitude of Risk and Importance of a Plan

Over the past few years, we have witnessed mega-breaches that...

To read more, visit our blog.
Categories: Companies

Why Access to Devices is Still a Concern (and What to Do About It)

Testlio - Community of testers - Fri, 05/12/2017 - 19:28

Android application testing is notoriously challenging. There are at least 24,000 different Android devices in the world made available by over 1,300 brands. Even for developers well-versed in the conundrum of Android fragmentation, those stats are still shocking.

In a previous post, I talked about how QA teams can strategically tackle this issue. Rather than letting Android testing swallow the entire QA budget or ignoring the issue (and risking a lack of support for large numbers of users), teams need a strategy that’s realistic to achieve.

To narrow down which devices to test on and which test cases to run on each device, QA teams need to incorporate in-app analytics, market data, and unique hardware concerns into the decision-making process.

And yet, even when narrowed, access to those devices is still a huge concern. Today I’m talking about why this issue persists and what the best solution is.

Why Android fragmentation will always exist

According to a survey conducted by Sencha and Forrester, developing and deploying for multiple devices is the biggest barrier to development. This survey pertained to app development of any kind, from an Amazon tablet to a desktop browser. Of all the other concerns—including security, cost, and short cycles—device fragmentation topped the list as the biggest blockade to app development as a whole.

Apple does have multiple products, product versions, and OS versions, but the combinations created by iOS are nothing when compared to those created by Android.

There are three main factors involved:

  • OEM custom features
  • Internal hardware
  • Android version being run

Not only do consumers often NOT update to the current version of Android, but manufacturers will develop and sell devices running on previous versions.

Those 1,300 brands all have various products and product versions, not to mention vastly different hardware features, such as screen size, resolution, and shape.

And more OEMs continue to enter the mobile space. Since manufacturers don’t have to create their own OSes, it’s much easier for them to get products to market.

That means there will only be more devices with more particularities.

The number of IT professionals reporting that they don’t have the right devices available for testing almost doubled in one year, from 26% to 44%. This jump is partly attributable to the need for IoT testing, which has sharply increased. The volume of devices that connect to the internet, mobile phone or otherwise, is clearly only going to grow.

The failings of device emulators

When QA teams manage to strategically decide which of the thousands of devices to support, they must then choose how they will support them. Will they emulate the device or test it manually?

For lower-priority devices that still need coverage, emulators are a great option. This technology allows QA engineers to test many devices on their PCs from a single platform by loading various device profiles. Emulated devices are a smart choice during the initial stages of testing for all identified important devices, so that teams can test quickly and improve quality early on.

But there are some clear disadvantages to relying on device emulators:

  • They run with desktop-class processing power, which does not match mobile processing speeds
  • Screen resolutions and image rendering aren’t guaranteed to be accurate
  • External conditions (like the effect of loud noise on sensors or slow network connections on task completion) can’t be tested
  • Internal hardware differences (like CPUs and GPUs) aren’t tested

Emulators simply can’t reproduce many of the differences that make strategic device coverage important in the first place.

The most reliable way to get Android device coverage

Humans! Mobile testing is synonymous with manual testing because of the complex nature of mobile apps and how intimately they’re used. Testing on actual devices is a must for deploying to supported devices with absolute confidence.

Every app is different, so the needs of every team are different. Some apps (particularly games) require extensive manual testing to ensure that the product is interfacing with sensors properly and the graphics are on-point.

For most apps, responsiveness can’t be sacrificed, and the multitude of available screen sizes is reason enough to require manual testing.

But it really doesn’t make sense to collect hundreds or thousands of devices. The cost of devices is high, and they will need to be connected to a network to test outside of WiFi. Most QA teams don’t have the budget for maintaining a large collection of devices, or the time to actually test on them.

Plus, there are hundreds of mobile network operators in the world. Network technologies, standards, and infrastructures can all potentially affect the flow of information from your servers to a device, meaning that location also matters.

Testlio has expert testers around the world who love collecting devices so you don’t have to. QA managers keep track of which testers own which devices, so they can assemble the right team. This testing strategy mimics your user base and allows QA teams to breathe easy.

Why mobile testing should be left to the experts

In the World Quality Report 2016 – 2017, IT teams identify the lack of “mobile testing experts” as a struggle equal to that of the lack of devices. That’s because for many enterprises, mobile testing is still considered a “relatively new” skill.

Succumbing to the temptation to NOT cover all necessary devices means that the user experience goes unsupported. Similarly, leaving part of mobile testing to the users leads to poor app ratings and low adoption.

Expert mobile testing isn’t just about finding bugs. It’s about verifying the overall quality of an app, whether it achieves its purpose, how seamless the experience is, and detailing the ways it can improve.

To access a global community of expert mobile testers, get in touch with us for the best demo ever.

Categories: Companies

What’s New in QA Wizard Pro 2017.1

The Seapine View - Fri, 05/12/2017 - 14:14

QA Wizard Pro 2017.1 includes some great new features to help improve your automated testing process.

Change the search method for multiple controls at the same time

If QA Wizard Pro cannot locate or distinguish between controls during playback, you may need to change the search method used. Now you can select multiple controls in the application repository and use the new Search Method shortcut menu to quickly switch them to a more accurate search method at the same time. Learn more.

Use test data from PostgreSQL and SQLite databases

You can now create external datasheets to import or link to existing data in PostgreSQL or SQLite databases for use in data-driven scripts. Learn more.

Improve the accuracy of optical character recognition

You can also improve the accuracy of text returned by optical character recognition (OCR) by setting default playback options. These new options control the contrast, image scale, grayscale conversion, and language file used to read graphical text in applications. You can also use new OCR statements to adjust these options during script playback. Learn more.

Manually add windows and controls to use in scripts

If QA Wizard Pro cannot capture windows or controls you need to test because they do not exist in the application or cannot be displayed during recording, you can now manually add them to the application repository to use them in scripts. This lets you create scripts that test new controls before they are implemented in the application, if you use a test-driven development process. Learn more.

Other enhancements
  • Use the SetAllFilesReadOnly and SetFileReadOnly statements to allow or prevent scripts from modifying files.
  • Use the EncryptString statement to conceal sensitive text entered in fields, such as passwords.
  • Use the Err.CallStack and Err.LineText statements to generate additional output that is helpful when debugging scripts.
  • Right-click a file in the workspace and choose Open Containing Folder to quickly locate the source file on your computer.
Want to know more?

Check out the release notes and help to learn more about QA Wizard Pro 2017.1.

Categories: Companies

Deploy applications to Bluemix and introduction to UrbanCode Deploy

IBM UrbanCode - Release And Deploy - Fri, 05/12/2017 - 08:17

I recently came across a common use case: deploying a Liberty application to Bluemix. This use case provides a great opportunity to introduce what UrbanCode Deploy (UCD) is and what can be done with it.

There is a lab in the Docs section that:

  • Shows how to install UCD, using an install script to avoid manual installation.
  • Introduces the main concepts of UCD such as components, applications and processes.
  • Shows how to model a software deployment by providing one solution to the use case: deploying a Liberty application to Bluemix.

Prerequisites for the lab include a Bluemix account (a free account is available) and a Linux machine. The preferred Linux version is Red Hat Enterprise Linux 7 because the scripts have been tested with it. Any other Linux distribution should work too, as should Windows.

In the lab, the first step is to install UCD. The install script automates the installation so users can get started quickly with UCD. This UCD install script, and other scripts, are available on GitHub.

After the UCD installation, the lab continues to model software deployment as described in UCD Knowledge Center and shown in the image below (image is taken from the Knowledge Center).

Modeling software deployment

The Liberty application used in this lab is the Daytrader application.

At the end of the lab, you will have good knowledge about UCD and a good starting point to create your own automated software deployments.

Categories: Companies

Meet the Bees: Amanda DeLuise

In every Meet the Bees blog post, you’ll learn more about a different CloudBees Bee. Let’s meet Amanda!

Who are you? What is your role at CloudBees?

I’m Amanda DeLuise and I’m a Customer Success Manager here at CloudBees. I’m primarily responsible for managing our customers’ success! Less literally, CSMs are communication experts and project managers. We coordinate CloudBees resources to ensure our customers are meeting their business and technical goals. Whether it’s escalating a support ticket, contacting the Account Executive about purchasing extra entitlements, or setting up a customer with a Professional Services engagement, our goal is to maximize the value of the customer’s subscription.

In addition to my CSM duties, I’ve also taken to planning and running Jenkins Area Meetups (JAMs) around the Triangle (Raleigh, Durham, Chapel Hill).

When I’m not at work, I keep pretty busy. I work with a few local social justice organizations in the evenings and on weekends, and recently joined a comedy writing workshop. I’m sure my coworkers can attest to my hilarity (just kidding, I’m pretty quiet and I’m sure most people have never heard me speak aloud). I also have two cats (Lemon and Basil), so taking pictures of them takes up a decent chunk of my free time.

What does a typical day look like for you? What are CloudBees customers like?

A typical day starts with coffee. We have a lot of coffee connoisseurs in the Raleigh office, so someone is always grinding fresh beans and brewing something delicious. It’s honestly the best office coffee I’ve ever had.

When decently caffeinated, I go through my emails. I have meticulously filtered and color-coded labels, so it’s easy for me to determine what needs my immediate attention each morning. I start with direct emails from customers, move on to tickets that need my feedback, and check to see if I have any meeting invites for the day. If it’s a relatively slow call day, I reach out to customers to see if we can get something on the calendar for a check-in call. It’s important to stay on top of the goals my customers have for each quarter, how close they are to achieving them, and how we can help them.

Lastly, I’m almost always creating or updating a spreadsheet at some point in the day. Staying organized is super important when you’re handling 30+ unique accounts. I want to make sure I always know what my customers are doing, what they’re thinking about doing, and what features or plugins are important to them.

Do you have any advice for someone starting a career in the CI/CD/DevOps/Jenkins space?

Be patient! There are literally thousands of ways to use Jenkins, and learning about CI/CD was a bit like drinking out of a fire hose when I first started. But a little patience and a lot of Knowledgebase articles go a long way. And when you do make a mistake (which will be inevitable), look for the lessons in it.

Similarly, ask a lot of questions. Not only are people willing to help, but I’ve found they love talking about projects they’re working on or tips and tricks. It’s much easier to digest all of the information when you add a human element to it. Jenkins is more than just code running on a server somewhere out in the ocean, Jenkins is a community.

What has been the best thing you have worked on since joining CloudBees?

It’s been awesome getting to work with the marketing team organizing the JAMs. As you may have guessed from my mention of meticulously filtered labels and spreadsheets, I love organizing. I’m also a people person, so it’s been wonderful meeting local Jenkins users and customers face-to-face instead of over the phone or through email.

What is your favorite form of social media and why?

Vine (RIP)! I could watch Vine compilations for hours. It’s a mark of comedic genius to be able to make people laugh in a 6 second video.

Favorite TV show character - why is this character your favorite?

It’s a tie between Olivia Benson from Law and Order: Special Victims Unit, Annalise Keating from How to Get Away with Murder, and Linda Belcher from Bob’s Burgers. I have a thing for well-written, multi-dimensional women characters! 


Blog Categories: Jenkins
Categories: Companies

Parasoft Releases New API Testing & Service Virtualization Solutions

Software Testing Magazine - Thu, 05/11/2017 - 18:12
Parasoft has announced the latest enhancements to its API testing and service virtualization solutions. Parasoft SOAtest and Virtualize are tools that enable teams to quickly solve today’s...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

SmartBear Releases SoapUI NG Pro 2.0

Software Testing Magazine - Thu, 05/11/2017 - 17:35
SmartBear Software has announced SoapUI NG Pro 2.0, the latest version of the popular API testing tool. The new version of SoapUI NG Pro introduces an interactive dashboard for testers to gain...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

Beta 1

Ranorex - Thu, 05/11/2017 - 16:43

The post Beta 1 appeared first on Ranorex Blog.

Categories: Companies

What’s New in Surround SCM 2017.1

The Seapine View - Thu, 05/11/2017 - 14:21

Surround SCM 2017.1 is here! This release includes some nice enhancements that you’ll want to get familiar with.

More options for reviewing files in code reviews

You now have more flexibility to see the exact changes you’re interested in and how changes look when reviewing files in code reviews. You can show differences for versions not included in a review to get more context. You can also ignore case and white-space differences, change the differences output, and change the font and tab width. Learn more.


Code review options


Get files with historical filenames

When getting files by label or timestamp, you can now retrieve them using the name they had when they were labeled or at the time specified by the get. Learn more.

More flexible text end-of-line formatting

Surround now supports additional file types when adding files, setting file properties, and setting server options to auto-detect or ignore files based on filename or extension:  Text (CR/LF), Text (LF), UTF-8 Text (CR/LF), and UTF-8 Text (LF).



When getting text files using the Surround SCM CLI, you can now override the default end-of-line format set in the user options. This is helpful when build scripts that run on one operating system get files used exclusively in builds for another operating system. Learn more.
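At the byte level, the end-of-line distinction is simply CR/LF (`\r\n`) versus LF (`\n`). A minimal Python sketch of the kind of conversion such an override performs (an illustration of the concept only, not Surround SCM’s implementation):

```python
def normalize_eol(text: bytes, eol: bytes = b"\n") -> bytes:
    """Convert all line endings (CR/LF or LF) to the requested format."""
    unified = text.replace(b"\r\n", b"\n")   # collapse CR/LF to LF first
    return unified.replace(b"\n", eol)       # then expand to the target EOL

# A file committed on Windows (CR/LF) prepared for a Linux build (LF):
windows_text = b"line one\r\nline two\r\n"
print(normalize_eol(windows_text))           # b'line one\nline two\n'
print(normalize_eol(windows_text, b"\r\n"))  # round-trips back to CR/LF
```

Collapsing to LF before expanding makes the conversion safe for files with mixed line endings.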

And more!

This release also includes other enhancements, such as:

  • Support for the Jenkins Pipeline feature from the Surround SCM Jenkins plug-in
  • Better performance when switching branches because information is loaded in the background, allowing you to continue working
  • More options for administrators when analyzing and repairing issues in Surround SCM databases

Ready to check out Surround SCM 2017.1? If you have a current support and maintenance plan, upgrades are free. If you’re not already using Surround SCM, try it out today.

Categories: Companies

Hybrid Cloud Problem Patterns: Chasing DNS Lookup Times from AWS EC2

As a performance architect, I get called into various production performance issues. One of our recent production issues happened on a Tomcat app server running on an AWS EC2 instance in a VPC that is joined to an on-premise DNS server. This service calls another microservice. When the service went live, we noticed high response times from a downstream microservice, yet the downstream service’s logs did not show any performance issue.

In this blog, I’ll walk through the steps taken by our tech arch Neeraj Verma to analyze this issue in our production environment, which tools were used, some background information on DNS lookup, and how the problem was resolved. I hope you find this useful!

API High Response Time Analysis in Production Environment

Performance engineering is the science of discovering problem areas in applications under varying but realistic load conditions. It is not always easy to simulate real traffic and find all problems before going live. Therefore, it is advisable to determine how to analyze performance problems not only in test, but also in a real production environment. Having the right tools installed in production allows us to analyze issues and find root causes that are hard to simulate in testing.

The following diagram visualizes our service layout. Our eCommerce APIs call the ShoppingHistory API using a customer ID. The ShoppingHistory API calls DynamoDB and the Customer API to serve requests from eCommerce.

Architectural Overview: Transactional flow when the eCommerce frontend calls the OrderAPI, and how it makes its way through different service layers deployed on AWS.

In order to monitor individual service health, we log the entry and exit calls of each service invocation in a custom telemetry system. Each team uses Kibana/Grafana dashboards to measure health. Through the ShoppingHistory API dashboard, the team could see that time was being spent in Customer Service, even though the Customer Service dashboard did not show any issue at all. This is when the classical blame game would start. In our case we tasked the ShoppingHistory API team with finding the actual root cause. And here is what we did.

Application Monitoring with Dynatrace AppMon

Our tool of choice was Dynatrace AppMon, which we already use for live performance monitoring of all our services in production. Let me walk you through how we identified the high response time and its root cause in Dynatrace AppMon. In case you want to try it on your own, I suggest you do the following:

  1. Get your own Dynatrace Personal License
  2. Watch the Dynatrace YouTube Tutorials
  3. Read up on Java Memory Management
Step #1: Basic Transaction

Once Dynatrace AppMon collects data you can decide whether to analyze it in the Dynatrace AppMon Diagnostics Client or go directly to the Dynatrace AppMon Web interface. With the recent improvements in Dynatrace AppMon 2017 May (v7) the Web Interface is even more convenient when analyzing PurePaths, which is why we go there. In the Web Interface we often start by looking at the Transaction Flow of our System. The Transaction Flow is dynamically generated by Dynatrace thanks to its capability to trace every single transaction, end-to-end, enabled through their PurePath technology.

Looking at the Transaction Flow, we could immediately see that most of the time (91%) was actually spent in the ShoppingHistory JVM instead of Customer Service, which we had assumed until that point to be the problem, as indicated by our logging. Fortunately, Dynatrace AppMon told us otherwise!

The Dynatrace AppMon PurePath highlighted our ShoppingHistory JVM as the response time hotspot.

Step 2: Drill Down into PurePath (show all nodes)

The detailed PurePath shows where most of the time is spent, down to the method itself. In our case we could spot that resolving the address of the backend microservice took about 2s. In the screenshot below you can see that when the frontend service tries to call the backend service it must first open a connection (HttpClient.connect method) which itself has to resolve the passed endpoint address. This method then calls the internal Java classes to do the actual DNS name resolution.

The PurePath tree shows the complete transaction flow, the executed methods, and how long they took to execute. It is easy to spot the 2s execution time of the internal Java DNS name resolution method. The high-level performance overview that you get for each PurePath also gives a good indication of which component is currently the hotspot, clearly indicating the same problem: all the time is spent making web request calls.

Solution

Based on the information collected from our production environment, we searched for a solution online and found an explanation for a similar issue on an IBM blog, which held the answer:

The problem could be lookup issues between IPv6 and IPv4. If the Domain Name System (DNS) server is not configured to handle IPv6 queries, the application may have to wait for the IPv6 query to time out before falling back to IPv4. By default, name resolutions requiring the network are carried out over IPv6 if the operating system supports both IPv4 and IPv6. However, if the name service does not support IPv6, a performance issue may be observed because the initial IPv6 query must run until its timeout before a successful IPv4 query can be made.

To reverse the default behavior and use IPv4 over IPv6, add the following Generic JVM argument:

  1. ****

We added this parameter to the JVM arguments and restarted the JVM.
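The post redacts the exact argument, but the generic JVM property commonly documented for preferring IPv4 lookups over IPv6 is the following (an assumption on our part, since the original flag is elided):

```shell
# Assumed flag -- the original post redacts the exact argument.
# Prefer the IPv4 stack so name lookups don't wait for IPv6 queries to time out.
java -Djava.net.preferIPv4Stack=true -jar app.jar
```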

Now the transaction executed much faster, as shown in Dynatrace:

The PurePath overview shows a total execution time of 128ms. Most of the time is now spent in other areas, such as database calls, but no longer in resolving DNS addresses. The PurePath Top Contributors tab makes this even clearer: HTTP calls now finish in milliseconds. The API Breakdown also clearly shows that we solved this one problem. Now we can focus on the next hotspots if we want to improve performance further.

This story showed us how important it is to monitor all your applications and services in all the different environments in which they run. We will see a larger push towards hybrid cloud, which means we have to find a way to detect these problems. Dynatrace natively supports all these technologies and, thanks to its analytics capabilities, makes it easy to find and fix them.

The post Hybrid Cloud Problem Patterns: Chasing DNS Lookup Times from AWS EC2 appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Software Testing Strategy: New Model, Better Outcome

Software Testing Magazine - Wed, 05/10/2017 - 17:30
Pyramids? Quadrants? Cupcakes?! There are a wide array of models that describe approaches to software testing and test automation strategy and their possible positive (or negative) outcomes. This...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

Accelerate Products Development at SonarSource

Sonar - Wed, 05/10/2017 - 16:53

We founded SonarSource 8 years ago with a dream to one day provide every developer the ability to measure the code quality of their projects. And we had a motto for this: “Democratize access to code quality tooling”. To make this dream come true, we invested all our time and energy into developing the SonarQube platform, hiring a great team, and building an open source business model to sustain the company’s growth and keep our freedom. We have also invested a lot in the relationship with our community, giving a lot and also getting back a lot.

Thanks to this approach, here are some examples of what we were able to deliver in the last few years:

  • on-the-fly Feedback in the IDE with SonarLint
  • analysis of 18 languages
  • deep analysis to cover the reliability and security domains
  • high availability and multi-tenancy of the platform to soon launch

After 8 years of effort, we believe we have built great products along with an awesome 60-person company, a solid business, and a great brand. We are very proud of these, but we do not think our dream has come true yet. Why? Because Continuous Code Quality still isn’t a commodity the way SCM, Continuous Integration, and artifact management are: every developer should benefit from the power of a path-sensitive, context-sensitive data flow analysis engine to detect the nastiest bugs and subtlest security vulnerabilities. It should be a no-brainer for anyone who uses VSTS, Travis CI… In other words, everyone writing code should want to benefit from the best analyzers to make sure each line produced is secure, reliable and maintainable.

To take up this challenge, we have made a choice to partner with Insight Venture Partners, one of the very best VCs in our domain. By leveraging their experience, we strongly believe we will be making our dream come true… way sooner than another 8 years!

Simon, Freddy & Olivier
The SonarSource Founders

Categories: Open Source

Cost & traffic control for mobile app monitoring

Dynatrace now provides a Cost and traffic control setting that you can use to reduce your session usage while monitoring your mobile apps.

By default, Dynatrace captures all user actions and user sessions for analysis. This approach ensures complete insight into your application’s performance and customer experience. With the new Cost and traffic control setting, you can optionally reduce the granularity of user-action and user-session analysis by capturing a lower percentage of user sessions.

While this setting can reduce monitoring costs, it also results in lower visibility into how your customers use your mobile applications. For example, a setting of 10% results in Dynatrace analyzing only every tenth user session.
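The “every tenth user session” behavior can be sketched as evenly strided sampling. This is a simplified illustration of the idea, not Dynatrace’s actual selection algorithm:

```python
def should_analyze(session_index: int, percentage: int) -> bool:
    """Keep an evenly distributed `percentage` of sessions by index."""
    if percentage >= 100:
        return True
    # With 10%, every 10th session (indices 0, 10, 20, ...) is analyzed.
    stride = 100 // percentage
    return session_index % stride == 0

analyzed = [i for i in range(30) if should_analyze(i, 10)]
print(analyzed)  # [0, 10, 20]
```

Strided selection keeps the captured sessions spread uniformly over time rather than clustered, which preserves a representative sample at lower cost.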

To limit the number of user sessions that Dynatrace analyzes

  1. From the navigation menu, select Applications.
  2. Select the mobile application you want to configure.
  3. Click the Browse (…) button and select Edit.
  4. On the Settings page, click the General tab.
  5. Type a value of less than 100% into the Analyze % of user sessions text field.
    With this setting defined, Dynatrace will analyze an evenly distributed number of user sessions that equates to the percentage of user sessions that you’ve specified.

The post Cost & traffic control for mobile app monitoring appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Dynatrace Managed feature update, version 118

Manual configuration of IP addresses for cluster nodes

To simplify the installation of Dynatrace within complex networks (where auto-detected IP addresses typically aren’t usable as endpoints for OneAgent traffic or the Dynatrace web UI), Dynatrace now enables you to manually specify public IP addresses for cluster nodes. This feature is particularly valuable for installations of Managed cluster in cloud and hybrid cloud scenarios where multiple interfaces and IP-forwarding may be set up.

To manually configure a public IP address for a cluster node

  1. From the Dynatrace Managed deployment status page, click the cluster node tile of the infographic.
  2. On the cluster node page, type the correct public IP address into the OneAgent communication endpoint text field.
  3. Click the checkmark button to confirm that the endpoint is valid.
    Dynatrace Managed
Configure permission for viewing sensitive data

Because Dynatrace OneAgent can now potentially capture sensitive customer data, we’ve enhanced Dynatrace Managed permissions management with a new privilege that governs access to confidential request data. This setting can be configured either from the User groups settings page (see example below) or from the environment settings page.

To define which user groups can view sensitive data

  1. From the Dynatrace Managed navigation menu, go to User authentication > User groups and select a user group.
  2. Within the Sensitive request data permission section at the bottom of the page, click the Add button to add environments for which this user group has permission to view sensitive data.

Configure permissions

The post Dynatrace Managed feature update, version 118 appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

5 Common Challenges When Using Selenium

Ranorex - Wed, 05/10/2017 - 14:10

There’s no denying the importance of Selenium when it comes to web browser automation. While its many automation benefits are obvious, there are common challenges both testers and developers alike encounter when using Selenium. In this blog post, I want to address five common challenges and show you how you can solve them with the Selenium WebDriver integration in Ranorex Studio.

Identifying dynamic content

The challenge

It can be tricky to identify content with dynamically generated attributes using Selenium. Dynamic content is based on dynamic identifiers (IDs). These IDs are generated anew every time an element is displayed, which makes it difficult to address the web element based on this attribute. For example, the IDs in web applications based on the “Yahoo User Interface Library” look something like “yui_3_8_1_1_13679224741219_543”. This ID changes every time the webpage reloads. In this case, you cannot use the ID locator to identify the element.
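
In plain Selenium, the usual workaround is to key on the stable part of such an ID rather than the whole thing. A small sketch of the idea, using YUI-style IDs like the one above (the regular expression and CSS selector here are illustrative):

```python
import re

# Two snapshots of the same element's ID across page reloads: the numeric
# tail changes on every reload, but the library prefix does not.
seen_ids = ["yui_3_8_1_1_13679224741219_543", "yui_3_8_1_1_13679224741220_544"]

# A locator keyed on the stable prefix survives the reload.
stable = re.compile(r"^yui_3_8_1_1_\d+_\d+$")
assert all(stable.match(element_id) for element_id in seen_ids)

# In Selenium terms, the same idea is a CSS attribute-prefix selector
# instead of a full-ID lookup:
css_locator = "[id^='yui_3_8_1_1_']"
```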

How to solve it

Ranorex comes with a set of RanoreXPath weight rules that automatically decide which attribute to use to identify objects based on particular web libraries. If you want to resolve a specific problem when identifying web elements, you can add a rule to this set. Please have a look at the blog post “Automated Testing and Dynamic IDs” for detailed instructions. Once added to the rule set, you can create your script-free or code-based test scenarios and run them on your WebDriver endpoints. As a result, the object repository will be automatically filled with robust repository items.

RanoreXPathWeight Editor

Dealing with timing issues

The challenge

Another challenge, especially when testing dynamic web applications, is handling timing issues – for example, when a query takes longer to provide the desired output. In Selenium, you have to manually implement a wait mechanism in code to overcome this issue.
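
In Selenium's Python bindings, the standard fix is WebDriverWait plus an expected condition; underneath, that is just a poll-until-timeout loop. A browser-free sketch of the mechanism (`wait_for` and `slow_query` are illustrative names, not Selenium API):

```python
import time

def wait_for(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value, or raise on timeout.

    This is the pattern WebDriverWait implements: keep re-checking instead
    of guessing a fixed sleep that is either too short or too long.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(poll)

# A query that only produces output on its third poll, standing in for a
# slow web application.
calls = {"n": 0}
def slow_query():
    calls["n"] += 1
    return "rows" if calls["n"] >= 3 else None

assert wait_for(slow_query, timeout=5.0, poll=0.01) == "rows"
```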

How to solve it

Ranorex automatically creates search time-outs for each item in the object repository, providing a built-in safety net for possible timing issues. You can edit the search time-out of a repository item in Ranorex Studio (context menu item Properties). For further details about repository time-outs, please have a look at the chapter Waiting for UI Elements – Repository Timeouts in our user guide. In addition to the automatically added search time-outs, you can explicitly wait for a specific element to appear or disappear using a Wait For Exist or a Wait For NotExist action in the actions table. You can get an overview of all possible actions as well as detailed information about specific actions in the section Types of Action Items in our user guide.

Wait For Action

Maintaining web elements

The challenge

Test maintenance is dreary and, unfortunately, unavoidable. Especially in complex test scenarios, it can be difficult to maintain web elements addressed in your automated tests. When using a pure Selenium WebDriver implementation to automate your test scenarios, the same web element may be used multiple times. If this element changes, you then have to alter every occurrence of it manually. Even if you have used the page object pattern to manage web elements, you have to find the element that has changed and fix it in code.

How to solve it

In Ranorex, you can use the central object repository to manage your web elements. Every time an element in your application under test changes, you only have to edit it once and all changes will be automatically applied to all occurrences of this web element. To do so, select the desired element in the repository and open its RanoreXPath. Simply re-track the element using Ranorex Spy to successfully identify the element for future test runs.

Implementing data-driven testing

The challenge

Selenium WebDriver doesn’t have a built-in data-driven testing mechanism. This means that if you want to data-drive your tests, you have to manually connect your automated tests to external data sources, read the data out of these sources, and execute your test scenario with that data.
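
That manual wiring is not much code, but it is code you own and maintain. A minimal, browser-free sketch of the pattern (the CSV columns and the `attempt_login` stub are illustrative; an Excel sheet or SQL result set would be iterated the same way):

```python
import csv
import io

# Inline CSV stands in for the external data source, one row per test run.
ROWS = "username,password,expect_success\nalice,right-pw,yes\nbob,wrong-pw,no\n"

def attempt_login(username, password):
    # Stand-in for the real WebDriver steps against the login form.
    return password == "right-pw"

outcomes = []
for row in csv.DictReader(io.StringIO(ROWS)):
    actual = attempt_login(row["username"], row["password"])
    # Record whether the observed behavior matched the row's expectation.
    outcomes.append(actual == (row["expect_success"] == "yes"))

assert all(outcomes)
```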

How to solve it

Data-driven testing is already an integral aspect of Ranorex. Without any additional preparation, you can choose between several types of data connectors (simple data table, CSV file, SQL database, and Excel file). Simply use the data from these external sources to automatically execute your test cases with different data sets. You can find detailed instructions on how to perform data-driven testing in the dedicated user guide chapter.

Data-Driven Testing

Reporting

The challenge

An important aspect of any test automation environment is getting a detailed and easily understandable report for each test execution. When you use Selenium WebDriver, there are several ways to achieve a reporting mechanism. All of these, however, have to be implemented in code using third-party integrations.

How to solve it

Using the Ranorex Selenium WebDriver integration, you don’t have to worry about reporting. When you execute a website test on a WebDriver endpoint, a detailed Ranorex report will be automatically generated. This report provides you with a comprehensive overview of the entire test execution flow. As it is a JUnit compatible report, you can easily integrate it into any CI process.
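
A JUnit-compatible XML report is what makes the CI hand-off simple: any CI server, or a few lines of script, can read the pass/fail results. A sketch against a synthetic report (the XML below is illustrative, not an actual Ranorex report):

```python
import xml.etree.ElementTree as ET

# Synthetic JUnit-style report; real JUnit-compatible reports share this shape.
REPORT = """<testsuite name="smoke" tests="3" failures="1">
  <testcase name="login_ok"/>
  <testcase name="search_ok"/>
  <testcase name="checkout_fails"><failure message="timeout"/></testcase>
</testsuite>"""

suite = ET.fromstring(REPORT)
# A test case failed if it contains a <failure> child element.
failed = [case.get("name") for case in suite if case.find("failure") is not None]
assert failed == ["checkout_fails"]
```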


As you can see, the matchless Selenium WebDriver integration in Ranorex Studio 7 allows you to finally use the best of both frameworks to your advantage and address common Selenium WebDriver pain points.


The post 5 Common Challenges When Using Selenium appeared first on Ranorex Blog.

Categories: Companies

Advice on Balancing Testers for Embedded Scrum Teams

Gurock Software Blog - Wed, 05/10/2017 - 01:49

3 Components: Technical Tester, Tool Smith, The Generalist

The first scrum team I worked in was small. The developers sat on their side of the building and worked on new feature code. I sat on the other side with the testing group, looking for problems in that new code. Testers and developers met once each morning to talk about our progress, and what was preventing us from moving forward. Our version of scrum and agile was a waterfall process that took two weeks.

Each phase of the project took place in sequence, so that progress would flow steadily forward. Today I view this type of process as a “mini-waterfall”: a transitional state, but a step in the right direction. Then again, in terms of improvement, everything is a transitional state. As testers, we needed to adapt our strategy, so our test team created a balanced skill set to help smooth the flow of development.


According to The Scrum Guide by Ken Schwaber and Jeff Sutherland, teams are self-organizing and cross-functional. The Scrum Team consists of a Product Owner, the Development Team, and a Scrum Master. Through my experience, I have observed that there are different roles within our test team too.

The Technical Tester

Ron Jeffries describes the ideal agile team like this: A very small team with just enough skill to deliver one piece of code works on a change until it is ready to deliver to the customer. That’s it.

A balanced scrum team might have a developer or two to build the API, a front-end developer, someone who understands build tools and delivery, and a tester. While the back end is being built, the tester is making one API call at a time to explore the code’s capabilities. The tester will also be busy building automation in an API testing framework. This is the behavior of the “Technical Tester”.

There is a wide spectrum of how technical that tester needs to be: anything from being able to fake calls and debug the API in the browser to writing production code.

Once the API is checked-in, the front-end developer can start wiring up their changes. This presents an opportunity to build a small amount of UI automation. UI automation can be a challenging task (see a previous blog, Consider “Reasonable” UI Test Automation), however building it at this point forces exploration. I can’t build UI automation without exploration. I might start with an idea of what I need to build, but not fully understand the workflow to get it done. Before I can write code, I open the browser and work out how the user would behave. That usually involves discovering different ways to perform a workflow, coming up with questions about the product, and probably discovering bugs. To really shift, and have an effective scrum team that delivers ‘done’ code at the end of each sprint, testers need to understand a little code.

Balanced scrum teams with Technical Testers don’t have sprint lags so that one team member can catch up on building automation, while the other developers move on to the next release. Features are done when they have been developed, explored, and have automation running in continuous integration. A complete feature is a ship-able feature.

The Tool Smith

This person is a programmer with a deep interest in testing. Some might call them “test infected”. The Technical Tester is someone who can use frameworks and write just enough code to build a test; for the most part, that isn’t a whole lot of code. Tool Smiths build the tools that help testers get their work done in a sprint. UI automation built directly with an API like WebDriver is good for a handful of tests, but after a while it becomes increasingly challenging: when code is duplicated, one UI change will require updating several tests. The Tool Smith alleviates these problems by building PageObjects and domain-specific languages (DSLs).

Without a page object, the tester will have to:

  • Write code that navigates to the login page.
  • Wait for the username field to be visible.
  • Wait for the password field to be visible.
  • Enter the username and password.
  • Click submit and wait for the landing page.

With a page object, the test builder simply types logOn().
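
The difference is easy to sketch without a browser. The class and method names below (`FakeDriver`, `log_on`, the field IDs) are illustrative stand-ins, not Ranorex or Selenium APIs:

```python
class FakeDriver:
    """Recording stand-in for a WebDriver so the sketch runs without a browser."""
    def __init__(self):
        self.actions = []
    def get(self, url):
        self.actions.append(("get", url))
    def type_into(self, field, value):
        self.actions.append(("type", field, value))
    def click(self, element):
        self.actions.append(("click", element))

class LoginPage:
    """Page object: the one place that knows the login page's URL and fields."""
    URL = "https://example.test/login"

    def __init__(self, driver):
        self.driver = driver

    def log_on(self, username, password):
        # Navigation, field entry, and submit are hidden behind one call;
        # if the page changes, only this class needs updating.
        self.driver.get(self.URL)
        self.driver.type_into("username", username)
        self.driver.type_into("password", password)
        self.driver.click("submit")
        return self

driver = FakeDriver()
LoginPage(driver).log_on("alice", "s3cret")
```

Every test that needs a logged-in user now calls `log_on()` and never touches the field details directly.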

It is worth noting that the Tool Smith is also helpful in the build and deploy pipeline.

I have started several automation projects over the course of my career. I usually started out with a request from management, focused on shrinking the time it takes to do pre-release testing. We would look at several different tools and build a proof-of-concept (POC) with each. After doing a demo of each tool, a manager would select the most suitable tool and we would move forward with that one. Then during the first couple of weeks we would build page objects or reusable libraries for an API suite. At some point, someone would ask, "Hey, where are those tests running?" No one would have a good answer, because we had been running the tests on our local machines the entire time. Each day we would ask our Ops person to help us get the tests running in Continuous Integration, and each day that person would come up with tasks that were more important. If we had a person who was familiar with build systems and empowered to make changes, our tests would have been running with every build from the first day.

The Generalist


So far, I have discussed the technical skills required to balance a fast-paced scrum team. Those technical skills are not useful unless you already have a solid foundation in testing. I mostly see this skill set in people who have moved between different roles in software development. I spent the first three years of my testing career doing mostly non-technical work in a waterfall company. It wasn’t glamorous, but we delivered software that people paid for. We had a new specification, new project deadline, and new features to test every release cycle. I developed some fundamental testing skills during this period: test techniques like domain and scenario testing. I also developed an understanding of how specifications or user stories were useful, and how they could lead testers astray. My skills in observation, and in discovering what is important, improved. Eventually, I started working on a technical project building UI automation. Those skills helped me better understand what tests would be more useful to run repeatedly and why.

My personal experience has been that people with experience in support, product management, testing, and automation are best suited to the role of a tester on a scrum team.

Getting Real


When people talk about scrum teams, I usually see a waterfall team plus daily stand-ups: the same siloed groups working through the same start-and-stop methodology as they always have. I see balanced scrum teams as something different. A balanced scrum team has a small group of developers, just enough to cover the skill set they need to produce features for this release cycle. It will also have, arguably more importantly, a tester who is skilled in software testing and has a good technical foundation.

The next problem is the roles. Many scrum teams don’t have a Technical Tester or a Tool Smith. They might not even have three testers, or even two. In those cases, I see the skill set fluctuating. The build-master might take over the role of the Tool Smith, or, at larger organizations, a single Tool Smith might support multiple teams. On some teams, the developers are Technical Testers, requiring stories to be tested by someone else before they are called done, and that person might be a programmer. In some cases, a single generalist test coach supports many generalist programmers-who-do-lots-of-things.

As I hinted at before with continuous improvement, the key is not to define some perfect end state the team should reach in eighteen months. Nor is the key an immediate process cut-over with no skill development. Instead, the question is what to do next: a little more of this, a new responsibility here, a little less of that.

Get the balance right, and you can be ready to release software any time.

This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association For Software Testing Board of Directors as President helping to facilitate and develop various projects.

Categories: Companies

How to Convert Selenium Scripts to Virtual Browser Scripts

Web Performance Center Reports - Wed, 05/10/2017 - 00:46
Web Performance has discontinued direct support for Selenium/WebDriver in Load Tester. One of the limitations of load testing with Selenium/WebDriver is that it takes lots and lots of cloud machines to generate load. Virtual users, on the other hand, are very efficient, cheaply simulating up to millions of users. This blog post shows one possible option: playing back your Selenium scripts directly into the Load Tester recorder, where they can be edited and played back with lots of virtual users. First, download Load Tester and install it on your Windows machine if you haven’t already. Double-click on the … Continue reading »
Categories: Companies

Learning from rejection: Getting proposals accepted

Agile Testing with Lisa Crispin - Tue, 05/09/2017 - 21:29

Good conferences generally receive far more worthy session proposals than they can cram into their program. I know it hurts to have a proposal rejected. I went through many iterations of that early in my presenting career. Spending hours of time and effort on a well-crafted proposal, only to have it rejected, makes one want to scream!

Turn that rejection into a positive learning experience. Here are a few quick tips based on my own experience. (This is a quick post, I don’t have a lot of time, so I apologize for it being rough and lacking illustrations!)

Get Feedback

When I chair a conference program track, I email each submitter whose proposal we have to reject with specific reasons we turned their proposal down. I think anyone who takes the trouble to submit a proposal deserves to know why it didn’t make the cut.

If you don’t get any information from the conference, see if you can find out who was on the review committee and approach them personally. Ask if they’d be comfortable sharing some feedback about your proposal. It can’t hurt to politely ask, and hopefully they will be willing to help you. The reason may be as simple as “We had four proposals on that topic, we only had room in our program for one, and we chose a different one because <fill in reason here>.”

Ask friends or social media contacts who have more experience presenting at conferences to review your proposal and give you feedback. Ideally you did this before you even submitted, but even so, perhaps they can speculate as to what might improve your proposal for the next conference.

I like to learn whether a topic I have in mind is engaging to enough people. Reach out to your local user group, community of practice or social media contacts to see if your topic has an audience.

Get a mentor

There are organizations that help newbie presenters put together proposals and conference sessions. Speak Easy has a terrific mentoring program. Reach out!

The Agile Alliance’s Women in Agile program is holding a half day session, open to everyone, the day before Agile 2017. They are offering mentoring to anyone who wants to submit a lightning talk for the closing keynote of the day. Seize this opportunity, or others like it.

Ask your friends or social media friends who already present at conferences if they will mentor you. The software community is generally a caring and giving place. Don’t be afraid to ask for help!


I pair for all my conference sessions. I try to pair with a newbie presenter to give them experience and help them build confidence. Even pairing with a newbie is a huge help to me. We have twice the ideas, twice the capability, and twice the discipline. I don’t want to let my pair down, so I put more effort into preparing and practicing for the session. We can come up with terrific exercises and make the session a much better learning experience for participants than if I were to do it all on my own. Pairing is more fun, too!

Improve your presenting skills

There is a lot of research into how we learn and how we can help our fellow humans learn. For me, reading Sharon Bowman’s Training from the Back of the Room transformed my ability to create a great learning experience for participants. Presenting is a lot of work. You’ll need a lot of prep time to create the right visual aids, exercises and other activities, and to rehearse your content. The hard work pays off.

Improve your proposal skills

There’s a lot of advice out there for improving your proposals. Natalie Warnert has some great tips on her blog. Ryan Ripley’s Agile for Humans podcast has a helpful episode on improving conference submissions.

One of the best ways to get insight into what makes a compelling proposal is to volunteer for conference program review committees. They all need help – go offer!

Don’t give up

I started submitting to conferences back in the 90s when you had to send a whole paper on your topic along with your abstract. After several rejections, I was talking to a technical writer where I worked at the time. She offered to edit my abstract and paper before I submitted it to another conference. Wow, professional writing! Not only was my paper accepted, I won 2nd prize for best paper. I have depended on professional editors for my writing ever since! If you don’t know any professional writers, seek out your friends who have good written communication skills and ask them to edit your proposal.

I present at conferences, despite that being WAY out of my comfort zone, because I like to go learn stuff and meet people. It’s worth all the effort. Keep trying! And please, let me know if I can help.

The post Learning from rejection: Getting proposals accepted appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Extend Dynatrace with custom monitoring plugins

Have a unique technology or custom application that Dynatrace doesn’t monitor out-of-the-box? No problem! We’re proud to announce the beta release of custom monitoring plugins for Dynatrace. Now you can build your own plugins to serve your unique monitoring needs.

With custom monitoring plugins you can:

  • Create and deploy new plugins that support your organization’s unique monitoring needs.
  • Enjoy total flexibility, decide what you need and how the results will be displayed.
  • Use the power of Dynatrace AI—your custom alerts are correlated and included in root cause analysis.
  • Use our Python SDK, which is equipped with advanced troubleshooting capabilities.
  • Take advantage of custom metrics for processes. Custom metrics are displayed alongside the standard set of OneAgent performance metrics.
Build your custom plugin

Custom plugins can be created for any process that exposes an interface, such as processes that are served over HTTP (for example, databases, applications, and load balancers). To begin, you need to create some Python code and write a JSON file that describes your metrics and how you want to display them. For complete instructions and examples, see How to write your first OneAgent plugin.

To download the Dynatrace SDK

  1. Go to Settings > Monitoring > Monitored technologies and open the Custom plugins – beta tab.
  2. Click the Download SDK button.
    Custom plugins
Upload your custom plugin

Once you’ve downloaded the Dynatrace SDK (see above) and built your new plugin (as explained in How to write your first OneAgent plugin) it’s time to upload your plugin to your Dynatrace environment.

  1. Go to Settings > Monitoring > Monitored technologies and open the Custom plugins – beta tab.
  2. Click the Upload plugin button.
  3. Select the ZIP file archive that contains your plugin’s Python and JSON files.
  4. Once successfully uploaded, your plugin will appear in the list on the Custom plugins – beta tab.

Note: Alternatively, the OneAgent upload plugin command-line tool can be used to perform the upload.

Note: If you make changes to your plugin in the future, remember to upload the updated plugin’s ZIP archive to Dynatrace.

View custom plugin metrics

To verify that your new custom plugin works, navigate to your application’s Host page and click the process you’re working on (see example Python process below).
Custom plugins

Your custom metrics will appear on the Further details tab (see Uncategorized metrics example below).

Custom plugins

Visual display of custom metrics

There are many options for the visual display of custom metrics, including the following:

  • Presentation of metric data on either the associated Process details page or the process’ Further details tab.
  • Grouping charts with tabs.
  • Presentation of chart dimensions beneath each chart.
  • Chart titles, descriptions, and more.

Ready to start writing your own plugin? Have a look at these useful plugin examples.

The post Extend Dynatrace with custom monitoring plugins appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies
