
Feed aggregator

Geek Choice Awards 2014

RebelLabs has started the annual Geek Choice Awards, and Jenkins was one of the 10 winners. See the page where they talk about Jenkins.

My favorite part is, to quote, "Jenkins has an almost laughably dominant position in the CI server segment", and "With 70% of the CI market on lockdown and showing an increasing rate of plugin development, Jenkins is undoubtably the most popular way to go with CI servers."

If you want to read more about it and the other nine technologies that won, they have produced a beautifully formatted PDF for you to read.

Categories: Open Source

Sign Up for the First-Ever Appium Roadshow on August 20th in New York City

Sauce Labs - 8 hours 29 min ago

We don’t know if you heard, but mobile is kind of a big deal.

Naturally, Appium – the only open source, cross-platform test automation tool for native, hybrid, and mobile web apps – emerged out of the need to Test All The (Mobile) Things.  Last May, battle-tested Appium 1.0 was released, and now this Appium show is hitting the road!

Details and ticket links below. Hope to see you in New York!


Sign Up for the First-Ever Appium Roadshow on August 20th

Appium Roadshow – NYC is a two-part, day-long event held on Wednesday, August 20 at Projective Space – LES in Manhattan’s Lower East Side.

Part 1 - Appium in the Wild

8:30 AM – 1:00 PM – Free

The morning session will showcase presentations from Gilt Groupe, Sharecare, Softcrylic, and Sauce Labs. Topics will cover real-world examples, lessons learned, and best practices in mobile app test automation using Appium. Featured speakers include:

  • Matthew Edwards – Mobile Automation Lead, Aquent
  • Daniel Gempesaw – Software Testing Architect, Sharecare
  • Matt Isaacs – Engineer, Gilt Groupe
  • Jonathan Lipps – Director of Ecosystem and Integrations, Sauce Labs
  • Sundar Sritharan – Delivery Manager, Softcrylic

This event is free. Breakfast and lunch included. Reserve your seat now – register here.

Part 2 – Appium Workshop

1:30 PM – 5:30 PM – $100

Matthew Edwards, a leading contributor to the Appium project, will provide a hands-on workshop to help you kick-start your Appium tests. He’ll walk you through setting up the environment needed for native iOS and Android automation with Ruby. You’ll then download and configure the necessary components to enable test writing. Finally, Matthew will demonstrate how to spin up an Appium server and run a test.

This event is limited to just 40 participants. Reserve your seat now – register here.


Categories: Companies

Outnumbered, Again

Sonatype Blog - 8 hours 53 min ago
I remember it clearly. Sitting down for breakfast, I opened the Sydney Morning Herald to see the latest headlines in Australia for the day. As I shuffled through the paper, I finally landed upon the Technology section and noticed pages and pages of “help wanted” ads.

To read more, visit our blog.
Categories: Companies

Connect With Your Favorite Testers With New Profile Features

uTest - 11 hours 19 min ago

Since the launch of the new uTest in early May, we haven’t stopped building new features and functionality that add value to your software testing lives. We know that you’re busy, and keeping on top of the latest news and information in the testing world can be a challenge. Therefore, we’re happy to launch two new features today: Follow Me and Activity Feed.

The Follow Me feature is located on all uTester profiles, allowing you to easily get updates from your favorite uTesters at the click of a button, viewing the Activity Feed of their latest contributions to blog posts, tool reviews, and more.

Following your favorite uTester is easy — just look for the blue Follow Me button in the lower right corner of their banner image. With one click, you will now receive updates every time that uTester posts a new comment, pens a blog post or University course, or reviews a new tool. Don’t know the profile URL of the person you want to follow? Find it here.

The Activity Feed is your one stop to see the latest updates from the people you’re following. Your activity feed is sortable by blog comment, blog post, University course, University comment, and tool review, so you can control what types of updates you see.


The Activity Feed page is also where you can view and manage your follower list. New users are added at the top of the list, so you can identify your newest followers. We’ve also made it as easy as possible to unfollow or block users within the same window.


Not sure where to start with the new Follow Me feature? Here’s a small sample of uTesters to get you started!

Remember, you can search for any uTest profile on the search page. Additionally, review more or contribute your own list of follow-worthy testers on the forums!

Categories: Companies

SSL Connectivity for All Central Repository Users Underway

Sonatype Blog - 14 hours 6 min ago
We’ve had quite a bit of public scrutiny recently over how we’ve chosen to provide SSL access to Central for the last two years. At Sonatype, we have a history of investments in the Maven Central community, all of which are focused on improving the quality of the contents, increasing reliability...

To read more, visit our blog.
Categories: Companies

KISS method for bug reporting

Testlio - Community of testers - 16 hours 3 min ago

It’s essential to keep bug reports as clean and easy to read as possible. I love to use the KISS method – Keep It Short, Simple, Stupid, Straightforward – name it as you like :)
The point is that reporting bugs should be simple for testers and easy to read for developers.

Keep these in mind for a bug report

Short – keep words and sentences short; use as many words as needed and as few as possible.
Simple – reporting bugs shouldn’t require difficult words and terms; on the contrary, use easy words and sentences.
Stupid – you are one tester among many; make it easy for everybody to understand what you mean.
Straightforward – get to the point!

Content in the bug report

At Testlio we help testers write proper bug reports through guidelines built into the report form. Here are some hints on how I write bug reports.

  • Bug title
  • The title has to be as specific as possible and should include the section where the problem occurred. It’s not best practice to use the actual result as the bug title. You could write the title after entering all the other information about the bug.
    [Profile] Can not change profile picture

  • Environments
  • Include all background details about your testing – app version, OS version, browser version, internet connection.

  • Steps to reproduce
  • It’s important to get to the core of the problem quickly. Use the ‘>’ symbol to show navigation from one step to the next.

    If logging in is incidental to the issue, there’s no reason to write it down as a separate step – it makes the report too long. I use “log in” as a step only if the problem is specific to logging in or out.
    Example how NOT TO write:
    Steps to reproduce:
    1. Open app
    2. Log in with valid credentials
    3. Tap to Settings
    4. On the displayed options select Profile
    5. Tap on Edit on the right top corner
    6. Tap on profile image
    7. Select any image
    8. Save changes

    Example how to write:
    Steps to reproduce:
    1. Settings > Profile > Edit
    2. Change Profile picture > Save

  • Expected Result
  • One sentence describing what you expected the functionality to do.

  • Actual Result
  • One sentence describing what the functionality did instead.

  • Attach a file
  • Regarding attachments, logs, and videos, I believe we are all familiar with the old saying – a picture is worth a thousand words. That is 100% true.
    A bug report should include a screenshot. If you are testing in a web browser, it’s best to take the screenshot with the URL visible; in a mobile app, just capture the whole view. It’s important information for the developer.
    A report about a crash has to include a screenshot of the crash report. Crashes are rarely reproducible, and the crash report has important details for developers.

Remember: if an issue is reproducible, then it’s fixable.
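To make the template concrete, here is a small Ruby sketch that assembles a report from the fields above. The helper and field names are my own invention for illustration, not a Testlio format:

```ruby
# Illustrative only: the helper and field names are hypothetical, not a
# Testlio API. It assembles a bug report in the shape described above,
# joining each navigation path with ' > ' as the post recommends.
def format_bug_report(title:, section:, environment:, steps:, expected:, actual:)
  numbered_steps = steps.each_with_index
                        .map { |path, i| "#{i + 1}. #{path.join(' > ')}" }
                        .join("\n")
  <<~REPORT
    [#{section}] #{title}

    Environment: #{environment}

    Steps to reproduce:
    #{numbered_steps}

    Expected result: #{expected}
    Actual result: #{actual}
  REPORT
end

report = format_bug_report(
  title:       'Can not change profile picture',
  section:     'Profile',
  environment: 'App 2.1.0, iOS 7.1, Wi-Fi',
  steps:       [%w[Settings Profile Edit], ['Change Profile picture', 'Save']],
  expected:    'Profile picture is updated',
  actual:      'Old picture is still shown'
)
puts report
```

Everything a developer needs sits in a dozen short lines, which is the whole point of KISS.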

Written by Kristi – Account Manager at Testlio

Categories: Companies

StormRunner Load brings you fast and simple performance testing on demand

HP LoadRunner and Performance Center Blog - Wed, 07/30/2014 - 05:23

When you think of the word “cloud”, chances are that you don’t immediately think of weather. (This is an industry blog, after all.) Your mind most likely turns to the ways you can save money by moving to the cloud.


Now what do you think about when I say the word “storm”? (Now I bet you are thinking about weather.) As of July 24, my hope is that you will think of performance testing when I say “storm”. Keep reading to find out why you should look to the cloud for the latest in performance testing.





Categories: Companies

Release Managers Risking Irrelevancy

IBM UrbanCode - Release And Deploy - Wed, 07/30/2014 - 00:42

As Agile breaks through the WaterScrumFall pattern – rapid development wrapped in slow project initiation and release cycles – to actually deliver more frequently, Release Managers have been put in something of a bind. Change is flowing at them faster, application change requests are being accepted later in the cycle, and dependencies between applications are only growing.

The risk they face is responding to this by trying to hold back the flood. They can try to use their authority to slow things back down, but that puts them on the wrong side of what the business wants (more stuff, faster). Or they can look to the experience of Project Managers (and the Project Management Institute), who ranged from skeptical to outright hostile toward Agile in the years after it burst onto the scene in the early 2000s. Project managers were often displaced and worked around by Agile teams, and as whole organizations went Agile, project managers lost out. More recently, the PMI is offering Agile-centric training, and today one of the banners on their website proclaims “Embrace the speed of change and win!”

Release management shouldn’t follow that path to exile and back. While Release Managers are charged with protecting production first and foremost in most organizations, they also need to facilitate the rapid change that will help the business win. Rather than fight Agile, they should embrace DevOps. Sitting at the hub between development and operations, release managers are well positioned (and trusted) to bridge these groups and help bring them together in the name of quality at speed.

Now, if your organization is moving faster than it is capable of doing safely, the brakes may need to be applied. But when you slow things down, offer a plan for speeding up towards continuous delivery. You’re going to look for:

  • Fewer sign-offs and more automatically enforced quality gates
  • Better visibility into dependencies between applications (and their component pieces)
  • Fewer spreadsheets
  • More automation in testing, deployment, and provisioning
  • More collaboration between development, testers, security teams and ops

The business is going to want more change faster. Release Managers have a choice. They can fight this shift and end up getting run over, or they can take a leadership role in the transformation.

Categories: Companies

Configuration Manager 2012 Client Actions

Configuration Manager 2012 Client Actions can be run independently of the schedules configured in the Configuration Manager console, through Control Panel > Configuration Manager on the client machine.
1) Application Deployment Evaluation Cycle: This cycle applies to software deployments (applications). The action re-evaluates the requirement rules for all deployments and makes sure the application is installed on the computer. The default schedule is every 7 days.
2) Discovery Data Collection Cycle: This action can be considered the Heartbeat Discovery cycle; it resends the client information to the site, keeping the client record active. It is also responsible for submitting a client's installation status to its assigned site (Status: Yes). If you are migrating the client from SP1 to R2, or from R2 to CU1, it takes time for the client version to update in the console; that update is carried out by this cycle. Heartbeat Discovery actions are recorded on the client in InventoryAgent.log. Computers accidentally deleted from the ConfigMgr console will automatically "come back" if they are still active on the network: wait for the next heartbeat inventory cycle, run the Discovery Data Collection Cycle manually, or use a custom script. Refer to this link for more information about what is sent back.
3) File Collection Cycle: This action searches for the specific files you have defined in the client agent settings (Software Inventory > Collect Files). If the software inventory client agent finds a file that should be collected, the file is attached to the inventory file and sent to the site. This differs from software inventory in that it actually sends the file to the site, so that it can later be viewed using Resource Explorer. The site server collects the five most recently changed versions of collected files and stores them in the \Inboxes\\Filecol directory. A file is not collected again if it has not changed since the last software inventory ran. Files larger than 20 MB are not collected by software inventory. Maximum size for all collected files (KB) in the Configure Client Setting dialog box displays the maximum size for all collected files; when this size is reached, file collection stops. Any files already collected are retained and sent to the site.
4) Hardware Inventory Cycle: The first and very important action for sending client inventory information; this is where most troubleshooting time is spent when a client has not reported inventory in X days. Many folks think hardware inventory only collects information about hardware, but it is more than that: it inventories add/remove programs, OS info, RAM, disk, and many other things. Hardware inventory is WMI inventory; it collects information from WMI based on the settings you defined in client agent settings > Hardware Inventory. The ConfigMgr client collects only the information you have selected/customized in the client agent settings and sends it to the server. Hardware inventory activity is logged in InventoryAgent.log.
5) ID MIF Collection Cycle: Management Information Format (MIF) files can be used to extend the hardware inventory information collected from clients by the Configuration Manager 2007 hardware inventory client agent. During hardware inventory, the information stored in MIF files is added to the client inventory report and stored in the site database, where you can use the data in the same ways you use default client inventory data. Two kinds of MIF files can be used when performing client hardware inventories: NOIDMIF and IDMIF. By default, NOIDMIF and IDMIF file information is not inventoried; to inventory it, NOIDMIF and IDMIF collection must be enabled. You can enable one or both types of MIF file collection on the MIF Collection tab of the hardware inventory client agent properties. For more information, see Hardware Inventory Client Agent Properties: MIF Collection Tab.
6) Machine Policy Retrieval and Evaluation Cycle: This action downloads the policies assigned to the client computer – anything you assign to a collection (group of computers), such as client agent settings or application deployments. It is normally triggered on the schedule defined in client agent settings (Policy polling interval (minutes)). Results are logged in PolicyAgent.log, PolicyEvaluator.log, and PolicyAgentProvider.log.
7) Software Inventory Cycle: Unlike hardware inventory, software inventory collects information about file system data and file properties, such as .EXE files. You can customize which executable files are inventoried, which allows admins to report on software inventory. When this action runs, it inventories the information in the file headers of the inventoried files and sends it to the site. This information is logged in InventoryAgent.log on the client. If you are experiencing slow software inventory, refer to this link. What is the difference between hardware and software inventory? Hardware inventory uses WMI to get information about the computer; software inventory works on files to get information from the file header.
8) Software Metering Usage Report Cycle: The name says it: the ConfigMgr client monitors and collects software usage data for the software metering rules that are enabled on the site. Client computers evaluate these rules during the machine policy interval, collect the metering data, and send it to the site.
9) Software Updates Deployment Evaluation Cycle: This action initiates a scan for software update compliance. It evaluates the state of new and existing deployments and their associated software updates. This includes scanning for software updates compliance, but it may not always catch scan results for the latest updates. It is a forced online scan and requires that the WSUS server be available for the action to succeed. Results are logged in a couple of log files on the client: scanagent.log (scan requests for software updates), UpdatesStore.log (status of patches, e.g. missing or installed), UpdatesDeployment.log (update activation, evaluation, and enforcement, plus reboot notifications), etc. More info about software update compliance is available.
10) Software Update Scan Cycle: This action scans for software update compliance for updates that are new since the last scan. Unlike the Software Updates Deployment Evaluation Cycle, it does not evaluate deployment policies. It is a forced online scan and requires that the WSUS server be available for the action to succeed. Results are logged in WUAHandler.log (whether the scan succeeded), UpdatesStore.log (status of patches, e.g. missing or installed), and scanagent.log (scan requests for software updates), etc.
11) User Policy Retrieval and Evaluation Cycle: This action is similar to the Machine Policy Retrieval and Evaluation Cycle, but it initiates an ad-hoc user policy retrieval from the client outside its scheduled polling interval. Results are logged in PolicyAgent.log, PolicyEvaluator.log, and PolicyAgentProvider.log.
12) Windows Installer Source List Update Cycle: This action is also very important when installing MSI applications. It causes the Product Source Update Manager to complete a full update cycle. When you install an application using Windows Installer, the application tries to return to the path it was installed from whenever it needs to install new components, repair itself, or update itself. This location is called the Windows Installer source location. The Windows Installer Source Location Manager can automatically search ConfigMgr 2012 distribution points for the source files, even if the application was not originally installed from a distribution point.
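All of these actions can also be triggered from a script rather than the Control Panel applet, by calling TriggerSchedule on the SMS_Client WMI class in root\ccm. The Ruby sketch below (using the Windows-only win32ole standard library) is illustrative, not official tooling; the schedule GUIDs shown are the commonly documented ones, so verify them for your ConfigMgr version before relying on them.

```ruby
# A sketch of triggering ConfigMgr client actions via WMI -- the same
# SMS_Client.TriggerSchedule call the Control Panel applet buttons make.
# The GUIDs below are the commonly documented schedule IDs; verify them
# against Microsoft's documentation for your site before use.
SCHEDULE_IDS = {
  hardware_inventory:           '{00000000-0000-0000-0000-000000000001}',
  software_inventory:           '{00000000-0000-0000-0000-000000000002}',
  discovery_data_collection:    '{00000000-0000-0000-0000-000000000003}',
  machine_policy_retrieval:     '{00000000-0000-0000-0000-000000000021}',
  update_deployment_evaluation: '{00000000-0000-0000-0000-000000000108}',
  software_update_scan:         '{00000000-0000-0000-0000-000000000113}'
}.freeze

def trigger_client_action(action)
  schedule_id = SCHEDULE_IDS.fetch(action)
  require 'win32ole'  # Windows-only; ships with Ruby on Windows
  sms_client = WIN32OLE.connect('winmgmts:\\\\.\\root\\ccm:SMS_Client')
  sms_client.TriggerSchedule(schedule_id)
end

# On a managed Windows client you would run, for example:
#   trigger_client_action(:machine_policy_retrieval)
```

After a trigger, the corresponding log named above (PolicyAgent.log, InventoryAgent.log, and so on) should show the activity within a few minutes.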
Categories: Blogs

After Five Years of Debate…Tester Certifications Still a Touchy Subject

uTest - Tue, 07/29/2014 - 21:24

Mention certifications to testers and you’ll run the gamut of responses, from those who have found valuable experience and advancement in their careers by being certified, to those who preach that a certification is no substitute for cold, hard experience.

We all know how testing luminary James Bach feels about them, going so far as to say that “The ISTQB and similar programs require your stupidity and your fear in order to survive,” and that “dopey, frightened, lazy people will continue to use them in hiring, just as they have for years.” Suffice it to say that James won’t be sending the ISTQB a card this holiday season.

Rarely has a topic been as polarizing and heated in discussion: five years after the initial thread on the subject was launched in our uTest Community, hundreds of responses have been logged, along with sequel/knockoff threads (sequels that were actually still engaging, and not superfluous like A Good Day to Die Hard).

Here are just a few of our favorite viewpoints from these discussions:

Are certifications bad? Not necessarily.
Are certifications that base their exams on multiple choice bad? Most likely.
Do certifications meet the needs of my organization? Perhaps.
Is there even a best practice in Software Testing? Not likely.
Do certifications tell you how good you are as a tester? Hell no.
(Glory L.)

IMO, the Foundation cert does teach someone the basics of how to test (in addition to what testing is, where it fits in the SDLC, etc). The Advanced level certainly expands on how to test. And yes, it is my belief that these certs advance a tester in their skills and ability – the study material alone should be on any tester’s reading list.
(Shane D.)

I have no use for certifications. I think they are bunk. My last full-time job had a number of “certified” people in various roles that had no idea what they were doing. Basically they were able to pass a test and get a piece of paper. But it didn’t make them any better at their jobs. I would put more time into learning rather than trying to pass a test. Learn because you want to, not because you have to. You’ll be better for it.
(John K.)

I just recently sat for my CTFL certification, not because I saw the value in it, but I wasn’t getting interviews without it. I have only been testing software for 5 years, and primarily for a single company in a niche market. Therefore though I have the respect and ‘backing’ of people at my company, and in my particular industry, but was having great difficulty breaking into a new field.
(Derek C.)

Are ‘certifications’ always a dirty word when it comes to testing, or is there a time and a place for certifications, especially at the foundation level where it’s important for testers to have a baseline for core concepts? We’d love to hear from testers in the Comments below.


Categories: Companies

You are invited to a party in the cloud, a public beta of StormRunner Load

HP LoadRunner and Performance Center Blog - Tue, 07/29/2014 - 17:58

We have a BIG Announcement!


This is your opportunity to get in on the ground floor of a revolution.



And as you know, revolutions often start with the sound of thunder in the distance.



Prepare yourself…a storm is coming!



Categories: Companies

Today’s “Hyperconnected” Economy Creates an Agility Imperative for Retailers

We have been outnumbered for years now, with little hope of ever catching up. Not by competing nations or companies, but by billions of devices we can hold in our hands. The number of internet-connected devices first outnumbered the human population in 2008, and their numbers have been growing much faster than the human population […]
Categories: Companies

Appium Bootcamp – Chapter 3: Interrogating Your App

Sauce Labs - Tue, 07/29/2014 - 17:30

This is the third post in a series called Appium Bootcamp by noted Selenium expert Dave Haeffner. Click the links to read the first and second posts.

Dave recently immersed himself in the open source Appium project and collaborated with leading Appium contributor Matthew Edwards to bring us this material. Appium Bootcamp is for those who are brand new to mobile test automation with Appium. No familiarity with Selenium is required, although it may be useful. This is the third of eight posts; a new post will be released each week.

Writing automated scripts to drive an app in Appium is very similar to how it’s done in Selenium. We first need to choose a locator, and use it to find an element. We can then perform an action against that element.

In Appium, there are two approaches to interrogate an app to find the best locators to work with: through the Appium Console, or through an inspector (e.g., Appium Inspector, uiautomatorviewer, or selendroid inspector).

Let’s step through how to use each of them to decompose and understand your app.

Using the Appium Console

Assuming you’ve followed along with the last two posts, you should have everything set up and ready to run.

Go ahead and start up your Appium server (by clicking Launch in the Appium GUI) and start the Appium Ruby Console (by running arc in a terminal window in the same directory as your appium.txt file). After it loads, you will see an emulator window of your app that you can interact with, as well as an interactive prompt for issuing commands to Appium.

The interactive prompt is where we’ll want to focus. It offers a host of readily available commands to quickly give us insight into the elements that make up the user interface of the app. This will help us easily identify the correct locators to automate our test actions against.

The first command you’ll want to know about is page. It gives you access to every element in the app. Run by itself, it outputs all of the elements in the app, which can be a bit unwieldy. Alternatively, you can specify additional arguments to filter the output down to a subset of elements. From there, more information is available to further refine your results.

Let’s step through some examples of that and more for both iOS and Android.

An iOS Example

To get a quick bird’s-eye view of our iOS app structure, let’s get a list of the various element classes available. With the page_class command we can do just that.

[1] pry(main)> page_class
get /source
13x UIAStaticText
12x UIATableCell
4x UIAElement
2x UIAWindow
1x UIATableView
1x UIANavigationBar
1x UIAStatusBar
1x UIAApplication

UIAStaticText and all of the others are the specific class names for types of elements in iOS. You can see reference documentation for UIAStaticText here. If you want to see the others, go here.

With the page command we can specify a class name and see all of the elements of that type. When specifying the element class name, we can either specify it as a string or a symbol (e.g., 'UIAStaticText' or :UIAStaticText).

[2] pry(main)> page :UIAStaticText
get /context
post /execute
    :script => "UIATarget.localTarget().frontMostApp().windows()[0].getTree()"
   name, label, value: UICatalog
   name, label: Buttons, Various uses of UIButton
   id: ButtonsTitle   => Buttons
       ButtonsExplain => Various uses of UIButton
   name, label: Controls, Various uses of UIControl
   id: ControlsExplain => Various uses of UIControl
       ControlsTitle   => Controls
   name, label: TextFields, Uses of UITextField
   id: TextFieldExplain => Uses of UITextField
       TextFieldTitle   => TextFields

Note the get and post lines (just after we issue the command but before the element list). This is the network traffic happening behind the scenes to get us this information from Appium. The response to post /execute has a script string; in it we can see which window this element lives in (e.g., windows()[0]).

This is important because iOS has the concept of windows, and some elements may not appear in the console output even if they’re visible to the user in the app. In that case, you could list the elements in other windows (e.g., page window: 1). 0 is generally where your app’s elements live, whereas 1 is where the system UI lives. This will come in handy when dealing with alerts.

Finding Elements

Within each element of the list, notice the properties — things like name, label, value, and id. This is the kind of information we will want to reference in order to interact with the app.

Let’s take the first element for example.

   name, label, value: UICatalog

In order to find this element and interact with it, we can search for it with a couple of different commands: find, text, or text_exact.

> find('UICatalog')
> text('UICatalog')
> text_exact('UICatalog')

We’ll know that we successfully found an element when we see a Selenium::WebDriver::Element object returned.

It’s worth noting that in the underlying gem that enables this REPL functionality, if we end our command with a semi-colon it will not show us the return object.

> find('UICatalog')
# displays returned value

> find('UICatalog');
# returned value not displayed

To verify that we have the element we expect, let’s access the name attribute for it.

> find('UICatalog').name

Finding Elements by ID

A better approach to find an element would be to reference its id, since it is less likely to change than the text of the element.

   name, label: Buttons, Various uses of UIButton
   id: ButtonsTitle   => Buttons
       ButtonsExplain => Various uses of UIButton

On this element, there are some IDs we can reference. To find it using these IDs we can use the id command. And to confirm that it’s the element we expect, we can ask it for its name attribute.

> id('ButtonsTitle').name
"Buttons, Various uses of UIButton"

For a more thorough walk through and explanation of these commands (and some additional ones) go here. For a full list of available commands go here.

An Android Example

To get a quick bird’s-eye view of our Android app structure, let’s get a list of the various element classes available. With the page_class command we can do just that.

[1] pry(main)> page_class
get /source
12x android.widget.TextView
1x android.view.View
1x android.widget.ListView
1x android.widget.FrameLayout
1x hierarchy

android.widget.TextView and all of the others are the specific class names for types of elements in Android. You can see reference documentation for TextView here. If you want to see the others, simply do a Google search for the full class name.

With the page command we can specify a class name and see all of the elements for that type. When specifying the element class name, we can specify it as a string (e.g., 'android.widget.TextView').

[2] pry(main)> page 'android.widget.TextView'
get /source
post /appium/app/strings

android.widget.TextView (0)
  text: API Demos
  id: android:id/action_bar_title
  strings.xml: activity_sample_code

android.widget.TextView (1)
  text, desc: Accessibility
  id: android:id/text1

android.widget.TextView (2)
  text, desc: Animation
  id: android:id/text1

Note the get and post lines (just after we issue the command but before the element list). This is the network traffic happening behind the scenes to get us this information from Appium. get /source downloads the source for the current view, and post /appium/app/strings gets the app’s strings. These app strings will come in handy soon, since they are used for some of the IDs on our app’s elements, which helps us locate them more easily.

Finding Elements

Within each element of the list, notice the properties — things like text and id. This is the kind of information we will want to reference in order to interact with the app.

Let’s take the first element for example.

android.widget.TextView (0)
  text: API Demos
  id: android:id/action_bar_title
  strings.xml: activity_sample_code

In order to find that element and interact with it, we can search for it by text or by id.

> text('API Demos')
> id('android:id/action_bar_title')

We’ll know that we successfully found an element when we see a Selenium::WebDriver::Element object returned.

It’s worth noting that in the underlying gem that enables this REPL functionality, if we end our command with a semi-colon it will not show us the return object.

> text('API Demos')
# displays returned value

> text('API Demos');
# returned value not displayed

To verify we’ve found the element we expect, let’s access the name attribute for it.

> text('API Demos').name
"API Demos"

Finding Elements by ID

A better approach to find an element would be to reference its ID, since it is less likely to change than the text of the element.

In Android, there are two types of IDs you can search with: a resource ID and a strings.xml ID. Resource IDs are best, but strings.xml IDs are a good runner-up.

android.widget.TextView (10)
  text, desc: Text
  id: android:id/text1
  strings.xml: autocomplete_3_button_7

This element has one of each. Let’s search using each with the id command.

# resource ID
> id('android:id/text1')

# strings.xml
> id('autocomplete_3_button_7')

You can see a more thorough walkthrough of these commands here. For a full list of available commands, go here.

Ending the session

In order to end the console session, input the x command. This will cleanly quit things for you. If a session is not ended properly, then Appium will think it’s still in progress and block all future sessions from working. If that happens, then you need to restart the Appium server by clicking Stop and then Launch in the Appium GUI.

x only works within the console. In our test scripts, we will use driver.quit to kill the session.

Using An Inspector

With the Appium Ruby Console up and running, we also have access to the Appium Inspector. This is another great way to interrogate our app to find locators. Simply click the magnifying glass in the top-right hand corner of the Appium GUI (next to the Launch button) to open it. It will load in a new window.

Once it opens, you should see panes listing the elements in your app. Click on an item in the left-most pane to drill down into the elements within it. When you do, you should see the screenshot on the right-hand side of the window auto-update with a red highlight around the newly targeted element.

You can keep doing this until you find the specific element you want to target. The properties of the element will be outputted in the Details box on the bottom right-hand corner of the window.

It’s worth noting that while the inspector works well for iOS, there are some problem areas with it in Android at the moment. To that end, the Appium team encourages the use of uiautomatorviewer (a similar inspector tool provided by Google). For more info on how to set that up, read this.

For older Android devices and apps with webviews, you can use the selendroid inspector. For more information, go here.

There’s loads more functionality available in the inspector, but it’s outside the scope of this post. For more info I encourage you to play around with it and see what you can find out for yourself.


Now that we know how to locate elements in our app, we are ready to learn about automating some simple actions and putting them to use in our first test.

Read:  Chapter 1 | Chapter 2

About Dave Haeffner: Dave is a recent Appium convert and the author of Elemental Selenium (a free, once-weekly Selenium tip newsletter read by thousands of testing professionals) as well as The Selenium Guidebook (a step-by-step guide on how to use Selenium successfully). He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing, including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

Follow Dave on Twitter - @tourdedave

Categories: Companies

Continuous Integration for node.js with Jenkins

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Steven Christou, technical support engineer, CloudBees about a presentation given by Baruch Sadogursky, JFrog, at JUC Boston.

Fully automating a continuous integration system for Node.js, from development through testing to deployment on production servers, can be a challenge. Most Node.js developers are familiar with npm which, I learned, does not stand for "Node Package Manager" but is a recursive bacronym for "npm is not an acronym." In other words, it manages packages, each containing a program described by its package.json file. For a Java developer, an npm package is similar to a jar, and the npm registry is similar to Maven Central. What would happen if the main npm registry went down? At that moment, Node.js developers would be stuck waiting for the registry to return to normal status, or they could run their own private registry.
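As a minimal illustration of what such a package looks like (all names and versions here are made up), a package.json describes the package and the dependencies npm resolves from the registry:

```json
{
  "name": "demo-app",
  "version": "1.0.0",
  "description": "Illustrative package.json; values are examples only",
  "dependencies": {
    "express": "~4.8.0"
  }
}
```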

That sounds easier said than done, though. According to Sadogursky, the current size of the registry is 450.378 gigabytes of binaries. Out of all of those 450 gigabytes, how many of the packages are going to be used by your developers?

Artifactory is a repository manager that bridges the gap between developers and the npm registry: it acts as a proxy between your coders and Jenkins instances and the outside world. When I (a developer) require a new package and declare a new dependency in my code, Artifactory will pull the necessary package from the registry and make it available. After the code has been committed with the new dependency, Jenkins is then able to fetch the same package from Artifactory. In this scenario, if the registry ever goes down, testing with Jenkins will never halt, because Jenkins can still obtain the necessary dependencies from the Artifactory server.
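In practice, pointing clients at such a proxy is typically a one-line npm configuration change. A sketch of an .npmrc entry (the host and repository name are illustrative, not from the talk):

```ini
; Route all npm installs through an Artifactory remote repository,
; which caches packages from the public registry.
registry = http://artifactory.example.com/artifactory/api/npm/npm-remote/
```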

Building code through an Artifactory server also eliminates the need for users to check out and build their dependencies themselves, which would be time consuming. Dependencies could also end up in an unstable state if the build in my environment differs from other users' or from the Jenkins server. Another advantage is that Jenkins can record information about the packages that were used during the build.

Overall, using a package manager like Artifactory to act as a proxy between your Jenkins instance and the NPM registry is beneficial in order to maintain true continuous integration. Your developers and Jenkins instances would not be impacted by any downtime issues if the NPM repository is down or unavailable. Thus, adding an Artifactory server to manage package dependencies would help maintain continuous integration.

Steven Christou
Technical Support Engineer

Steven works on providing bug fixes to CloudBees customers for Jenkins, Jenkins plugins and Jenkins enterprise plugins. He has a great passion for software development and extensive experience with Hudson and Jenkins. Follow him on Twitter.
Categories: Companies

Three Ways for Testers to Take Their Careers to the Next Level

uTest - Tue, 07/29/2014 - 16:40

I had lunch recently with a few recruiters who asked me for referrals for performance testing roles. They had a number of open roles and could not find anyone suitable.

The discussion reminded me of possible ways for a tester to take his or her career to the next level. There are a few things that can be done.

Staying the Course and Improving Your Skills

The first is to continue doing what you know well and aim to become as good as possible. Most testers take this road and choose to learn mostly manual, black box testing.

One image that comes to mind for the tester who pursues this road is a small fish in a big bowl. Since the bowl is big, there are many other fish around, and competition for space and food is high. For testers who do just manual testing, there is a high level of competition for new jobs, rates are not that high (due to the size of the market), and demand fluctuates.

Taking the Less Popular Path

Another option for career advancement is to work on testing types that are less popular, like performance testing or test automation.

Moving from manual testing to performance testing or test automation usually requires a long period of training to become proficient. But after that, the tester can access a niche in the market that is less populated, with steady demand, roles that often cannot be filled, and pay much higher than manual testing.

These testers look like big fish in small bowls.

The Combination Approach

The last option for career development is a combination of the first two. These fish are very rare to find.

Alex Siminiuc is a uTest Community member who has been testing software applications since 2005…and enjoys it a lot. He lives in Vancouver, BC, and blogs occasionally at

Categories: Companies

Automated Testing With Cucumber JVM, Selenium & Mocha

Testing TV - Tue, 07/29/2014 - 16:26
This presentation provides an overview of behavior-driven development and test automation, which aided in the production of a Visualforce/JavaScript application for an enterprise client. Using Cucumber JVM, Selenium, Jenkins, and Git, the team was able to catch regression errors during development. It offers an overview of the solution used and how it […]
Categories: Blogs

Breaking Down Requirements in TestTrack

The Seapine View - Tue, 07/29/2014 - 11:30

We’ve made breaking down requirements much easier with TestTrack 2014.1. Now you have the ability to create requirements and tasks from another requirement with a single click. Let’s take a simple example, where you’re creating system specs from product requirements and then breaking down each system spec into a set of tasks for the team. Using Item Mapping Rules, you can quickly create the logic necessary to enable single-click creation of a task from a system specification.


To get started, go to Tools > Administration > Item Mapping Rules and click Add to create a new mapping rule. At the top of that dialog, you select the item types to map; here we're creating a Task from a Technical Specification.


There’s a default set of field mappings that will be created, but you’re free to change those however you like. Also, don’t skip over the options at the bottom. They might seem minor but they save a lot of frustration. So first, you’ll probably want to automatically add the task to the folder with the related specification. This keeps everything grouped together and makes managing the release schedule easier and more reliable. Second, be sure to select the appropriate link definition. Finally, you might want to turn off the prompt to save your users an extra click and avoid people making inappropriate changes to the linking.

When you're finished with configuration, save your new mapping and then open a Technical Specification. Here's what you'll see: notice the new Create Task … button in the bottom-left corner. When users click this button, a task will be created using the data from the technical spec, based on the mappings you set up.



Categories: Companies

Sharing Data from One Ranorex Module to Another

Ranorex - Tue, 07/29/2014 - 09:06

Sometimes it is necessary to access the value of a specific variable in more than one module of a test case.

An example of this would be any kind of converter: in one module the value to be converted is read in; in another module, this value is converted. Naturally, the converting module should use the value from the previous module.

This blog post will show how to do this step-by-step.

The Structure of the Ranorex Solution

As you can see in the screenshot above, the solution “Converter” created for demonstration purposes consists of four different modules. Two variables (varTemperatureCelsius and varTemperatureFahrenheit) should be shared between the modules “GetValue” and “ConvertValue”. The temperatures are read from a weather-website in the “GetValue”-module.

In the “ConvertValue”-module the Celsius-temperature is converted from Celsius to Fahrenheit using another website and compared to the Fahrenheit-temperature from the first website. The whole example is available for download here.

Step 1: Creating a Variable in Module 1

At the beginning an “Open Browser”-action is recorded in the first module. Then a new “Get Value”-action in the second module “GetValue” is added.

Note: Before initializing a new variable, a repository item representing the UI element that displays the temperature needs to be created. This can be done by using the “Track”-button in the Ranorex Repository or by using the Ranorex Spy.

The value of the current temperature is stored in a variable. It is going to be created by clicking on the drop-down menu below the heading “Variable” and choosing the option “As new variable…”.

A context menu will be opened which should look like this:

Here the desired name of the variable and (optionally) a default value can be defined.

Note: The default value is used if modules are running separately and not from the test suite view. It will also be used if a variable is unbound and the test is started from the test suite.

Afterwards, the repository item representing the value on the website and the attribute the "Get Value"-action should access are chosen. The attribute holding the given temperature value is "InnerText".

This step needs to be done for both Celsius and Fahrenheit values. In the upper right corner of the website the unit of the temperature can be changed.

At the end of the first step the “GetValue” module should look like this:

Note: The temperature value should not be used as an identification criterion! The test will only work if the temperature is the same as when it was initially identified. This can easily be changed by using the Path Editor in Ranorex Spy: uncheck the "innertext" attribute and check a different, more appropriate attribute, e.g. "class".

Step 2: Creating a Variable in Module 2

The step of creating a new variable needs to be repeated for every module the value from the first module should be used in. Here it is the module “ConvertValue” where a value is going to be converted from Celsius to Fahrenheit.

Firstly, the value from the previous module is used for the Celsius text field on the mentioned website. After that the result is validated in a user code action.
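The check performed in that user code action boils down to a simple formula. A minimal sketch of the logic (written in Ruby here for brevity; Ranorex user code actions are typically C# or VB.NET, and the sample values are illustrative):

```ruby
# Convert a Celsius reading to Fahrenheit and compare it against the
# Fahrenheit value captured from the first website.
def celsius_to_fahrenheit(celsius)
  celsius * 9.0 / 5 + 32
end

converted = celsius_to_fahrenheit(20)  # plays the role of varTemperatureCelsius
expected  = 68.0                       # plays the role of varTemperatureFahrenheit

puts converted == expected  # => true
```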

After recording and creating the needed variable, the module should look similar to this:

Note: For identification purposes, it would be easier if variables belonging together have the same name. However, this is only a recommendation and not a requirement.

Step 3: Connecting Variables in different Modules to one Parameter

In order to connect all needed variables to each other, a parameter needs to be created.

This is done by right-clicking the test case in the test suite view and clicking “Data Binding” in the context menu.

The test case properties pane will be opened:

Here it is necessary to add two rows in the “Parameters”- section of the window – one for each shared variable.

By clicking the drop down menu “Module Variable”, the variables associated with this parameter (“varTemperature(GetValue)” and “varTemperature(Convert Value)”) can be checked.

Finally, the test suite should look like this:

Testing the Solution

Now it is time to test the solution. This can be done by pressing “Run” in the test suite.

Note: If “Play” is clicked in one of the modules, the variables won’t be bound. In this case the default values of the variables are used.

The report file should look like this:


In this blog post you learned how to share variables from one module to another. There is one main concept which is always the same for every new module added to the test suite: Firstly a new variable in this specific module is created and then it is connected to a parameter of a test case.

Note: If a variable needs to be shared across test cases, it is almost the same procedure. The only difference is that a global parameter or a parameter in a parent test case is taken. A global parameter can be created in the test suite properties pane.

If something was unclear in this blog post, feel free to ask in the comment section or have a look at the following chapters in our user guide:


Categories: Companies

Updates Coming to Default Selenium and Chrome Versions on Sauce (August 2)

Sauce Labs - Tue, 07/29/2014 - 01:02

On Saturday, August 2nd, we will update our Selenium and Chrome default versions to meet current, stable implementations. This update affects users that run automated Selenium tests on Sauce.

Default versions of Selenium and Chrome are used only for tests that don’t have a specified browser version. Users who choose to assign Selenium and Chrome versions to their tests will remain unaffected.

Below you’ll find more details about the updates.


Currently the default Selenium version is 2.30.0. Following the update on August 2, the new default Selenium version will be 2.42.2. We advise you to test the new version (2.42.2) in advance using the following desired capability:

"selenium-version": "2.42.2"

If you run into any issues with the new default, note that you can continue using the previous version (2.30.0) after Saturday by setting the selenium-version desired capability shown below:

"selenium-version": "2.30.0"
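In a Ruby-based test, for example, this is just an entry in the desired-capabilities hash passed when the session is created (a sketch; the browser name is illustrative):

```ruby
# Desired capabilities pinning the Selenium version, per the
# "selenium-version" capability described above.
caps = {
  "browserName"      => "firefox",   # any browser you test with
  "selenium-version" => "2.30.0"     # stay on the previous default
}

puts caps["selenium-version"]
```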


Currently the default Chrome versions are Chrome 27 and Chromedriver 1. Following the update on August 2, the new default Chrome versions will be Chrome 35 and Chromedriver 2.10. We advise you to test the new versions (Chrome 35, Chromedriver 2.10) in advance using the following desired capabilities:

"browserName": "chrome"
"version": "35"

By requesting Chrome 35, Chromedriver 2.10 will be used automatically.

If you run into any issues with the new defaults, you can continue using the previous versions (Chrome 27, Chromedriver 1) after Saturday by setting the "version" desired capability shown below:

"browserName": "chrome"
"version": "27"

Troubleshooting Issues

If you see any issues after moving your tests to these new versions, we suggest checking for known issues or contacting the Chromedriver and Selenium user groups.

Happy testing!

Categories: Companies
