
Feed aggregator

Release Managers Risking Irrelevancy

IBM UrbanCode - Release And Deploy - Wed, 07/30/2014 - 00:42

As Agile breaks through the WaterScrumFall pattern (rapid development wrapped in slow project initiation and release cycles) and teams actually deliver more frequently, Release Managers have been put in something of a bind. Change is flowing at them faster, application change requests are being accepted later in the cycle, and dependencies between applications are only growing.

The risk they face is responding to this by trying to hold back the flood. They can try to use their authority to slow things back down, but that puts them on the wrong side of what the business wants (more stuff, faster). They can also look to the experience of Project Managers (and the Project Management Institute), who ranged from skeptical to outright hostile toward Agile in the years after it burst onto the scene in the early 2000s. Project managers were often displaced and worked around by Agile teams, and as whole organizations went Agile, project managers lost out. More recently, the PMI has been offering Agile-centric training, and today one of the banners on its website proclaims “Embrace the speed of change and win!”

Release management shouldn’t follow that path to exile and back. While Release Managers are charged first and foremost with protecting production in most organizations, they also need to facilitate the rapid change that will help the business win. Rather than fight Agile, they should embrace DevOps. Sitting at the hub between development and operations, release managers are well positioned (and trusted) to bridge these groups and help bring them together in the name of quality at speed.

Now, if your organization is moving faster than it is capable of doing safely, the brakes may need to be applied. But when you slow things down, offer a plan for speeding up towards continuous delivery. You’re going to look for:

  • Fewer sign-offs and more automatically enforced quality gates
  • Better visibility into dependencies between applications (and their component pieces)
  • Fewer spreadsheets
  • More automation in testing, deployment, and provisioning
  • More collaboration among developers, testers, security teams, and ops

The business is going to want more change faster. Release Managers have a choice. They can fight this shift and end up getting run over, or they can take a leadership role in the transformation.

Categories: Companies

Configuration Manager 2012 Client Actions

Configuration Manager 2012 client actions can be run on demand, independently of the schedules configured in the Configuration Manager console, from Control Panel > Configuration Manager on the client machine.
1) Application Deployment Evaluation Cycle: This cycle applies to software deployments (applications). The action re-evaluates the requirement rules for all deployments and makes sure required applications are installed on the computer. By default it runs every 7 days.
2) Discovery Data Collection Cycle: This action can be considered the Heartbeat Discovery cycle; it resends the client information to the site and keeps the client record Active. It is also responsible for submitting a client's installation status to its assigned site (Status: Yes). If you are migrating the client from SP1 to R2, or from R2 to CU1, it takes time for the client version to update in the console, and that update is carried out by this cycle. Heartbeat Discovery actions are recorded on the client in InventoryAgent.log. Computers accidentally deleted from the ConfigMgr console will automatically "come back" if they are still active on the network: wait for the next heartbeat inventory cycle, run the Discovery Data Collection Cycle manually, or use a custom script. Refer to this link for more information about what is sent back.
3) File Collection Cycle: This action searches for the specific files you have defined in the client agent settings (Software Inventory > Collect Files). If the software inventory client agent finds a file that should be collected, the file is attached to the inventory file and sent to the site. This differs from software inventory in that it actually sends the file to the site, where it can later be viewed using Resource Explorer. The site server collects the five most recently changed versions of collected files and stores them in the \Inboxes\Sinv.box\Filecol directory. A file is not collected again if it has not changed since the last software inventory run, and files larger than 20 MB are not collected. "Maximum size for all collected files (KB)" in the Configure Client Setting dialog box displays the maximum size for all collected files; when this size is reached, file collection stops, but any files already collected are retained and sent to the site.
4) Hardware Inventory Cycle: The first and most important action for sending client inventory information; it is also where most troubleshooting time goes when a client has not reported inventory for X days. Many folks think hardware inventory only gathers information about hardware, but it is more than that: it inventories Add/Remove Programs entries, OS information, RAM, disk, and much more. Hardware inventory is WMI inventory; it collects information from WMI based on the settings you defined under Client Agent Settings > Hardware Inventory. The ConfigMgr client collects only the information you have selected/customized in the client agent settings and sends it to the server. Hardware inventory activity is logged in InventoryAgent.log.
5) ID MIF Collection Cycle: Management Information Format (MIF) files can be used to extend the hardware inventory information collected from clients by the Configuration Manager 2007 hardware inventory client agent. During hardware inventory, the information stored in MIF files is added to the client inventory report and stored in the site database, where you can use the data in the same ways that you use default client inventory data. Two types of MIF file can be used when performing client hardware inventories: NOIDMIF and IDMIF. By default, NOIDMIF and IDMIF file information is not inventoried by Configuration Manager 2007 sites; to inventory it, NOIDMIF and IDMIF collection must be enabled. You can enable one or both types of MIF file collection on the MIF Collection tab of the hardware inventory client agent properties. For more information, see Hardware Inventory Client Agent Properties: MIF Collection Tab.
6) Machine Policy Retrieval and Evaluation Cycle: This action downloads the policies assigned to the client computer: anything you assign to a collection (a group of computers), such as client agent settings or applications targeted for deployment. The action is triggered on a schedule defined in the client agent settings (Policy polling interval, in minutes). Results are logged to PolicyAgent.log, PolicyEvaluator.log, and PolicyAgentProvider.log.
7) Software Inventory Cycle: Unlike hardware inventory, software inventory collects information about file system data and file properties, such as .EXE files. You can customize which executable files are inventoried, which allows admins to report on software inventory. When this action runs, it inventories the information in the file headers of the inventoried files and sends it to the site. This activity is logged in InventoryAgent.log on the client. If you are experiencing slow software inventory issues, refer to this link. What is the difference between hardware and software inventory?
  * Hardware inventory uses WMI to get information about the computer
  * Software inventory reads file headers to get information about files
8) Software Metering Usage Report Cycle: The name says it: metering. The ConfigMgr client monitors and collects software usage data for the software metering rules that are enabled on the site. Client computers evaluate these rules during the machine policy interval, collect metering data, and send it to the site.
9) Software Updates Deployment Evaluation Cycle: This action initiates a scan for software update compliance and evaluates the state of new and existing deployments and their associated software updates. It includes scanning for software update compliance, but may not always catch scan results for the latest updates. This is a forced online scan and requires that the WSUS server be available for the action to succeed. Results are logged to a couple of log files on the client: ScanAgent.log (scan requests for software updates), UpdatesStore.log (status of patches, such as missing or installed), and UpdatesDeployment.log (update activation, evaluation, and enforcement, plus reboot notifications). More info about software update compliance
10) Software Update Scan Cycle: This action scans for software update compliance for updates that are new since the last scan. Unlike the Software Updates Deployment Evaluation Cycle, it does not evaluate deployment policies. This is a forced online scan and requires that the WSUS server be available for the action to succeed. Results are logged to WUAHandler.log (whether the scan succeeded), UpdatesStore.log (status of patches, such as missing or installed), and ScanAgent.log (scan requests for software updates).
11) User Policy Retrieval and Evaluation Cycle: This action is similar to the Machine Policy Retrieval and Evaluation Cycle, but it initiates an ad-hoc user policy retrieval from the client outside its scheduled polling interval. Results are logged to PolicyAgent.log, PolicyEvaluator.log, and PolicyAgentProvider.log.
12) Windows Installer Source List Update Cycle: This action is also important when installing MSI applications. It causes the Product Source Update Manager to complete a full update cycle. When you install an application using Windows Installer, that application tries to return to the path it was installed from when it needs to install new components, repair itself, or update itself; this location is called the Windows Installer source location. The Windows Installer Source Location Manager can automatically search ConfigMgr 2012 distribution points for the source files, even if the application was not originally installed from a distribution point.
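Beyond the Control Panel applet, the same actions are commonly triggered from scripts through the SMS_Client WMI class, which is handy for kicking off a cycle remotely or across many machines. A minimal PowerShell sketch (the schedule GUIDs shown are the commonly cited IDs for the Machine Policy Retrieval and Hardware Inventory cycles; it assumes the ConfigMgr client is installed and an elevated prompt):

```shell
# Trigger the Machine Policy Retrieval & Evaluation Cycle
Invoke-WmiMethod -Namespace root\ccm -Class SMS_Client -Name TriggerSchedule `
    -ArgumentList "{00000000-0000-0000-0000-000000000021}"

# Trigger the Hardware Inventory Cycle
Invoke-WmiMethod -Namespace root\ccm -Class SMS_Client -Name TriggerSchedule `
    -ArgumentList "{00000000-0000-0000-0000-000000000001}"
```

The results land in the same client logs described above (for example, InventoryAgent.log for the inventory cycles).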
Categories: Blogs

After Five Years of Debate…Tester Certifications Still a Touchy Subject

uTest - Tue, 07/29/2014 - 21:24

Mention certifications to testers and you’ll run the gamut of responses, from those that have found valuable experience and advancement in their careers by being certified, to those that preach that a certification is no substitute for cold, hard experience.

We all know how testing luminary James Bach feels about them, going on to say that “The ISTQB and similar programs require your stupidity and your fear in order to survive,” and that “dopey, frightened, lazy people will continue to use them in hiring, just as they have for years.” Suffice to say that James won’t be sending the ISTQB a card this holiday season.

Rarely has a topic been as polarizing and heated in discussion: five years after the initial topic was launched in our uTest Community, hundreds of responses have been logged, along with sequel/knockoff threads (sequels that were actually still engaging and not superfluous like A Good Day to Die Hard).

Here are just a few of our favorite viewpoints from these discussions:

Are certifications bad? Not necessarily.
Are certifications that base their exams on multiple choice bad? Most likely.
Do certifications meet the needs of my organization? Perhaps.
Is there even a best practice in Software Testing? Not likely.
Do certifications tell you how good you are as a tester? Hell no.
(Glory L.)

IMO, the Foundation cert does teach someone the basics of how to test (in addition to what testing is, where it fits in the SDLC, etc). The Advanced level certainly expands on how to test. And yes, it is my belief that these certs advance a tester in their skills and ability – the study material alone should be on any tester’s reading list.
(Shane D.)

I have no use for certifications. I think they are bunk. My last full-time job had a number of “certified” people in various roles that had no idea what they were doing. Basically they were able to pass a test and get a piece of paper. But it didn’t make them any better at their jobs. I would put more time into learning rather than trying to pass a test. Learn because you want to, not because you have to. You’ll be better for it.
(John K.)

I just recently sat for my CTFL certification, not because I saw the value in it, but I wasn’t getting interviews without it. I have only been testing software for 5 years, and primarily for a single company in a niche market. Therefore though I have the respect and ‘backing’ of people at my company, and in my particular industry, but was having great difficulty breaking into a new field.
(Derek C.)

Are ‘certifications’ always a dirty word when it comes to testing, or is there a time and a place for certifications, especially at the foundation level where it’s important for testers to have a baseline for core concepts? We’d love to hear from testers in the Comments below.

 

Categories: Companies

You are invited to a party in the cloud, a public beta of StormRunner Load

HP LoadRunner and Performance Center Blog - Tue, 07/29/2014 - 17:58

We have a BIG Announcement!

This is your opportunity to get in on the ground floor of a revolution. And as you know, revolutions often start with the sound of thunder in the distance.

Prepare yourself…a storm is coming!

Categories: Companies

Today’s “Hyperconnected” Economy Creates an Agility Imperative for Retailers

We have been outnumbered for years now, with little hope of ever catching up. Not by competing nations or companies, but by billions of devices we can hold in our hands. The number of internet-connected devices first outnumbered the human population in 2008, and their numbers have been growing much faster than the human population […]
Categories: Companies

Appium Bootcamp – Chapter 3: Interrogating Your App

Sauce Labs - Tue, 07/29/2014 - 17:30

This is the third post in a series called Appium Bootcamp by noted Selenium expert Dave Haeffner. Click the links to read the first and second posts.

Dave recently immersed himself in the open source Appium project and collaborated with leading Appium contributor Matthew Edwards to bring us this material. Appium Bootcamp is for those who are brand new to mobile test automation with Appium. No familiarity with Selenium is required, although it may be useful. This is the third of eight posts; a new post will be released each week.

Writing automated scripts to drive an app in Appium is very similar to how it’s done in Selenium. We first need to choose a locator, and use it to find an element. We can then perform an action against that element.

In Appium, there are two ways to interrogate an app to find the best locators to work with: through the Appium console, or through an inspector (e.g., Appium Inspector, uiautomatorviewer, or selendroid inspector).

Let’s step through how to use each of them to decompose and understand your app.

Using the Appium Console

Assuming you’ve followed along with the last two posts, you should have everything set up and ready to run.

Go ahead and start up your Appium server (by clicking Launch in the Appium GUI) and start the Appium Ruby Console (by running arc in a terminal window in the same directory as your appium.txt file). After it loads, you will see an emulator window of your app that you can interact with, as well as an interactive prompt for issuing commands to Appium.

The interactive prompt is where we’ll want to focus. It offers a host of readily available commands to quickly give us insight into the elements that make up the user interface of the app. This will help us easily identify the correct locators to automate our test actions against.

The first command you’ll want to know about is page. It gives you access to every element in the app. If you run it by itself, it will output all of the elements in the app, which can be a bit unwieldy. Alternatively you can specify additional arguments along with it. This will filter the output down to just a subset of elements. From there, there is more information available that you can use to further refine your results.

Let’s step through some examples of that and more for both iOS and Android.

An iOS Example

To get a quick birds eye view of our iOS app structure, let’s get a list of the various element classes available. With the page_class command we can do just that.

[1] pry(main)> page_class
get /source
13x UIAStaticText
12x UIATableCell
4x UIAElement
2x UIAWindow
1x UIATableView
1x UIANavigationBar
1x UIAStatusBar
1x UIAApplication

UIAStaticText and all of the others are the specific class names for types of elements in iOS. You can see reference documentation for UIAStaticText here. If you want to see the others, go here.

With the page command we can specify a class name and see all of the elements for that type. When specifying the element class name, we can either specify it as a string or a symbol (e.g., 'UIAStaticText' or :UIAStaticText).

[2] pry(main)> page :UIAStaticText
get /context
post /execute
{
    :script => "UIATarget.localTarget().frontMostApp().windows()[0].getTree()"
}
UIAStaticText
   name, label, value: UICatalog
UIAStaticText
   name, label: Buttons, Various uses of UIButton
   id: ButtonsTitle   => Buttons
       ButtonsExplain => Various uses of UIButton
UIAStaticText
   name, label: Controls, Various uses of UIControl
   id: ControlsExplain => Various uses of UIControl
       ControlsTitle   => Controls
UIAStaticText
   name, label: TextFields, Uses of UITextField
   id: TextFieldExplain => Uses of UITextField
       TextFieldTitle   => TextFields
...

Note the get and post (just after we issue the command but before the element list). This is the network traffic happening behind the scenes to fetch this information from Appium. The response to post /execute has a script string; in it we can see which window this element lives in (e.g., windows()[0]).

This is important because iOS has the concept of windows, and some elements may not appear in the console output even if they’re visible to the user in the app. In that case, you can list the elements in other windows (e.g., page window: 1). Window 0 generally holds the elements of your app, whereas window 1 is where the system UI lives. This will come in handy when dealing with alerts.

Finding Elements

Within each element of the list, notice the properties — things like name, label, value, and id. This is the kind of information we will want to reference in order to interact with the app.

Let’s take the first element for example.

UIAStaticText
   name, label, value: UICatalog

In order to find this element and interact with it, we can search for it with a couple of different commands: find, text, or text_exact.

> find('UICatalog')
...
#
> text('UICatalog')
...
#
> text_exact('UICatalog')
...
#

We’ll know that we successfully found an element when we see a Selenium::WebDriver::Element object returned.

It’s worth noting that in the underlying gem that enables this REPL functionality, if we end our command with a semi-colon it will not show us the return object.

> find('UICatalog')
# displays returned value

> find('UICatalog');
# returned value not displayed

To verify that we have the element we expect, let’s access the name attribute for it.

> find('UICatalog').name
...
"UICatalog"

Finding Elements by ID

A better approach to find an element would be to reference its id, since it is less likely to change than the text of the element.

UIAStaticText
   name, label: Buttons, Various uses of UIButton
   id: ButtonsTitle   => Buttons
       ButtonsExplain => Various uses of UIButton

On this element, there are some IDs we can reference. To find it using these IDs we can use the id command. And to confirm that it’s the element we expect, we can ask it for its name attribute.

> id('ButtonsTitle').name
...
"Buttons, Various uses of UIButton"

For a more thorough walk through and explanation of these commands (and some additional ones) go here. For a full list of available commands go here.

An Android Example

To get a quick birds eye view of our Android app structure, let’s get a list of the various element classes available. With the page_class command we can do just that.

[1] pry(main)> page_class
get /source
12x android.widget.TextView
1x android.view.View
1x android.widget.ListView
1x android.widget.FrameLayout
1x hierarchy

android.widget.TextView and all of the others are the specific class names for types of elements in Android. You can see reference documentation for TextView here. If you want to see the others, simply do a Google search for the full class name.

With the page command we can specify a class name and see all of the elements for that type. When specifying the element class name, we can specify it as a string (e.g., 'android.widget.TextView').

[2] pry(main)> page 'android.widget.TextView'
get /source
post /appium/app/strings

android.widget.TextView (0)
  text: API Demos
  id: android:id/action_bar_title
  strings.xml: activity_sample_code

android.widget.TextView (1)
  text, desc: Accessibility
  id: android:id/text1

android.widget.TextView (2)
  text, desc: Animation
  id: android:id/text1
...

Note the get and post (just after we issue the command but before the element list). This is the network traffic happening behind the scenes to fetch this information from Appium. get /source downloads the source for the current view, and post /appium/app/strings gets the app’s strings. These app strings will come in handy soon, since they are used for some of the IDs on our app’s elements, which will help us locate them more easily.

Finding Elements

Within each element of the list, notice the properties — things like text and id. This is the kind of information we will want to reference in order to interact with the app.

Let’s take the first element for example.

android.widget.TextView (0)
  text: API Demos
  id: android:id/action_bar_title
  strings.xml: activity_sample_code

In order to find that element and interact with it, we can search for it by text or by id.

> text('API Demos')
...
#
> id('android:id/action_bar_title')
...
#

We’ll know that we successfully found an element when we see a Selenium::WebDriver::Element object returned.

It’s worth noting that in the underlying gem that enables this REPL functionality, if we end our command with a semi-colon it will not show us the return object.

> text('API Demos')
# displays returned value

> text('API Demos');
# returned value not displayed

To verify we’ve found the element we expect, let’s access the name attribute for it.

> text('API Demos').name
...
"API Demos"

Finding Elements by ID

A better approach to find an element would be to reference its ID, since it is less likely to change than the text of the element.

In Android, there are two types of IDs you can search with — a resource ID, and an entry in strings.xml. Resource IDs are best, but strings.xml entries are a good runner-up.

android.widget.TextView (10)
  text, desc: Text
  id: android:id/text1
  strings.xml: autocomplete_3_button_7

This element has one of each. Let’s search using each with the id command.

# resource ID
> id('android:id/text1')
...
#

# strings.xml
> id('autocomplete_3_button_7')
...
#

You can see a more thorough walk through of these commands here. For a full list of available commands go here.

Ending the session

In order to end the console session, input the x command. This will cleanly quit things for you. If a session is not ended properly, then Appium will think it’s still in progress and block all future sessions from working. If that happens, then you need to restart the Appium server by clicking Stop and then Launch in the Appium GUI.

x only works within the console. In our test scripts, we will use driver.quit to kill the session.
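In a script, the same hygiene is worth enforcing with an ensure block, so the session is always released even when a step raises. A minimal sketch of the pattern (with_session is a hypothetical helper; the driver would come from appium_lib):

```ruby
# Ensure the Appium session is always ended, even if a test step fails.
# `with_session` is a hypothetical helper; `driver` is any object that
# responds to `quit` (e.g., one built with appium_lib).
def with_session(driver)
  yield driver
ensure
  driver.quit  # script equivalent of `x` in the console
end
```

Wrapping every run this way avoids the stuck-session state described above, where Appium has to be restarted via Stop and Launch.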

Using An Inspector

With the Appium Ruby Console up and running, we also have access to the Appium Inspector. This is another great way to interrogate our app to find locators. Simply click the magnifying glass in the top-right hand corner of the Appium GUI (next to the Launch button) to open it. It will load in a new window.

Once it opens, you should see panes listing the elements in your app. Click on an item in the left-most pane to drill down into the elements within it. When you do, you should see the screenshot on the right-hand side of the window auto-update with a red highlight around the newly targeted element.

You can keep doing this until you find the specific element you want to target. The properties of the element will be outputted in the Details box on the bottom right-hand corner of the window.

It’s worth noting that while the inspector works well for iOS, there are some problem areas with it in Android at the moment. To that end, the Appium team encourages the use of uiautomatorviewer (which is an inspector tool provided by Google that provides similar functionality to the Appium inspector tool). For more info on how to set that up, read this.

For older Android devices and apps with webviews, you can use the selendroid inspector. For more information, go here.

There’s loads more functionality available in the inspector, but it’s outside the scope of this post. For more info I encourage you to play around with it and see what you can find out for yourself.

Outro

Now that we know how to locate elements in our app, we are ready to learn about automating some simple actions and putting them to use in our first test.

Read:  Chapter 1 | Chapter 2

About Dave Haeffner: Dave is a recent Appium convert and the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by thousands of testing professionals) as well as The Selenium Guidebook (a step-by-step guide on how to use Selenium Successfully). He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

Follow Dave on Twitter - @tourdedave

Categories: Companies

Continuous Integration for node.js with Jenkins

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Steven Christou, technical support engineer, CloudBees, about a presentation given by Baruch Sadogursky, JFrog, at JUC Boston.



Fully automating a continuous integration system for Node.js, from development to testing to deployment on production servers, can be a challenge. Most Node.js developers are familiar with npm, which I learned does not stand for “Node Package Manager” but is a recursive bacronym for “npm is not an acronym.” An npm package contains a program described by its package.json file. For a Java developer, an npm package is similar to a JAR, and the npm registry is similar to Maven Central. What would happen if the main npm registry, https://www.npmjs.org/, went down? At that moment Node.js developers would be stuck waiting for npmjs.org to return to normal status, or they could run their own private registry.



That sounds easier said than done, though. According to http://isaacs.iriscouch.com/registry, the current size of the registry is 450.378 gigabytes of binaries. Out of all of those 450 gigabytes of information, how many of the packages are going to be used by your developers?

Artifactory is a repository manager that bridges the gap between developers and the npm registry npmjs.org. It acts as a proxy between your coders and Jenkins instances and the outside world. When I (a developer) require a new package and declare a new dependency in my code, Artifactory pulls the necessary package from npmjs.org and makes it available. After the code has been committed with the new dependency, Jenkins is able to fetch the same package from Artifactory. In this scenario, if npmjs.org ever goes down, testing in Jenkins will never halt, because it can still obtain the necessary dependencies from the Artifactory server.
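On the client side, the switch is just a registry setting. A sketch of the .npmrc entry, assuming an Artifactory npm repository at a placeholder URL (the /api/npm/&lt;repo-key&gt;/ path is Artifactory's URL convention):

```ini
; .npmrc (per-project, or in the user's home directory)
registry=https://artifactory.example.com/artifactory/api/npm/npm-repo/
```

Equivalently, running npm config set registry with the same URL writes the same line, and Jenkins agents pick it up the same way.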



Building code through an Artifactory server also eliminates the need for users to check out and build their dependencies themselves, which would be time-consuming. It also avoids dependencies ending up in an unstable state when my local build environment differs from other users’ environments or from the Jenkins server. Another advantage is that Jenkins can record information about the packages that were used during the build.

Overall, using a repository manager like Artifactory as a proxy between your Jenkins instance and the npm registry npmjs.org helps maintain true continuous integration: your developers and Jenkins instances are not impacted by downtime when the npm registry is down or unavailable.

Steven Christou
Technical Support Engineer
CloudBees

Steven works on providing bug fixes to CloudBees customers for Jenkins, Jenkins plugins and Jenkins enterprise plugins. He has a great passion for software development and extensive experience with Hudson and Jenkins. Follow him on Twitter.
Categories: Companies

Three Ways for Testers to Take Their Careers to the Next Level

uTest - Tue, 07/29/2014 - 16:40

I had lunch recently with a few recruiters who asked me for referrals for performance testing roles. They had a number of open roles and could not find anyone suitable.

The discussion reminded me of possible ways for a tester to take his or her career to the next level. There are a few things that can be done.

Staying the Course and Improving Your Skills

The first is to continue to do what you know well and aim at becoming as good as possible. Most testers take this road and choose to learn mostly about manual, black box testing.

One image that comes to mind for the tester that pursues this road is the small fish in a big bowl. Since the bowl is big, there are many other fish around, and the competition for space and food is high. For testers that do just manual testing, there is a high level of competition for new jobs, the rates are not that high (due to the size of the market) and the demand is up and down.

Taking the Less Popular Path

Another option for career advancement is to work on testing types that are less popular, like performance testing or test automation.

Coming from manual testing, moving to performance testing or test automation usually implies a long period of training to become proficient. But after that, the tester can access a niche in the market that is less populated, with steady demand, roles that often go unfilled, and pay that is much higher than in manual testing.

These types of testers look like big fish in small bowls.

The Combination Approach

The last option for career development is a combination of the first two. These fish are very rare to find.

Alex Siminiuc is a uTest Community member and has also been testing software applications since 2005…and enjoys it a lot. He lives in Vancouver, BC, and blogs occasionally at test-able.blogspot.ca.

Categories: Companies

Automated Testing With Cucumber JVM, Selenium & Mocha

Testing TV - Tue, 07/29/2014 - 16:26
This presentation provides an overview of behavior-driven development and test automation for Salesforce.com, which aided in the production of a Visualforce/JavaScript application for an enterprise client. Using Cucumber JVM, Selenium, Jenkins, and Git, the team was able to catch regression errors during development. It offers an overview of the solution used and how it […]
Categories: Blogs

Breaking Down Requirements in TestTrack

The Seapine View - Tue, 07/29/2014 - 11:30

We’ve made breaking down requirements much easier with TestTrack 2014.1. Now you have the ability to create requirements and tasks from another requirement with a single click. Let’s take a simple example, where you’re creating system specs from product requirements and then breaking down each system spec into a set of tasks for the team. Using Item Mapping Rules, you can quickly create the logic necessary to enable single-click creation of a task from a system specification.

ReqBreakdown

To get started, go to Tools > Administration > Item Mapping Rules and click Add to create a new mapping rule. At the top of the dialog, select the item types to map; here we're creating a Task from a Technical Specification.

ItemMappingRules

There’s a default set of field mappings that will be created, but you’re free to change those however you like. Also, don’t skip over the options at the bottom. They might seem minor but they save a lot of frustration. So first, you’ll probably want to automatically add the task to the folder with the related specification. This keeps everything grouped together and makes managing the release schedule easier and more reliable. Second, be sure to select the appropriate link definition. Finally, you might want to turn off the prompt to save your users an extra click and avoid people making inappropriate changes to the linking.

When you're finished with configuration, save your new mapping and then open a Technical Specification. Here's what you'll see. Notice the new Create Task … button in the bottom-left corner. When users click this button, a task will be created using the data from the technical spec based on the mappings you set up.

TechSpec


Categories: Companies

Sharing Data from One Ranorex Module to Another

Ranorex - Tue, 07/29/2014 - 09:06

Sometimes it is necessary to access the value of a specific variable in more than one module of a test case.

An example of this would be any kind of converter: in one module, the value to be converted is read in; in another module, that value is converted. Needless to say, the converting module should use the value from the previous module.

This blog post will show how to do this step-by-step.

The Structure of the Ranorex Solution

As you can see in the screenshot above, the solution “Converter” created for demonstration purposes consists of four different modules. Two variables (varTemperatureCelsius and varTemperatureFahrenheit) should be shared between the modules “GetValue” and “ConvertValue”. The temperatures are read from a weather-website in the “GetValue”-module.

In the “ConvertValue”-module the Celsius-temperature is converted from Celsius to Fahrenheit using another website and compared to the Fahrenheit-temperature from the first website. The whole example is available for download here.
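The check performed in the "ConvertValue" module is just the standard Celsius-to-Fahrenheit conversion plus a comparison. As a rough sketch of that logic (Ranorex user code actions are normally written in C#, so this Ruby version, including its method names and tolerance value, is purely illustrative):

```ruby
# Illustrative sketch of the validation in the "ConvertValue" module:
# convert the Celsius reading and compare it to the independently read
# Fahrenheit value, allowing a small tolerance for rounding on the website.
def celsius_to_fahrenheit(celsius)
  celsius * 9.0 / 5.0 + 32.0
end

def temperatures_match?(celsius, fahrenheit, tolerance = 0.5)
  (celsius_to_fahrenheit(celsius) - fahrenheit).abs <= tolerance
end

puts celsius_to_fahrenheit(20)      # => 68.0
puts temperatures_match?(20, 68.0)  # => true
```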

Step 1: Creating a Variable in Module 1

At the beginning, an "Open Browser" action is recorded in the first module. Then a new "Get Value" action is added in the second module, "GetValue".

Note: Before initializing a new variable, a repository item representing the UI element that displays the temperature needs to be created. This can be done by using the “Track”-button in the Ranorex Repository or by using the Ranorex Spy.

The value of the current temperature is stored in a variable, which is created by clicking the drop-down menu below the "Variable" heading and choosing the option "As new variable…".

A context menu will be opened which should look like this:

Here the desired name of the variable and (optionally) a default value can be defined.

Note: The default value is used if modules are running separately and not from the test suite view. It will also be used if a variable is unbound and the test is started from the test suite.

Afterwards, choose the repository item representing the value on the website and the attribute the "Get Value" action should access. The attribute holding the temperature value is "InnerText".

This step needs to be done for both the Celsius and Fahrenheit values. The unit of the temperature can be changed in the upper right corner of the website.

At the end of the first step the “GetValue” module should look like this:

Note: The temperature value should not be used as an identification criterion! The test would only work while the temperature is the same as when the element was initially identified. This can easily be changed using the Path Editor in Ranorex Spy: uncheck the "innertext" attribute and check a different, more appropriate attribute, e.g. "class".

Step 2: Creating a Variable in Module 2

The step of creating a new variable needs to be repeated for every module in which the value from the first module should be used. Here it is the module "ConvertValue", where a value is going to be converted from Celsius to Fahrenheit.

Firstly, the value from the previous module is entered into the Celsius text field on the mentioned website. After that, the result is validated in a user code action.

After recording and creating the needed variable, the module should look similar to this:

Note: For identification purposes, it is easier if variables that belong together have the same name. However, this is only a recommendation, not a requirement.

Step 3: Connecting Variables in different Modules to one Parameter

In order to connect all needed variables to each other, a parameter needs to be created.

This is done by right-clicking the test case in the test suite view and clicking “Data Binding” in the context menu.

The test case properties pane will be opened:

Here it is necessary to add two rows in the "Parameters" section of the window, one for each shared variable.

By clicking the "Module Variable" drop-down menu, the variables associated with this parameter ("varTemperature (GetValue)" and "varTemperature (ConvertValue)") can be checked.

Finally, the test suite should look like this:


Testing the Solution

Now it is time to test the solution. This can be done by pressing “Run” in the test suite.

Note: If “Play” is clicked in one of the modules, the variables won’t be bound. In this case the default values of the variables are used.

The report file should look like this:



Conclusion

In this blog post you learned how to share variables from one module to another. One main concept is always the same for every new module added to the test suite: first a new variable is created in that module, and then it is connected to a parameter of the test case.

Note: If a variable needs to be shared across test cases, the procedure is almost the same. The only difference is that a global parameter, or a parameter in a parent test case, is used. A global parameter can be created in the test suite properties pane.

If something was unclear in this blog post, feel free to ask in the comment section or have a look at the following chapters in our user guide:


Categories: Companies

Updates Coming to Default Selenium and Chrome Versions on Sauce (August 2)

Sauce Labs - Tue, 07/29/2014 - 01:02

On Saturday, August 2nd, we will update our Selenium and Chrome default versions to meet current, stable implementations. This update affects users that run automated Selenium tests on Sauce.

Default versions of Selenium and Chrome are used only for tests that don’t have a specified browser version. Users who choose to assign Selenium and Chrome versions to their tests will remain unaffected.

Below you’ll find more details about the updates.

Selenium

Currently the default Selenium version is 2.30.0. Following the update on August 2, the new default Selenium version will be 2.42.2. We advise you to test the new version (2.42.2) in advance using the following desired capability:

"selenium-version": "2.42.2"



If you run into any issues with the new default, note that you can continue using the previous version (2.30.0) after Saturday by setting the selenium-version desired capability as follows:

"selenium-version": "2.30.0"


Chrome

Currently the default Chrome versions are Chrome 27 and Chromedriver 1. Following the update on August 2, the new default Chrome versions will be Chrome 35 and Chromedriver 2.10. We advise you to test the new versions (Chrome 35, Chromedriver 2.10) in advance using the following desired capabilities:

"browserName": "chrome"
"version": "35"



By requesting Chrome 35, Chromedriver 2.10 will be used automatically.


If you run into any issues with the new defaults, you can continue using the previous versions (Chrome 27, Chromedriver 1) after Saturday by setting the "version" desired capability as follows:

"browserName": "chrome"
"version": "27"


Troubleshooting Issues

If you see any issues after moving your tests to these new versions, we suggest checking for known issues on https://code.google.com/p/selenium/issues/list or contacting the Chromedriver and Selenium user groups.

Happy testing!

Categories: Companies

Ruby - Array: Adding and removing data with Pop, Push, Shift, Unshift

Yet another bloody blog - Mark Crowther - Mon, 07/28/2014 - 22:58

In an earlier post we looked at basic arrays and some tricks around delimited input (%w and %W, etc.). We saw a few ways arrays could be built out and data added to them when they are being created.

With that understanding, however, we need to see how to add and remove data once the array is already set up. Fortunately, Ruby provides ways to do that, albeit with slightly cryptic names.



YouTube Channel: [WATCH, RATE, SUBSCRIBE] http://www.youtube.com/user/Cyreath

Let's say our array looks like this:

testArray = ["a", "b", "c", "d"]

We'll want to do one of the following: 
  • REMOVE data from the START (Shift)
  • ADD data to the START (Unshift)
  • REMOVE data from the END (Pop)
  • ADD data to the END (Push)
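Before wiring these into a script, here is a quick sketch of each method's effect on a throwaway array:

```ruby
arr = ["a", "b", "c", "d"]

arr.shift          # removes "a" from the START -> ["b", "c", "d"]
arr.unshift("a")   # adds "a" back to the START -> ["a", "b", "c", "d"]
arr.pop            # removes "d" from the END   -> ["a", "b", "c"]
arr.push("d")      # adds "d" back to the END   -> ["a", "b", "c", "d"]

puts arr.inspect   # => ["a", "b", "c", "d"]
```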
Graphically we can represent this as per the below:



To experiment with this, let's set up an array, then ask the user what they want to do with it. The array needs to be accessible inside the if statement we're about to use, so raise its scope from local to instance by adding an @ prefix.


@testArray = ["one", "two", "three", "four"]
puts "\nWhat would you like to do? (shift, unshift or push, pop)"action = gets.chomp.downcase

Here the user can select one of the options: shift and unshift for working with the start of the array, or push and pop for the end of the array.

Next, let's build out an if statement framework:


if action == "pop"

  elsif action == "push"

  elsif action == "shift"

  elsif action == "unshift"

end


Under each branch, we then call the correct method for the selected action.


if action == "pop"
    @testArray.pop


Can you complete the rest?

To help the user see what's going on, let's give a message about what's going to happen, then print out the contents of the array so we can see the result.


if action == "pop"    puts "\nPoping a value OFF the END of the array (#{@testArray[-1]}).\n"    @testArray.pop    puts "The current array looks like this:\n"    puts @testArray

Here we use some interpolation to return the end value before we remove it.

As above, have a go at writing the rest of the script for each of the methods.

Bonus

Using these methods, we don't have to modify one element at a time. We can specify the number of elements to be modified by passing a count to the method, for example:

    @testArray.pop(2)
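The count argument works the same way for shift, while push and unshift simply accept multiple arguments. A quick sketch:

```ruby
arr = ["one", "two", "three", "four"]

arr.pop(2)                 # removes and returns ["three", "four"]
arr.push("three", "four")  # appends both values in one call
arr.shift(2)               # removes and returns ["one", "two"]
arr.unshift("one", "two")  # prepends both values in one call

puts arr.inspect           # => ["one", "two", "three", "four"]
```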

Mark.

---------------------------

@testArray = ["one", "two", "three", "four"]
puts @testArray
puts "\nWhat would you like to do? (pop, push or shift, unshift)"action = gets.chomp.downcase
if action == "pop"    puts "\nPoping a value OFF the END of the array (#{@testArray[-1]}).\n"    @testArray.pop    puts "The current array looks like this:\n"    puts @testArray
  elsif action == "push"    puts "\nPushing a value ONTO the END of the array.\n"    @testArray.push "five"    puts "The current array looks like this:\n"    puts @testArray
  elsif action == "shift"    puts "\nShifting a value OFF the START of the array. (#{@testArray[0]})\n"    @testArray.shift    puts "The current array looks like this:\n"    puts @testArray
  elsif action == "unshift"    puts "Unshifting a value ONTO the START of the array.\n"    @testArray.unshift "zero"    puts "The current array looks like this:\n"    puts @testArray
  else puts "That wasn't an option"
end

Read More
http://cyreath.blogspot.co.uk/2014/07/ruby-if-ternary-until-while.html
http://cyreath.blogspot.co.uk/2014/07/ruby-nested-if-statements.html
http://cyreath.blogspot.co.uk/2014/07/ruby-if-statements.html
http://cyreath.blogspot.co.uk/2014/07/ruby-case-statements.html
http://cyreath.blogspot.co.uk/2014/05/ruby-w-vs-w-secrets-revealed.html
http://cyreath.blogspot.co.uk/2014/05/ruby-variables-and-overview-of-w-or-w.html
http://cyreath.blogspot.co.uk/2014/02/ruby-constants.html
http://cyreath.blogspot.co.uk/2014/02/ruby-global-variables.html
http://cyreath.blogspot.co.uk/2014/02/ruby-local-variables.html
http://cyreath.blogspot.co.uk/2014/02/ruby-variables-categories-and-scope.html
http://cyreath.blogspot.co.uk/2014/01/ruby-variables-part-1.html
http://cyreath.blogspot.co.uk/2014/01/ruby-getting-and-using-user-input.html
http://cyreath.blogspot.co.uk/2014/01/download-and-install-ruby.html



YouTube Channel: [WATCH, RATE, SUBSCRIBE] http://www.youtube.com/user/Cyreath

Categories: Blogs

Latest Testing in the Pub Podcast Takes on Security

uTest - Mon, 07/28/2014 - 21:39

Stephen Janaway and Dan Ashby discuss many testing topics over a pint at the local watering hole in their Testing in the Pub podcasts, but security is one topic that hadn't come up yet. Until now.

The latest podcast features a chat with Dan Billing, a.k.a. the Test Doctor, and gets into what has been a very active subject as of late on the uTest Blog. As we've previously mentioned, data breaches, hacking, and other security leaks have been in the news for months now, including New York suffering 900 data breaches last year.

In other words, the subject of this latest podcast couldn’t have come at a more opportune time. Be sure to check out Episode 8 of Testing in the Pub right now.

Categories: Companies

[Re-Blog] Dev Chat: Vlad Filippov of Mozilla

Sauce Labs - Mon, 07/28/2014 - 21:38

Last week Sauce Labs' Chris Wren took a moment to chat with Vlad Filippov of Mozilla on his blog. The conversation covered all things open source and front-end web development, so we thought we'd share. Click the image below to read the full interview, or just click here.

Dev Chat: Vlad Filippov of Mozilla

 

Categories: Companies

Jenkins figure is available on Shapeways

Some time ago, we built Jenkins bobblehead figures. They were such a huge hit that everywhere I go, I get asked about them. The only problem was that they couldn't be individually ordered, and we didn't have enough cycles to sell and ship them individually for those who wanted them.

So I decided to have a 3D model of Mr. Jenkins built, which would allow anyone to print one on a 3D printer. I commissioned akiki, a 3D model designer, to turn our beloved butler into a fully digital, color-printable figure. He was even kind enough to discount the price, understanding that this is for an open-source project.

The result was, IMHO, excellent, and when I finally came back yesterday from a two-week trip, I found it delivered to my house:

With the red bow tie, a napkin, a blue suit, and his signature beard, it is instantly recognizable as Mr.Jenkins. He's mounted on top of a red base, and is quite stable. I think the Japanese sensibility of the designer is really showing! Note that the material has a rough surface and it is not very strong, but that's what you trade to get full color.

I've put it up on Shapeways so that you can order it yourself. The figure is about 2.5in/6cm tall. The price includes a bit of markup toward recovering the cost of the design. My goal is to sell 25 of them, which will roughly break even. Any excess, if it ever happens, will be donated back to the project.

Likewise, once I hit that goal, I will make the original data publicly available under CC-BY-SA, so that other people can modify the data or even print it on their own 3D printers.

Categories: Open Source

Tasktop Sync 3.6 Extends Test Synchronization

Software Testing Magazine - Mon, 07/28/2014 - 17:58
Tasktop has announced Tasktop Sync 3.6 with continued innovations for support of exceptionally complex integration scenarios. Building on the advanced Artifact Relationship Management capabilities in the previous release, Sync 3.6 supports the intricate requirements management needs of embedded software development, as well as the disparate models found in test management. Tasktop Sync 3.6 extends its ecosystem to support IBM Rational DOORS and Jama, two requirements management tools found in systems engineering organizations. By adding this support, organizations that define requirements in these tools can allow their software engineering teams to use ...
Categories: Communities

Age discrimination in the workplace

Yet another bloody blog - Mark Crowther - Mon, 07/28/2014 - 17:40
On LinkedIn there's a spirited discussion, with 3000+ replies, about hiring those who are 50+, and needless to say most of the responses are positive about it. I've of course not read much beyond the first 100 or so replies, but my reading of the posts is that the experience and stability such candidates bring are worth various quantities of gold. For me it's almost a moot point, but then perhaps I 'get' it that you can't just rule someone out of a role based on age. It's ludicrous to think that age alone qualifies or disqualifies someone for a role in our software development and testing profession. I like the fact the UK is pretty hot on age discrimination.

When I was in manufacturing, virtually all the candidates were 50+ women looking for part- or full-time roles on the production line for a bit of extra money. I guess my early working years just normalised the idea of hiring older members of staff. In truth, the 'youngsters' were seen as a liability and, sad to say, proved themselves to be, repeatedly. As the process engineer it was my role to set up the manufacturing line, optimise layout and £££ return per square foot, research tool selection, get us through ISO 9001, etc.

Looking back now, it was fun having these 50+ people (mostly ladies) work there. I mean it, I looked forward to getting into work and wandering over to the line, saying "good morning ladies" in a charming / slightly comedic way and having them take the mickey with their "ooh, young maaan" response. I'm pretty much always happy, and while I don't laugh that much as a rule, I laughed every day with them. Thinking about it now I have a sense of colour, sunlight and fresh air; memories are funny, eh? What really made it work was the mutual respect. As a 20-something I was in awe of the incredible experience they had, and I would constantly ask their advice about how they'd set the line up, write up the instructions, sequence the build and so on. In turn, they produced so little bad-quality product that the company let the 'quality controller' go, whoops! We ended up just throwing the few defective units into recycling.

Now that I'm in software testing and development, I don't see a massive amount of age discrimination. However, there are certainly roles that naturally fit a person of a certain age. Senior Programme Managers are rarely 22; agile Ruby on Rails developers are rarely 55, except where these individuals are exceptional. But then, hey... they're exceptional, so you'd expect them to break the typical patterns. Not that it isn't tricky to get exactly the role you may want for the salary or day rate you'd like. I'd really like to get hands-on for 12 to 18 months with Ruby, Cucumber, RSpec, Selenium, etc., but there's younger, more technically focused talent out there. I'm sure some of the younger talent would love to be at an investment bank running automation programmes through multiple off-shore teams, but they don't have the experience. That's not age discrimination, it's fitting the best person to the job. Sometimes age means you aren't in the right place experience-wise or career-stage-wise, but it's not the age itself that's the problem.

Talking of exceptions, the youngsters I come across today are a class apart from those I encountered 15 years ago. Our profession certainly breeds them like this to some degree, but I think da yoof have changed. The, let's say, 22-25 year olds I work with now are in the main pretty inspiring. I consider myself to be the most educated, experienced, erudite, etc. that I have ever been, but I'm not convinced I was as bright and well educated as they are at their age. On the reverse, I don't think the 40 and 50 year olds of today are as old as they were 15 or 20 years ago either. My parents seemed ancient to me when they were 35; at 42 I feel like I've just left university. A general youthfulness of perspective permeates society more than ever. Daily, I'll be asked for guidance and advice by a slightly panicked, more junior member of staff and I can barely feel a flicker of worry in my mind. Conversely, I turn clueless to the same people and ask for technical guidance and mentoring on how best to approach a problem, only to see them answer it as if it were the simplest thing in the world. There's definitely a greater equality age-wise. It's not perfect, but I don't see it as the disaster some would like to paint.

The 'age' problem exists when an employer assumes that if you're over 50 you can't do the role, with no more qualification than 'but you're over 50'. It's just like saying you can't do it because you're too young, you're black, an immigrant, a woman. Your blood should rightly boil in all cases. If you apply to a company like this, or work for one, do everyone a favour and walk away as fast as you can. Their time is done anyway; they'll fade away into the black hole they deserve to be in soon enough. Get away before you and others get sucked in.

The main question when applying for a role, or hiring for one, should of course be whether the person represents the best talent for the role. Career path to date, education, experience, attitude, interests and outlook are all factors that play hugely into whether you're the right candidate. The youthful and mature alike cry foul at not being able to get the roles they want. That the 50+ crowd shout that it's unfair is a great sign that times and expectations have changed. It's fantastic that people refuse to be limited by a false sense of limitation due to age, and that a LinkedIn post can get 3000+ responses agreeing age is irrelevant. Finally we're starting to see the first sparks of (some portions of) human society refusing to be held back. It can only be for the good.

However, despite my positive view on this point, there is a very real workplace issue that is not getting the notice it should.

A seriously bad issue. Bad as in as bad as racial and sexual discrimination. A problem so serious, so abhorrent, that no right minded individual should be staying quiet about it.

That problem, is workplace and pay inequality for women.

I'll talk about that in a future post.

Mark
Categories: Blogs

Neotys Adds Free Edition to NeoLoad Load Testing Tool

Software Testing Magazine - Mon, 07/28/2014 - 17:38
Neotys has announced NeoLoad 5.0, an enhanced version of its load and performance testing solution. The release of NeoLoad 5.0, now available with a Free Edition download, adds powerful new capabilities for mobile applications and web applications to increase testing speed, agility and resulting insights. The solution allows users to test more quickly, efficiently and frequently, enabling faster deployment of Internet, intranet and mobile applications, supporting all web architectures including the newest technologies such as HTML5, WebSocket, Ajax Push and SPDY. Key Enhancements Complete Mobile End User Experience – NeoLoad 5.0 integrates ...
Categories: Communities
