
Feed aggregator

The Rules Have Changed

Sonar - Wed, 09/10/2014 - 22:34

If you’ve already taken a look at SonarQube 4.4, the title of this post is no news to you. The new version introduces two major changes to the way SonarQube presents data: the new rules space and the changes to the source viewer.

If you’ve been keeping up version to version, you’ve noticed new styling creeping into the design. We formed a Web team this year to focus on transforming SonarQube’s interface into something as sexy as the underlying functionality, and the team is starting to hit its stride.

The new rules space is a reimagining of how to interact with rules. Previously, they were accessed only within the context of their inclusion (or not) in a single profile. Want to know if a given rule is present in multiple profiles? Previously, you had to hunker down because it could take a while.

Now rules have their own independent presentation, with multi-axis search.

All the search criteria from the old interface are still available, and several new ones have been added. The rule tags introduced in SonarQube 4.2 become searchable in 4.4, as do SQALE characteristics. And for most criteria you can search for multiple values. For example, it’s now easy to find rules in both “MyFirstProfile” and “MySecondProfile” simply by checking them both off in the profile dropdown.

Conversely, if you want to see all the profiles that include the rule “Unused method parameters should be removed”, simply pull it up in the search.

At the bottom of the rule listing, you’ll see all the profiles it’s included in, along with the severity and any parameters for the profile. If you’re an administrator, you’ll have controls here to change a rule in its current profiles and to add it to new profiles. The search results pane on the left also features bulk change operations for administrators, allowing them to toggle activation in a profile for all the rules in the search results.

It’s also easy now to find clone-able rules such as XPath and Architectural Constraint in Java; they’re called “templates” starting in 4.4, and they get their own search criterion.

I shouldn’t forget to mention the second tier below the search criteria. It represents the categories the search results span: languages, repositories, and tags, and the search results can be further filtered by clicking on the entries there. (A second click deselects and unfilters). For instance, here’s the default search filtered to show C rules that have been tagged for MISRA C++:

The point of this radical overhaul is to give you, the user, a better way to explore rules; to see what rules are available, which rules are used where, and which rules you might want to turn on or ask your administrator to turn on.

One interesting aspect of this is the new ability to explore rule activation across languages. For rules that are implemented directly within a plugin, as opposed to coming from 3rd party tools like FxCop or FindBugs, you’ll see that when the same rule is implemented in multiple languages, it usually has the same key (there are a few historical exceptions).

So, for example, now you can easily see whether the same standards are being enforced across all languages in your organization.

The new rules space is just one piece of our new attitude toward data. Next time I’ll talk about the complete rework of the component viewer. It’s a reimagining that’s just as radical as this one.

Categories: Open Source

Latest Testing in the Pub Podcast: Views on Testing Communities

uTest - Wed, 09/10/2014 - 21:50

The latest Testing in the Pub podcast takes advantage of summer — really, the waning days of summer at this point — by having a pint in the beer garden and discussing testing with community leader and organizer of London Tester Gatherings Tony Bruce.

uTester and podcast host Steve Janaway sits down with Tony to discuss, amongst other things, an especially pertinent topic for anyone reading this blog right now as a uTester — the need for testing communities in software development and testing. We agree, Tony!

Be sure to check out the full podcast right here.

Categories: Companies

Advanced Git with Jenkins

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Harpreet Singh, VP Product Management, CloudBees about a presentation given by Christopher Orr of iosphere GmbH at JUC Berlin.

Git has become the repository of choice for developers everywhere and Jenkins supports git very well. In the talk, Christopher shed light on advanced configuration options for the git plugin. Cloning extremely large repositories is an expensive proposition and he outlined a solution for speeding up builds with large repositories.

Advanced Git Options
There are three main axes for building projects: What, When, and How.
What to build:
The refspec option in Jenkins lets you choose what to build. By default, the plugin builds the master branch; this can be overridden with wildcards to build specific feature branches or tags. For example:

  • */feature/* will build a specific feature branch
  • */tags/beta/* will build a beta version of a specific tag
  • +refs/pull/*:refs/remotes/origin/pull/* will build pull requests from GitHub

The default strategy is to build the branches that match. So, for example, if the refspec is */release/*, the branches release/1.0 and release/2.0 will be built, while the branches feature/123 and bugfix/123 will be ignored. To build feature/123 and bugfix/123 instead, you can flip this around by choosing the Inverse strategy.
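Jenkins’s real branch matcher is more elaborate, but the effect of a wildcard specifier and the Inverse strategy can be sketched with shell-style patterns. This is a simplified model for illustration, not the plugin’s actual implementation; the function and branch names are made up:

```python
from fnmatch import fnmatch

def matching_branches(branches, pattern, inverse=False):
    # Branches matching the wildcard pattern are built; with the
    # Inverse strategy, everything *except* the matches is built.
    hits = [b for b in branches if fnmatch(b, pattern)]
    if inverse:
        return [b for b in branches if b not in hits]
    return hits

branches = ["origin/release/1.0", "origin/release/2.0",
            "origin/feature/123", "origin/bugfix/123"]

print(matching_branches(branches, "*/release/*"))
# → ['origin/release/1.0', 'origin/release/2.0']
print(matching_branches(branches, "*/release/*", inverse=True))
# → ['origin/feature/123', 'origin/bugfix/123']
```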

Choosing the build strategy
When to build:
Generally, polling should not be used; webhooks are the preferred option when configuring jobs. On the other hand, if you have a project that should be built nightly, but only if a commit made it to the repository during the day, that is easily set up with a nightly SCM polling schedule.
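The decision the nightly polling trigger makes boils down to “did anything land since the last build?”. A minimal sketch of that logic (names and dates are illustrative, not Jenkins APIs):

```python
from datetime import datetime

def should_build_tonight(commit_times, last_build_time):
    # Build only if at least one commit landed after the previous build.
    return any(t > last_build_time for t in commit_times)

last_build = datetime(2014, 9, 9, 23, 0)
print(should_build_tonight([datetime(2014, 9, 10, 14, 30)], last_build))  # → True
print(should_build_tonight([], last_build))                               # → False
```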

How to build:
A git clone operation is performed to clone the repository before building it. The clone can be sped up by using a shallow clone (no history is cloned). Builds can be sped up further by using a "reference repo" during the clone operation: the repository is cloned once to a local directory, and from there on this local repository is used for subsequent clone operations. The network is accessed only if the local repository is unavailable. Ideally, you line these up: a shallow clone for the first clone (fast clone) and the reference repo for faster builds subsequently.

Equivalent to git clone --reference option
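The reference-repo behavior can be modeled as a simple cache: a network fetch on a miss, local objects on a hit. A toy sketch of the idea (not real git plumbing; the URL and return strings are made up):

```python
def clone(url, cache):
    # First clone of this URL: nothing cached, so fetch over the network
    # (shallow, i.e. without history) and seed the local reference cache.
    if url not in cache:
        cache[url] = "shallow-objects(%s)" % url
        return "network (shallow clone)"
    # Subsequent clones borrow objects from the local reference repository.
    return "local reference repo"

cache = {}
print(clone("git://example.org/big.git", cache))  # → network (shallow clone)
print(clone("git://example.org/big.git", cache))  # → local reference repo
```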

Working with Large Repositories
The iosphere team uses the reference repository approach to speed up builds. They have augmented this approach by inserting a proxy server (git-webhook-proxy [1]) between the actual repository and Jenkins, so clones are made against the proxy server. The slave setup plugin copies the workspace over to the slaves (over NAS), and builds proceed from there. Since network access is restricted to the proxy server and each slave makes a local copy, this speeds up builds considerably.

git-webhook-proxy: to speed up workspace clones
The git-webhook-proxy option seems like a compelling solution, well worth investigating if your team is trying to speed up builds.

[1] git-webhook-proxy

-- Harpreet
Harpreet is vice president of product management at CloudBees. 
Follow Harpreet on Twitter

Categories: Companies

Sharecare Scales Mobile Automated Testing With Sauce Labs [VIDEO]

Sauce Labs - Wed, 09/10/2014 - 21:06

We took a whirlwind trip to New Haven, CT to sit down with Daniel Gempesaw, Software Testing Architect at Sharecare. Sharecare is a wellness platform founded in part by Dr. Oz and Oprah.

We learned that after Sharecare started automating their testing process while using Sauce, time spent for each deploy fell from 7 days to 1 day. With more free time, the team is able to focus on scaling their mobile testing with Appium and employing new best practices such as mobile CI.

Watch this video to learn how they scaled their mobile testing with Sauce Labs.

Be sure to check out the case study, too.

Happy testing!

Categories: Companies

Pulsed Agile for Fast IT

Assembla - Wed, 09/10/2014 - 18:47

iOS 8 was unboxed today for developers, and some of our customers have a batch of mobile apps that need to be updated.  Some of our customers maintain batches of Websites or microsites, and they update different sites at different times. These customers need to be agile and responsive, but they don't fit the traditional "agile" idea of a team that works on one project for a long series of releases.  They switch their attention to the most important project at any given time, and deliver one improvement (a milestone, in the Assembla system). Then, they often switch to a different task.  They work in pulses.  Each milestone or pulse is like a small waterfall project with budgeting and scheduling negotiations, planning, implementation, and delivery.

We can make a study of this pulsed agile process and figure out how to make it run more smoothly. This article is a place to start.  Let's figure it out.

Traditional agile processes are not a good guide for pulsing, because they assume that you have a team that works full time on one project for a long time.  In a Scrum agile process, your team has periodic sprint planning meetings, and they get good at estimating their one project.  It’s not fair to ask for the same level of bottom-up estimating when picking up a dormant project.  If you have more than one project running, you get pathological behavior, like one person going to multiple sprint and standup meetings.

At Assembla we recommend a continuous agile approach, like Scrumban or Kanban.  We skip the planning meeting and go straight to pulling tasks out of the milestone and finishing them.  This is efficient. It skips estimating, and many other things that take time away from getting tasks done.  However, this process does not help you figure out if you can deliver a pulsed milestone at the time you promised.

The continuous process will get the work done.  However, to ensure delivery of the pulse, it needs to be monitored through various stages.  Here are some recommendations that I would make.

* Negotiate the time, budget, and scope.  I think that it is a good practice to freeze the time and cost (allocating a fixed team size) and make the scope variable.  This is exactly the opposite of the normal budgeting process, which starts with the deliverable, and then negotiates a variable budget and has time overruns. But, wouldn't you rather get something good (but not exactly matching some frozen document) in the time and budget you expect?

* Identify required and optional parts of the deliverable.  In order to be sure we can deliver on time, we need to have some optional stuff to squeeze in, or not.

* Set up all of the components so that you can do incremental deployment.  I think it is a best practice to deploy everything, even if it has no content or code, in "hello world" form.  Then, you incrementally fill it in.  In the agile incremental process, deployment comes at the beginning and not at the end.

* Tactical agreement.  Make some choices and inform the team.  For example: Are we going to make a prototype first, make sure that it works mechanically, and then improve the design, or are we going to make a design first, and then implement it?

* 50% checkpoint.  Usually it takes about twice as long to make a fully released system as it takes to make a prototype.  So, halfway through the schedule, you can see what will be in the finished release.  A burnup chart can also improve your estimate.  Under this system, the deliverable is uncertain only during the first half of the project.  The goal is to reduce this window of uncertainty, not eliminate it.

* Stakeholder engagement.  As the delivery approaches, stakeholders should be providing daily feedback.
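The 50% checkpoint above leans on a burnup chart, and the estimate a burnup gives is just linear extrapolation of completed work. A minimal sketch of that arithmetic (numbers are illustrative):

```python
def projected_scope(completed_by_day, total_days):
    # Extrapolate the completion rate so far to the end of the schedule.
    done = sum(completed_by_day)
    rate = done / len(completed_by_day)
    return done + rate * (total_days - len(completed_by_day))

# Day 5 of a 10-day pulse with 12 tasks done: on pace for ~24 tasks total.
print(projected_scope([2, 3, 2, 2, 3], 10))  # → 24.0
```

Comparing that projection to the list of required deliverables tells you which optional items will make the cut.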

This is a lot of stuff to do.  For a small Website upgrade or mobile app upgrade, you will not need to do everything.  But, you should do enough to be comfortable with your customer and your team.

You can add your own recommendations.  Let's figure this out.

Categories: Companies

TestTrack 2014.2 Sneak Peek: Interactive Task Boards

The Seapine View - Wed, 09/10/2014 - 17:30

Looking for more project management capabilities within TestTrack? The interactive task board, shipping with TestTrack 2014.2 later this year, will bring cutting-edge project planning capabilities to TestTrack—whether you’re using Waterfall, Agile, or any other product development methodology.

  • Use cards, columns, and swimlanes to organize and visualize work
  • Drag and drop tasks to update and prioritize work
  • Collaborate with the entire team during stand-up or project review meetings
  • Configure multiple boards to match each team’s processes and workflow

Learn more and sign up to be notified via email when the Task Board Sneak Peek is available!



Categories: Companies

6 Things You Need to Know About the iPhone 6

uTest - Wed, 09/10/2014 - 16:07

This story was originally published on the Applause App Quality Blog by Dan Rowinski.

Bigger and bolder, Apple has finally embraced the large screen. Apple’s latest iPhones were announced on Tuesday, and they come in two variants: the iPhone 6 and the iPhone 6 Plus. Each is bigger and more powerful than any iPhone Apple has ever made.

In its announcement, Apple referred to the iPhone 6 and the iPhone 6 Plus as the greatest phones ever made. It is the kind of hyperbole that Apple has been prone to in its iPhone announcements throughout history, a legacy of the late Steve Jobs. But nearly everything about the iPhone 6 and iPhone 6 Plus is bigger and badder: a worthy successor to Apple’s smartphone franchise and likely to be the most sought-after gift this coming holiday shopping season.

What do you need to know about the new iPhone 6 and iPhone 6 Plus? Let’s break it down.

Screen Size And Resolution

Apple has finally broken out of its mold and listened to what people want. Consumers want bigger screens on smartphones. Thus, mobile app developers want bigger screens because that is what consumers want.

Well, Apple has delivered.

The iPhone 6 has a 4.7-inch, 1334-by-750 screen, which works out to 326 pixels per inch (ppi). Good news for developers: this is the exact same pixel density as the iPhone 4S, iPhone 5, iPhone 5C, iPhone 5S and iPad Mini with Retina Display.


The iPhone 6 Plus has a 5.5-inch screen with a 1920-by-1080 resolution at 401 ppi. The new pixels-per-inch counts are what developers will focus on, because this metric directly affects what their existing apps will look like on larger screens. To this end, Apple has created a desktop-class scaler in the Xcode integrated development environment to deal with all the new screen sizes and (limited) pixel variation among iOS devices. Apple also employs the Adaptive Layout feature introduced in iOS 7 (and advanced in iOS 8) to help developers make apps that fit any of its device sizes.

More good news for developers: Apple has stayed consistent with the iPhone’s aspect ratio in the new models, continuing the 16:9 ratio it introduced with the iPhone 5. Earlier versions of the iPhone had 4:3 or 3:2 aspect ratios.

Near Field Communications & Apple Pay

After years of speculation (and disappointment from mobile payment advocates), Apple has finally embraced Near Field Communications (NFC), a short-range communications standard that has long been a part of Android and Windows Phone devices.

Ostensibly, NFC will be used to push Apple into the mobile payments space as it made deals with Visa, MasterCard, American Express and Discover to handle financial transactions at physical store locations with the iPhone.


With NFC, Apple has introduced Apple Pay, a new way to make transactions with the iPhone 6, iPhone 6 Plus and the Apple Watch. Apple Pay is a secure way to make transactions using NFC, and it is integrated with your biometric fingerprint signature through Touch ID and Apple’s Passbook app. To entice adoption of Apple Pay, Apple has partnered with some of the largest retailers in the United States to accept the mobile payments, including McDonald’s, Macy’s, Whole Foods, Walgreens and more.

Health Sensors & Software

In the iPhone 5S, Apple announced the M7 “motion coprocessor” to keep track of motion data like running and walking in the iPhone. For the iPhone 6 series, Apple has taken it a … step … further.


Apple announced the M8 motion coprocessor, which not only measures steps but can also measure distance and elevation. The iPhone 6 and iPhone 6 Plus also have a built-in barometer that measures relative air pressure to determine the elevation you have traveled.

Camera With iSight Sensor

In iOS 8, Apple has introduced PhotoKit, a new set of application programming interfaces to handle photo and video assets with the iPhone and iPad. In conjunction with PhotoKit, Apple has improved the camera on the iPhone 6 for quicker, better performance.


The cameras in the iPhone 6 and iPhone 6 Plus do not blow the camera from the iPhone 5S out of the water, coming in at an Apple-standard 8 megapixels with a 1.5-micron sensor and f/2.2 aperture. Apple says that it made improvements with the new iSight sensor, which uses Focus Pixels for clearer, more precise shots.

Video on the iPhone improves, with up to 60 frames per second for 1080p video and 240fps for slow motion, a feature originally introduced in the iPhone 5S.

The A8 Chip Is Apple’s Best

Apple customizes its own version of ARM architecture chips for its iOS products. The A8 chip for the iPhone 6 and iPhone 6 Plus is 64-bit with new graphics technology to take advantage of some of the more ambitious aspects of iOS 8. The A8 chip has over two billion transistors and has a new signal processor to handle photo and video data.

Faster LTE And Voice

Apple was once thought to be behind the times in adopting LTE (also known as “4G”) for its iPhones. It is now ahead of the times in adopting some of its newest features.

Apple will be one of the first smartphone manufacturers to roll out VoLTE: Voice over LTE, which allows for Internet Protocol (IP)-based phone calls, such as those you would make over Google Hangouts or Skype. It is the cellular variation of VoIP (Voice over IP), which has been available on the Internet for years. LTE has not been able to support voice until now because it is an IP-based rather than a circuit-switched cellular network. VoLTE will work initially with T-Mobile and then through other carriers as it becomes available in the United States and Europe.

Apple is also embracing the notion of carrier aggregation, which is the first step to LTE-Advanced (or what is called “true 4G”). It improves peak data performance and increases bitrate throughput for data connections.

What makes you most excited about the iPhone 6? Let us know in the comments.

Categories: Companies

Renaming Compuware APM to Dynatrace – What It Means to You

Most of you reading this blog have probably seen our recent announcements: Compuware going private last week and, yesterday, the naming of the Compuware APM business unit Dynatrace. Quite a few of you reached out to me with questions about what it means and what has or will change. So I thought I would address the majority of […]

The post Renaming Compuware APM to Dynatrace – What It Means to You appeared first on Compuware APM Blog.

Categories: Companies

Coverity Scan: Behind the Scenes

The Kalistick Blog - Wed, 09/10/2014 - 15:25

Before I shed more light on the Coverity Scan service, I would first like to thank the thousands of developers that have inspired us and helped us share static analysis with the open source community.

The Technology of Scan
Coverity Scan’s frontend Ruby on Rails application provides management, sign up and other capabilities required for a user’s workflow. On the backend, Scan is based on a subset of the Coverity enterprise product where developers are able to view defects and triage them.

Tools and Infrastructure
There are now 10 servers: four dedicated to the Coverity backend product for triaging and viewing defects, four analyzing the more than 500 million lines of code currently running through the Scan service, and two servers for the workflow application. We will continue to add more servers as Coverity Scan grows and, at this rate, we expect our server count will likely double by next year!

We also leverage GitHub for SCM and task management and Travis CI for builds, and we use New Relic and HP SiteScope for monitoring.

The People
The Coverity Scan service is maintained by Dakshesh Vyas, alongside Doug Puchalski. Meanwhile, the entire Coverity R&D team continues to improve the algorithms based on feedback from the open source community.

We’re always thinking about what we can do to improve quality and security for open source, and which APIs we can share to improve the quality of open source code. Any advice you might have is always welcome!

The post Coverity Scan: Behind the Scenes appeared first on Software Testing Blog.

Categories: Companies

Data mining helps reveal history of invention

Kloctalk - Klocwork - Wed, 09/10/2014 - 15:00

Data mining has the potential to reveal tremendous amounts of previously hidden information and insight. In countless areas, organizations are leveraging data mining solutions to revolutionize their operations.

One particularly interesting and potentially valuable ongoing effort in this area can be found at Oxford University. There, Hyejin Youn and several colleagues are using data mining tools to examine hundreds of years of patent office records to better understand the nature of innovation and invention, the MIT Technology Review reported.

Old records, new insights
As the news source pointed out, many technologists see the nature of invention as one of ongoing combinations. An innovator does not come up with a new idea with no precedent, but rather finds new combinations of existing notions to create something that did not exist previously.

Through their data mining project, Youn and colleagues aim to explore the truth and limits of this hypothesis, the MIT Technology Review explained. The researchers are conducting data mining and analytics on information collected by the U.S. Patent Office, whose records date back to 1790. In the Patent Office's system, every new invention receives a code to identify which pre-existing technologies it relies upon. A device that depends on a single technology receives a single code, while a gadget that uses multiple pre-existing technologies receives a combination code. According to the researchers, this system makes it possible to explore the deeper nature of how inventions relate to one another and how innovation evolves over time.
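The analysis described above amounts to asking, for each invention's combination code, whether that exact combination has appeared before. A minimal sketch of the idea (the codes and data are made up; this is not the researchers' actual methodology):

```python
def classify_inventions(inventions):
    # Each invention is a tuple of technology codes. An invention whose
    # combination of codes has been seen before "reuses" an existing
    # combination; otherwise the combination is brand new.
    seen, reused, new = set(), 0, 0
    for codes in inventions:
        combo = frozenset(codes)
        if combo in seen:
            reused += 1
        else:
            new += 1
            seen.add(combo)
    return reused, new

inventions = [("A",), ("A", "B"), ("A", "B"), ("B", "C"), ("A", "B")]
print(classify_inventions(inventions))  # → (2, 3)
```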

Youn's analysis has revealed that approximately 40 percent of all new inventions registered with the U.S. Patent Office rely on previously existing combinations of technology. The other 60 percent represent new, previously unseen combinations of technologies, the news source reported.

As the MIT Technology Review noted, these findings have significant implications.

"The huge gap between the possible and the actual number of combinations indicates that only a small subset of combinations become inventions," Youn and colleagues explained in their study, the source reported.

Perhaps even more importantly, these findings may hold significance for biological evolution. For both new technologies and organisms, existing combinations inevitably play a major role in determining the future path of development.

"Studying patent, comparative and systemic records of inventions will open a way to make quantitative assessments for a counterpart of these features of biological evolution in technological evolution," the report explained.

Data mining impact
This study, along with countless others, demonstrates the simple fact that for all of the ways that data mining has already been implemented, there are still many new applications to be discovered, and these efforts can yield extremely valuable insight.

For data mining efforts to prove useful, though, organizations must consistently apply best practices to their data mining projects. Choosing effective commercially available mathematical and statistical functions is key to speeding up development and reducing risk. Leveraging such resources can have a major impact on the ultimate potential of any given data mining project.

Categories: Companies

New Guide: Analyzing the Impact of Requirement Changes

The Seapine View - Wed, 09/10/2014 - 12:00

When a requirement changes, dependent items may be impacted. Without a full understanding of a requirement’s dependencies, there is an increased risk of making uninformed decisions about implementing changes. An overlooked dependency can quickly cause a ripple effect of missed changes, ultimately resulting in schedule overruns and scope creep.

TestTrack’s impact analysis capabilities can take the guesswork out of understanding and approving requirement changes by helping you quickly understand the scope of changes in the context of the entire project.

Download this guide and learn:

  • Why to perform impact analysis
  • When to perform impact analysis
  • How to perform impact analysis with TestTrack

Download your free copy and learn how to make informed decisions with TestTrack’s impact analysis capabilities.


Categories: Companies

BugBuster v3.0 is about to come out!

BugBuster - Wed, 09/10/2014 - 11:58

We are preparing a new major release, v3.0, that will change the way you write automated tests for your web application. Among the new features, we will include a Chrome extension recorder, full support for environments, and demo projects to showcase BugBuster’s capabilities.

The new version is set to go live at the end of September. If you wish to get early access to the preview of BugBuster v3, please contact us at support at bugbuster dot com.

A scenario recorder that really works

Creating a scenario with BugBuster is now even simpler with the introduction of the BugBuster recorder. Using it is very straightforward: go to your application and start recording; everything you do during the recording session will be saved, so just use your web application as you normally would!

While recording, you can add various checks (or assertions) to your test. This way, the recorded scenario can, for instance, verify that appropriate text is displayed, that your website properly sends emails, and so on. Once you have finished recording, exporting the test scenario so that it can be replayed by BugBuster takes a single click.

Once exported to BugBuster, augmenting or customizing your scenario with functional validation is a piece of cake too! Simply open BugBuster and edit the exported script using our powerful JavaScript API. You can then replay and schedule your scenario just like any other scenarios in your test plan.


Environments

Environments allow you to execute your test scenarios in different contexts, such as development, staging and production. An environment abstracts how a scenario accesses an application by overriding default URLs and/or variables. Scenarios can be selectively assigned to different environments, and environments can be scheduled individually. Maintaining test scenarios for multiple versions of your app is much easier with this new feature.

Demo applications

We have included a set of demo applications with some example scenarios that will run on our KitchenSink showcase web application. Play around with them as much as you want to see some of the coolest features of BugBuster in action!

Stay tuned! Follow us on our Twitter account or start using BugBuster!

The post BugBuster v3.0 is about to come out! appeared first on BugBuster.

Categories: Companies

Chrome - Firefox WebRTC Interop Test - Pt 2

Google Testing Blog - Tue, 09/09/2014 - 22:09
by Patrik Höglund

This is the second in a series of articles about Chrome’s WebRTC Interop Test. See the first.

In the previous blog post we managed to write an automated test which got a WebRTC call between Firefox and Chrome to run. But how do we verify that the call actually worked?

Verifying the Call
Now we can launch the two browsers, but how do we figure out whether the call actually worked? If you try opening two tabs in the same room, you will notice the video feeds flip over using a CSS transform: your local video is relegated to a small frame, and a new big video feed with the remote video shows up. For the first version of the test, I just looked at the page in the Chrome debugger and looked for some reliable signal. As it turns out, the remote video element's opacity property goes from 0 to 1 when the call comes up and from 1 to 0 when it goes down. Since we can execute arbitrary JavaScript in the Chrome tab from the test, we can simply implement the check like this:

bool WaitForCallToComeUp(content::WebContents* tab_contents) {
  // AppRTC will set remoteVideo.style.opacity to 1 when the call comes up.
  std::string javascript =
      "window.domAutomationController.send(remoteVideo.style.opacity)";
  return test::PollingWaitUntil(javascript, "1", tab_contents);
}

Verifying Video is Playing
So getting a call up is good, but what if there is a bug where Firefox and Chrome cannot send correct video streams to each other? To check for that, we needed to step up our game a bit. We decided to use our existing video detector, which looks at a video element and determines whether the pixels are changing. This is a very basic check, but it’s better than nothing. To do this, we simply evaluate video_detector.js in the context of the Chrome tab, making the functions in the file available to us. The implementation then becomes:

bool DetectRemoteVideoPlaying(content::WebContents* tab_contents) {
  // Load the helper scripts into the tab. (The exact paths were elided in
  // the original post; the first file name below is assumed.)
  if (!EvalInJavascriptFile(tab_contents, GetSourceDir().Append(
          FILE_PATH_LITERAL("test_functions.js"))))
    return false;
  if (!EvalInJavascriptFile(tab_contents, GetSourceDir().Append(
          FILE_PATH_LITERAL("video_detector.js"))))
    return false;

  // The remote video tag is called remoteVideo in the AppRTC code.
  StartDetectingVideo(tab_contents, "remoteVideo");
  return WaitForVideoToPlay(tab_contents);
}

where StartDetectingVideo and WaitForVideoToPlay call the corresponding JavaScript methods in video_detector.js. If the video feed is frozen and unchanging, the test will time out and fail.
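The detector's core idea, pixels changing between frames, can be sketched in a few lines. This is a simplified model (the real detector is JavaScript sampling canvas pixel data; the frame representation here is made up):

```python
def video_is_playing(frames, min_changes=2):
    # Count how many consecutive frame pairs differ; a frozen feed
    # produces identical frames and therefore zero changes.
    changes = sum(1 for a, b in zip(frames, frames[1:]) if a != b)
    return changes >= min_changes

moving = [[0, 0, 0], [0, 1, 0], [1, 1, 0], [1, 1, 1]]
frozen = [[0, 1, 0]] * 4
print(video_is_playing(moving))  # → True
print(video_is_playing(frozen))  # → False
```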

What to Send in the Call
Now we can get a call up between the browsers and detect if video is playing. But what video should we send? For Chrome, we have a convenient --use-fake-device-for-media-stream flag that will make Chrome pretend there’s a webcam and present a generated video feed (which is a spinning green ball with a timestamp). This turned out to be useful since Firefox and Chrome cannot acquire the same camera at the same time, so if we didn’t use the fake device we would need two webcams plugged into the bots executing the tests!

Bots running in Chrome’s regular test infrastructure do not have either software or hardware webcams plugged into them, so this test must run on bots with webcams for Firefox to be able to acquire a camera. Fortunately, we have that in the WebRTC waterfalls in order to test that we can actually acquire hardware webcams on all platforms. We also added a check to just succeed the test when there’s no real webcam on the system since we don’t want it to fail when a dev runs it on a machine without a webcam:

if (!HasWebcamOnSystem())
  return;

It would of course be better if Firefox had a similar fake device, but to my knowledge it doesn’t.

Downloading all Code and Components
Now we have all we need to run the test and have it verify something useful. We just have the hard part left: how do we actually download all the resources we need to run this test? Recall that this is a three-way integration test between Chrome, Firefox and AppRTC, which requires the following:

  • The AppEngine SDK in order to bring up the local AppRTC instance, 
  • The AppRTC code itself, 
  • Chrome (already present in the checkout), and 
  • Firefox nightly.

While developing the test, I initially hand-downloaded and installed these components and hard-coded their paths. This is a very bad idea in the long run. Recall that the Chromium infrastructure comprises thousands and thousands of machines, and while this test will only run on perhaps 5 at a time due to its webcam requirements, we don't want manual maintenance work whenever we replace a machine. And for that matter, we definitely don't want to download a new Firefox by hand every night and put it in the right location on the bots! So how do we automate this?

Downloading the AppEngine SDK
First, let's start with the easy part. We don't really care if the AppEngine SDK is up to date, so a relatively stale version is fine. We could have the test download it from the authoritative source, but that's a bad idea for a couple of reasons. First, the SDK updates outside our control. Second, there could be anti-robot measures on the page. Third, the download will likely be unreliable and fail the test occasionally.

The way we solved this was to upload a copy of the SDK to a Google storage bucket under our control and download it with a depot_tools script. This is a lot more reliable than an external website, and it will not download the SDK if we already have the right version on the bot.
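The "don't re-download if the right version is present" behavior boils down to a content-hash check. A simplified Python sketch of the idea (an illustration of the approach, not the actual depot_tools script):

```python
import hashlib
import os
import urllib.request

def fetch_if_stale(url, dest, expected_sha1):
    """Download url to dest only if the local copy's SHA-1 doesn't match."""
    if os.path.exists(dest):
        with open(dest, 'rb') as f:
            digest = hashlib.sha1(f.read()).hexdigest()
        if digest == expected_sha1:
            return False  # Already have the right version; skip the download.
    urllib.request.urlretrieve(url, dest)
    return True
```

With a scheme like this, a bot only hits the network when the pinned hash changes.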

Downloading the AppRTC Code
This code is on GitHub. Experience has shown that git clone commands run against GitHub will fail every now and then, and fail the test. We could write some retry mechanism, but we have found it's better to simply mirror the git repository in Chromium's internal mirrors, which are closer to our bots and thereby more reliable from our perspective. The pull is done through a Chromium DEPS file (Chromium's dependency provisioning framework).
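A DEPS entry is ordinary Python consumed by gclient. A hypothetical entry pinning a mirrored AppRTC repository might look like this (the path, URL and revision below are made up for illustration):

```python
# DEPS fragment (illustrative). gclient reads this dict and checks each
# repository out at the given path, pinned to the given revision.
deps = {
    "src/chrome/test/data/webrtc/resources/apprtc":
        "https://chromium.googlesource.com/external/webrtc-samples.git"
        "@a1b2c3d4e5f6",
}
```

Pinning to a revision means every bot checks out exactly the same AppRTC code, so a test failure can't be caused by an upstream change sneaking in mid-run.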

Downloading Firefox
It turns out that Mozilla supplies handy libraries for this task. We're using mozdownload in this script to download the Firefox nightly build. Unfortunately this fails every now and then, so we would like to have some retry mechanism; alternatively, we could "mirror" the Firefox nightly build in some location we control.
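Such a retry mechanism doesn't need to be elaborate. A sketch in Python, where `with_retries` and the flaky download callable are our own illustrative names (mozdownload's actual API is not shown):

```python
import time

def with_retries(operation, max_attempts=3, delay_seconds=5):
    """Call operation(); on failure, retry up to max_attempts total tries."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as error:
            if attempt == max_attempts:
                raise  # Out of attempts: let the failure propagate.
            print("Attempt %d failed (%s); retrying in %ss"
                  % (attempt, error, delay_seconds))
            time.sleep(delay_seconds)
```

Wrapping the nightly download in a helper like this turns an occasional network hiccup into a short delay instead of a failed test run.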

Putting it Together

With that, we have everything we need to deploy the test. You can see the final code here.

The provisioning code above was put into a separate ".gclient solution" so that regular Chrome devs and bots are not burdened with downloading hundreds of megs of SDKs and code they will not use. When this test runs, you will first see a Chrome browser pop up, which will ensure the local AppRTC instance is up. Then a Firefox browser will pop up. They will acquire the fake device and the real camera, respectively, and after a short delay the AppRTC call will come up, proving that video interop is working.

This is a complicated and expensive test, but we believe it is worth it to keep the main interop case under automation this way, especially as the spec evolves and the browsers are in varying states of implementation.

Future Work

  • Also run on Windows/Mac. 
  • Also test Opera. 
  • Interop between Chrome/Firefox mobile and desktop browsers. 
  • Also ensure audio is playing. 
  • Measure bandwidth stats, video quality, etc.

Categories: Blogs

Get $300 Off Your STPCon Registration with uTest Discount Code

uTest - Tue, 09/09/2014 - 20:00

The Fall edition of Software Test Professionals Conference & Expo (STPCon) is coming up in November and we are so excited to offer uTesters a special discount to the show.

STPCon is the leading conference on software testing and covers test leadership, management and strategy. Attendees can hear industry experts like Mark Tomlinson, Alessandra Moreira, and Mike Lyles share their knowledge and experience. Featured sessions include “In the Cloud and On the Ground: Real-World Performance Testing Stories” and “Tips for Painless API Testing.”

As a special offer to our testing community, you can use our special discount code to receive $300 off your registration for the show, including early bird pricing! Book before early bird pricing ends September 19, and the price for the main conference drops to $995, the conference plus workshop to $1295 and the conference plus two-day certification class to $2095 with our code.

In addition to STPCon, we have other special uTester discounts to upcoming shows:

  • Receive a 5% discount for new registrations to the 2nd annual User Conference on Advanced Automated Testing (UCAAT) in Munich, Germany from September 16-18, 2014. The European conference, jointly organized by the “Methods for Testing and Specification” (TC MTS) ETSI Technical Committee, QualityMinds, and German Testing Day, will focus exclusively on use cases and best practices for software and embedded testing automation.
  • Receive a 20% discount for new registrations to the International Conference on Software Quality and Test Management (SQTM), which focuses on providing practical methods that consistently produce good results. The show runs from September 29-October 3 in San Diego, California.
  • Receive $200 off your registration to STARWEST, which runs from October 12-17 in Anaheim, California. STARWEST is the premier event for software testers and quality assurance professionals offering 100+ learning and networking opportunities.

Email us to receive the uTester-exclusive discount codes to these upcoming shows!

Categories: Companies

Portfolio Improvements for Managing Multiple Digital Projects

Assembla - Tue, 09/09/2014 - 18:31

We've been working hard to deliver good stuff to our Portfolio/Enterprise clients. These updates are most interesting for enterprise users that have multiple workspaces (or, as we commonly call them, "spaces") and projects going on under the same roof. If you're not that kind of user but are interested in becoming one, check out our Enterprise package.

Let's talk about what matters: the improvements! The recent improvements are one piece of a bigger roadmap we're working through right now. For now, these are the improvements and features we released to our portfolio members:

  1. Active spaces management
  2. Archived spaces management
  3. Templates management
  4. Spaces creation workflow
  5. Templates creation workflow
Active Spaces Page

From now on you're going to get only active spaces in this list:


As you can see above, you now have some valuable information for each space, along with some other features:

  • Ability to see the progress of the nearest upcoming milestone, along with its name and due date
  • Tags that you can manage (just click the "+" sign in the column) to classify your spaces by clients, teams, types of projects and so forth (we had a similar feature, Space Groups; we migrated it to the tags concept)
  • Ability to sort spaces by milestone due dates by clicking on the table header column (the same applies to space names and tags)
  • Ability to quickly archive or delete spaces via the 'Actions' control
  • New filter bar at the top of the list that lets you filter spaces by a specific tag (which is automatically used as a criterion to filter the reports in portfolio: tickets, time entries, stand-ups, users list and stream) and quickly filter spaces by typing in the space names


You can classify each space with tags in the same page.

Archived Spaces Page

For all the spaces that are not active for any reason (out of budget, temporarily suspended, etc.), you can archive them. You'll find those on this page:


You can reactivate them whenever you want through the 'Actions' control.

Templates Page

In this page, you're able to list and manage your custom templates. Custom templates are preconfigured spaces with the desired tools, permissions, and configurations. 


You can add, edit or remove tools inside each template space to get the right environment recipes for quickly creating new workspaces as needed for new projects. So when you have a new mobile app, digital marketing project, website, or the like, a template is ready at the click of a button.

Creating Spaces Inside Portfolio

Now that we've introduced the idea of working with templates, we've made the space creation process more streamlined. Once you create your own templates, you can use them to quickly create new spaces:



By default, we provide the most common templates preloaded. As you add templates, they will be available in this process.

Creating Templates

The workflow for creating templates is basically the same as for creating new spaces. You'll be asked to pick a base template that you can customize. Provide a template name and description so others on your team understand what the given template does:



Wrapping Up

If you want to give it a try, you must have a Portfolio/Enterprise subscription. Visit your portfolio's Spaces tab and explore.

That's all folks! We really hope you enjoy the recent updates. If you have any questions or suggestions, please drop us an e-mail or visit our help desk page. If you would like to try our Portfolio/Enterprise package, contact us with any questions.

Categories: Companies

Best Practices for Recording a Testcase to Work the First Time

Web Performance Center Reports - Tue, 09/09/2014 - 18:21
Sometimes, HTTP testcases don't work immediately after being recorded. Your application may require special configuration, or your workflow may need some special data entry in order to work in a repeatable fashion. However, sometimes the problem can be compounded by easily avoidable conditions. Recommendation 1: Close unnecessary applications while recording. During recording, Load Tester will capture HTTP and HTTPS network traffic from your workstation as you record. This allows Load Tester to observe your recorded browser window and any child windows spawned from it. If you have other browser windows open, e-mail clients, etc., these can all interfere with the … Continue reading »
Categories: Companies

Tackling cross-cutting concerns with a mediator pipeline

Jimmy Bogard - Tue, 09/09/2014 - 18:17

Originally posted on the Skills Matter website

In most of the projects I’ve worked on in the last several years, I’ve put in place a mediator to manage the delivery of messages to handlers. I’ve covered the motivation behind such a pattern in the past, where it works well and where it doesn’t.

One of the advantages behind the mediator pattern is that it allows the application code to define a pipeline of activities for requests, as opposed to embedding this pipeline in other frameworks such as Rails, node.js, ASP.NET Web API and so on. These frameworks have many other concerns going on besides the very simple “one model in, one model out” pattern that so greatly simplifies conceptualizing the system and realizing more powerful patterns.

As a review, a mediator encapsulates how a series of objects interact. Our mediator looks like:

public interface IMediator {
    TResponse Send<TResponse>(IRequest<TResponse> request);
    Task<TResponse> SendAsync<TResponse>(IAsyncRequest<TResponse> request);
    void Publish<TNotification>(TNotification notification) where TNotification : INotification;
    Task PublishAsync<TNotification>(TNotification notification) where TNotification : IAsyncNotification;
}

This is from a simple library (MediatR) I created (and borrowed heavily from others) that enables basic message passing. It facilitates loose coupling between how a series of objects interact. And like many OO patterns, it exists because of missing features in the language. In functional languages, passing messages to handlers is accomplished with features like pattern matching.
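The pattern itself is tiny and translates to any language. A minimal Python sketch of a mediator (names here are illustrative, not MediatR's API):

```python
class Mediator:
    """Routes each request object to the single handler registered
    for its type, keeping sender and handler decoupled."""

    def __init__(self):
        self._handlers = {}

    def register(self, request_type, handler):
        self._handlers[request_type] = handler

    def send(self, request):
        # One model in, one model out.
        return self._handlers[type(request)](request)


class Ping:
    pass

mediator = Mediator()
mediator.register(Ping, lambda request: "pong")
```

Calling `mediator.send(Ping())` returns "pong" without the caller ever knowing which handler ran, which is the whole point of the pattern.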

Our handler interface represents the ability to take an input, perform work, and return some output:

public interface IRequestHandler<in TRequest, out TResponse>
    where TRequest : IRequest<TResponse> {
    TResponse Handle(TRequest message);
}

With this simple pattern, we encapsulate the work being done to transform input to output in a single method. Any complexities around this work are encapsulated, and any refactorings are isolated to this one method. As systems become more complex, isolating side-effects becomes critical for maintaining overall speed of delivery and minimizing risk.

We still have the need for cross-cutting concerns, and we’d rather not pollute our handlers with this work.

These surrounding behaviors become implementations of the decorator pattern. Since we have a uniform interface of inputs and outputs, building decorators around cross-cutting concerns becomes trivial.

Pre- and post-request handlers

One common request I see is to do work on the requests coming in, or post-process the request on the way out. We can define some interfaces around this:

public interface IPreRequestHandler<in TRequest> {
    void Handle(TRequest request);
}

public interface IPostRequestHandler<in TRequest, in TResponse> {
    void Handle(TRequest request, TResponse response);
}

With this, we can modify inputs before they arrive to the main handler or modify responses on the way out.

In order to execute these handlers, we just need to define a decorator around our main handler:

public class MediatorPipeline<TRequest, TResponse>
    : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse> {

    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IPreRequestHandler<TRequest>[] _preRequestHandlers;
    private readonly IPostRequestHandler<TRequest, TResponse>[] _postRequestHandlers;

    public MediatorPipeline(
        IRequestHandler<TRequest, TResponse> inner,
        IPreRequestHandler<TRequest>[] preRequestHandlers,
        IPostRequestHandler<TRequest, TResponse>[] postRequestHandlers
        ) {
        _inner = inner;
        _preRequestHandlers = preRequestHandlers;
        _postRequestHandlers = postRequestHandlers;
    }

    public TResponse Handle(TRequest message) {

        foreach (var preRequestHandler in _preRequestHandlers) {
            preRequestHandler.Handle(message);
        }

        var result = _inner.Handle(message);

        foreach (var postRequestHandler in _postRequestHandlers) {
            postRequestHandler.Handle(message, result);
        }

        return result;
    }
}

And if we’re using a modern IoC container (StructureMap in this case), registering our decorator is as simple as:

cfg.For(typeof (IRequestHandler<,>))
   .DecorateAllWith(typeof (MediatorPipeline<,>));

When our mediator builds out the handler, it delegates to our container to do so. Our container builds the inner handler, then surrounds the handler with additional work. If this seems familiar, many modern web frameworks like koa include a similar construct using continuation passing to define a pipeline for requests. However, since our pipeline is defined in our application layer, we don’t have to deal with things like HTTP headers, content negotiation and so on.


Validation

Most validation frameworks I use validate against a type, whether it's validation with attributes or validation delegated to a handler. With Fluent Validation, we get a very simple interface representing validating an input:

public interface IValidator<in T> {
    ValidationResult Validate(T instance);
}

Fluent Validation defines base classes for validators for a variety of scenarios:

public class CreateCustomerValidator: AbstractValidator<CreateCustomer> {
  public CreateCustomerValidator() {
    RuleFor(customer => customer.Surname).NotEmpty();
    RuleFor(customer => customer.Forename).NotEmpty().WithMessage("Please specify a first name");
    RuleFor(customer => customer.Discount).NotEqual(0).When(customer => customer.HasDiscount);
    RuleFor(customer => customer.Address).Length(20, 250);
    RuleFor(customer => customer.Postcode).Must(BeAValidPostcode).WithMessage("Please specify a valid postcode");
  }

  private bool BeAValidPostcode(string postcode) {
    // custom postcode validating logic goes here
  }
}
We can then plug our validation into the pipeline so it occurs before the main work to be done:

public class ValidatorHandler<TRequest, TResponse>
    : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse> {

    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IValidator<TRequest>[] _validators;

    public ValidatorHandler(IRequestHandler<TRequest, TResponse> inner,
        IValidator<TRequest>[] validators) {
        _inner = inner;
        _validators = validators;
    }

    public TResponse Handle(TRequest request) {
        var context = new ValidationContext(request);

        var failures = _validators
            .Select(v => v.Validate(context))
            .SelectMany(result => result.Errors)
            .Where(f => f != null)
            .ToList();

        if (failures.Any())
            throw new ValidationException(failures);

        return _inner.Handle(request);
    }
}
In our validation handler, we perform validation using Fluent Validation by loading up all of the matching validators. Because we have generic variance in C#, we can rely on the container to inject all validators for all matching types (base classes and interfaces). Having validators around messages means we can remove validation from our entities and move it into contextual actions from a task-oriented UI.

Framework-less pipeline

We can now push a number of concerns into our application code instead of embedded as framework extensions. This includes things like:

  • Validation
  • Pre/post processing
  • Authorization
  • Logging
  • Auditing
  • Event dispatching
  • Notifications
  • Unit of work/transactions

Pretty much anything you'd use a Filter for in ASP.NET or Rails that's more concerned with application-level behavior than framework/transport-specific concerns would work as a decorator in our handlers.

Once we have this approach set up, we can define our application pipeline as a series of decorators around handlers:

var handlerType = cfg.For(typeof (IRequestHandler<,>));

handlerType.DecorateAllWith(typeof (LoggingHandler<,>));
handlerType.DecorateAllWith(typeof (AuthorizationHandler<,>));
handlerType.DecorateAllWith(typeof (ValidatorHandler<,>));
handlerType.DecorateAllWith(typeof (MediatorPipeline<,>));

Since this code is not dependent on frameworks or HTTP requests, it’s easy for us to build up a request, send it through the pipeline, and verify a response:

var handler = container.GetInstance<IHandler<CreateCustomer>>();

var request = new CreateCustomer {
    Name = "Bob"
};

var response = handler.Handle(request);


Or if we just want one handler, we can test that one implementation in isolation; it's really up to us.

By focusing on a uniform interface of one model in, one model out, we can define a series of patterns on top of that single interface for a variety of cross-cutting concerns. Our behaviors become less coupled on a framework and more focused on the real work being done.
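Because every handler shares the same one-model-in, one-model-out shape, stacking cross-cutting behaviors amounts to plain function composition. A hedged Python sketch of that idea (handler and validator names are invented for illustration):

```python
def with_validation(handler, validators):
    """Decorate handler: run every validator first, fail fast on errors."""
    def wrapped(request):
        failures = [msg for validate in validators
                    for msg in validate(request)]
        if failures:
            raise ValueError(failures)
        return handler(request)
    return wrapped

def with_logging(handler, log):
    """Decorate handler: record each request as it passes through."""
    def wrapped(request):
        log.append(request)
        return handler(request)
    return wrapped

# Compose: logging outermost, then validation, then the real handler.
log = []
pipeline = with_logging(
    with_validation(lambda req: req.upper(),
                    [lambda req: [] if req else ["empty request"]]),
    log)
```

Each decorator only knows about the uniform request/response shape, so any of them can be added, removed, or reordered without touching the inner handler.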

All of this would be a bit easier if the underlying language supported this behavior. Since many don't, we rely instead on translating these functional paradigms to OO patterns, with IoC containers providing our glue.


Categories: Blogs

Happy Testers Day: How Will You Celebrate?

uTest - Tue, 09/09/2014 - 17:04

A sharp-eyed tester in our community has reminded me that it's Testers Day. No, we didn't make that up.

Developers get a lot of the limelight, but it’s about time that testers get their day in the sun, and what better day than September 9 to celebrate that fact!

Wait, so what significance does September 9 have to testers, you say? Well, let’s say we just wouldn’t be using the term “bug” or “debugging” without this date or the influential woman associated with this date.

According to the Computer History Museum, on September 9, 1947, American computer scientist and United States Navy Rear Admiral Grace Murray Hopper recorded the first computer bug in history while working on the Harvard Mark II computer. The problem was traced to a moth stuck in a relay in the machine, which Hopper logged in the Mark II's log book with the explanation: “First actual case of bug being found.”

So there you have it, folks. A momentous event deserves celebration and commemoration. How will you celebrate Testers Day? With a cake? By finding a bug in Grace Hopper’s honor? Be sure to let us know in the Comments below. In the meantime, be sure to give your colleague a high-five and wish them a Happy Testers Day.

Categories: Companies

Load Testing Survey Results

Software Testing Magazine - Tue, 09/09/2014 - 16:34
A recent survey from the Methods & Tools software development magazine shows that the load testing activity is performed only in a minority of software development organizations. The poll asked the question: “Does your organization perform load / performance testing?” Number of respondents: 217 End of survey: July 2014 Despite the growing adoption of open source load testing tools like JMeter and the large availability of free load testing services on the web, this survey reveals that load testing is still considered an optional step in the software development lifecycle by two-thirds ...
Categories: Communities

UrbanCode with Bluemix for Hybrid Cloud Deployment

IBM UrbanCode - Release And Deploy - Tue, 09/09/2014 - 15:04

There is a great deal of interest in Bluemix, IBM's key addition to its cloud portfolio. IBM Bluemix is a PaaS that allows developers to quickly develop and deploy cloud-first applications composed from a suite of services available in the Bluemix service catalog. For example, Synchrony Systems worked with the BART transit system to go from idea to released application in fifteen days. Check out their video.

The IBM DevOps Services for Bluemix provide a fully hosted cloud environment for managing source code, automating builds, and automating deployments to Bluemix. While IBM DevOps Services provides incredible value to those teams that want and can use a fully integrated DevOps environment in the cloud, there are teams that will need more.

Enterprise teams often have complex, multi-platform systems that require a bit of synchronization to ensure deployments are correctly executed. Updating an application requires a coordinated release of service changes in the home data center as well as updates to the Bluemix application that uses them. Fortunately, we have an excellent set of deployment and release solutions in the IBM UrbanCode portfolio that can be used with Bluemix when organizational constraints or system complexity demand the value they provide.


With IBM UrbanCode Deploy, a single click can trigger an orchestrated deployment spanning back-end systems and your Bluemix application. You also get a unified view of which versions of the back-end code and front-end Bluemix applications are being tested against each other. With this automation, not only will release day be a smoother event needing less coordination across teams, but the back-end teams will be better able to keep pace with fast-moving Bluemix teams.

Video 1: Deploying to BlueMix with UrbanCode Deploy

In this video, Dan Berg, IBM Distinguished Engineer and CTO of DevOps Tools & Strategy, demonstrates how IBM UrbanCode Deploy can be used on premise to perform a rolling deployment of a Bluemix application. He utilizes the newly available Cloud Foundry plugin to automate component-level deployment processes. In addition, he shows how simple it is to create a fully automated rolling deployment process that leverages UrbanCode Deploy's capabilities to accurately keep track of the application versions that are deployed, effectively manage configuration values across multiple spaces, and ensure just the right amount of governance exists within the deployment process. By leveraging UrbanCode Deploy, you can achieve automated deployments for a Bluemix application while also updating related internal components at the same time.


Video 2: Mobile to Mainframe

This is a more complex scenario, in which a number of tools are used to achieve a more real-world result: a back-end service needs to be exposed to a front-end mobile app driven off Bluemix due to an integration bug. In 12 minutes, it looks at defect capture and tracking, exposing services, and deploying the front and back ends. The tools and their roles are:

  • Captures a problem in a mobile application using IBM Mobile Quality Assurance
  • Tracks the problem in IBM DevOps Services
  • Deploys changes to the front and back end using IBM UrbanCode Deploy
  • Simulates the back-end using IBM Rational Test Virtualization Server (aka GreenHat)
  • Exposes internal services publicly using IBM WebSphere CastIron

Post updated September 15th, adding the second video.

Categories: Companies
