
Feed aggregator

Chrome - Firefox WebRTC Interop Test - Pt 1

Google Testing Blog - Tue, 08/26/2014 - 23:09
by Patrik Höglund

WebRTC enables real-time peer-to-peer video and voice transfer in the browser, making it possible to build, among other things, a working video chat with a small amount of Python and JavaScript. As a web standard, it has several unusual properties which make it hard to test. A regular web standard generally accepts HTML text and yields a bitmap as output (what you see in the browser). For WebRTC, we have real-time RTP media streams on one side being sent to another WebRTC-enabled endpoint. These RTP packets have been jumping across NATs, through firewalls and perhaps through TURN servers to deliver hopefully stutter-free and low-latency media.

WebRTC is probably the only web standard in which we need to test direct communication between Chrome and other browsers. Remember, WebRTC builds on peer-to-peer technology, which means we talk directly between browsers rather than through a server. Chrome, Firefox and Opera have announced support for WebRTC so far. To test interoperability, we set out to build an automated test to ensure that Chrome and Firefox can get a call up. This article describes how we implemented such a test and the tradeoffs we made along the way.

Calling in WebRTC

Setting up a WebRTC call requires passing SDP blobs over a signaling connection. These blobs contain information on the capabilities of the endpoint, such as what media formats it supports and what preferences it has (for instance, perhaps the endpoint has VP8 decoding hardware, which means the endpoint will handle VP8 more efficiently than, say, H.264). By sending these blobs the endpoints can agree on what media format they will be sending between themselves and how to traverse the network between them. Once that is done, the browsers will talk directly to each other, and nothing gets sent over the signaling connection.

Figure 1. Signaling and media connections.
How these blobs are sent is up to the application. Usually the browsers connect to some server which mediates the connection between the browsers, for instance by using a contact list or a room number. The AppRTC reference application uses room numbers to pair up browsers and sends the SDP blobs from the browsers through the AppRTC server.
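To make the mediation idea concrete, here is a minimal, hypothetical sketch of such a signaling relay in Python (using Flask): each browser POSTs its SDP blob under a room id and polls for the peer's blob. This is not AppRTC's actual implementation (AppRTC runs on Google App Engine); the routes and names below are made up purely for illustration.

# Minimal, hypothetical signaling relay; not how AppRTC actually works.
from collections import defaultdict
from flask import Flask, jsonify, request

app = Flask(__name__)
rooms = defaultdict(dict)  # room id -> {client id: SDP blob}

@app.route('/sdp/<room_id>/<client_id>', methods=['POST'])
def post_sdp(room_id, client_id):
    # A browser drops off its SDP blob for the given room.
    rooms[room_id][client_id] = request.get_data(as_text=True)
    return jsonify(ok=True)

@app.route('/sdp/<room_id>/<client_id>', methods=['GET'])
def get_peer_sdp(room_id, client_id):
    # The browser polls here until the other party's blob shows up; once both
    # sides have exchanged blobs, media flows directly between the browsers.
    for other, sdp in rooms[room_id].items():
        if other != client_id:
            return jsonify(sdp=sdp)
    return jsonify(sdp=None)

if __name__ == '__main__':
    app.run(port=9999)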

Test Design

Instead of designing a new signaling solution from scratch, we chose to use the AppRTC application we already had. This has the additional benefit of testing the AppRTC code, which we are also maintaining. We could also have used the small peerconnection_server binary and some JavaScript, which would give us additional flexibility in what to test. We chose to go with AppRTC since it effectively implements the signaling for us, leading to much less test code.

We assumed we would be able to get hold of the latest nightly Firefox and be able to launch that with a given URL. For the Chrome side, we assumed we would be running in a browser test, i.e. on a complete Chrome with some test scaffolding around it. For the first sketch of the test, we imagined just connecting the browsers to the live apprtc.appspot.com with some random room number. If the call got established, we would be able to look at the remote video feed on the Chrome side and verify that video was playing (for instance using the video+canvas grab trick). Furthermore, we could verify that audio was playing, for instance by using WebRTC getStats to measure the audio track energy level.

Figure 2. Basic test design.
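As an aside, the video+canvas grab trick mentioned above boils down to drawing the remote <video> element onto a canvas and checking that the frame is not all black; the audio check works similarly by reading the audio track energy level from getStats. Here is a rough Python/Selenium sketch of the video part only. The element id, the brightness threshold, and the URL are assumptions; the real check runs as JavaScript driven from the C++ browser test.

# Rough sketch of the video+canvas grab check, driven through Selenium.
# The element id 'remote-video' and the threshold below are assumptions.
from selenium import webdriver

GRAB_FRAME_JS = """
var video = document.getElementById('remote-video');
if (!video || !video.videoWidth)
  return 0;  // no frames decoded yet
var canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
var context = canvas.getContext('2d');
context.drawImage(video, 0, 0);
var pixels = context.getImageData(0, 0, canvas.width, canvas.height).data;
var sum = 0, count = 0;
for (var i = 0; i < pixels.length; i += 4) {  // RGBA: skip the alpha channel
  sum += pixels[i] + pixels[i + 1] + pixels[i + 2];
  count += 3;
}
return sum / count;  // average brightness, 0 means an all-black frame
"""

def remote_video_is_playing(driver):
    # Anything clearly brighter than an all-black frame counts as video.
    return driver.execute_script(GRAB_FRAME_JS) > 5

driver = webdriver.Chrome()
driver.get('http://localhost:9999?r=some_room')
# ... wait for the call to be established ...
assert remote_video_is_playing(driver)
driver.quit()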
However, since we like tests to be hermetic, this basic design isn't a good one. I can see several problems. For example, the test would fail if the network between us and AppRTC is unreliable. Also, what if someone else has already occupied myroomid? If that were the case, the test would fail and we would be none the wiser. So to make this work, we would have to find some way to bring up the AppRTC instance on localhost to make our test hermetic.

Bringing up AppRTC on localhost

AppRTC is a Google App Engine application. As this hello world example demonstrates, one can test applications locally with
google_appengine/dev_appserver.py apprtc_code/

So why not just call this from our test? It turns out we need to solve some complicated problems first, like how to ensure the App Engine SDK and the AppRTC code are actually available on the executing machine, but we'll get to that later. Let's assume for now that stuff is just available. We can now write the browser test code to launch the local instance:
bool LaunchApprtcInstanceOnLocalhost() {
  // ... Figure out locations of SDK and apprtc code ...
  CommandLine command_line(CommandLine::NO_PROGRAM);
  EXPECT_TRUE(GetPythonCommand(&command_line));

  command_line.AppendArgPath(appengine_dev_appserver);
  command_line.AppendArgPath(apprtc_dir);
  command_line.AppendArg("--port=9999");
  command_line.AppendArg("--admin_port=9998");
  command_line.AppendArg("--skip_sdk_update_check");

  VLOG(1) << "Running " << command_line.GetCommandLineString();
  return base::LaunchProcess(command_line, base::LaunchOptions(),
                             &dev_appserver_);
}

That’s pretty straightforward [1].

Figuring out Whether the Local Server is Up

Then we ran into a very typical test problem. We have the code to get the server up, and launching the two browsers to connect to http://localhost:9999?r=some_room is easy. But how do we know when to connect? When I first ran the test, it would work sometimes and sometimes not, depending on whether the server had had time to come up.

It’s tempting in these situations to just add a sleep to give the server time to get up. Don’t do that. That will result in a test that is flaky and/or slow. In these situations we need to identify what we’re really waiting for. We could probably monitor the stdout of the dev_appserver.py and look for some message that says “Server is up!” or equivalent. However, we’re really waiting for the server to be able to serve web pages, and since we have two browsers that are really good at connecting to servers, why not use them? Consider this code.
bool LocalApprtcInstanceIsUp() {
  // Load the admin page and see if we manage to load it right.
  ui_test_utils::NavigateToURL(browser(), GURL("localhost:9998"));
  content::WebContents* tab_contents =
      browser()->tab_strip_model()->GetActiveWebContents();
  std::string javascript =
      "window.domAutomationController.send(document.title)";
  std::string result;
  if (!content::ExecuteScriptAndExtractString(tab_contents,
                                              javascript,
                                              &result))
    return false;

  return result == kTitlePageOfAppEngineAdminPage;
}

Here we ask Chrome to load the AppEngine admin page for the local server (we set the admin port to 9998 earlier, remember?) and ask it what its title is. If that title is "Instances", the admin page has been displayed, and the server must be up. If the server isn't up, Chrome will fail to load the page and the title will be something like "localhost:9998 is not available".

Then, we can just do this from the test:
while (!LocalApprtcInstanceIsUp())
  VLOG(1) << "Waiting for AppRTC to come up...";

If the server never comes up, for whatever reason, the test will just time out in that loop. If it comes up we can safely proceed with the rest of the test.
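For what it's worth, the same "poll for readiness instead of sleeping" pattern is just as easy outside a browser test. Here is a minimal Python sketch, assuming the same admin port as above; the timeout values are arbitrary.

# Poll the dev_appserver admin port until it serves a page, instead of
# sleeping for a fixed amount of time. Python 2, to match the era of the
# other scripts in this post.
import time
import urllib2

def local_apprtc_instance_is_up(timeout_seconds=60):
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        try:
            # Any successful response means the admin server is serving pages.
            urllib2.urlopen('http://localhost:9998/', timeout=1)
            return True
        except Exception:
            time.sleep(0.5)  # short pause between polls, not a blind sleep
    return False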

Launching the Browsers

A browser window launches itself as a part of every Chromium browser test. It's also easy for the test to control the command line switches the browser will run under.

We have less control over the Firefox browser since it is the “foreign” browser in this test, but we can still pass command-line options to it when we invoke the Firefox process. To make this easier, Mozilla provides a Python library called mozrunner. Using that we can set up a launcher python script we can invoke from the test:
from mozprofile import profile
from mozrunner import runner

WEBRTC_PREFERENCES = {
  'media.navigator.permission.disabled': True,
}

def main():
  # Set up flags, handle SIGTERM, etc
  # ...
  firefox_profile = profile.FirefoxProfile(preferences=WEBRTC_PREFERENCES)
  firefox_runner = runner.FirefoxRunner(
      profile=firefox_profile, binary=options.binary,
      cmdargs=[options.webpage])

  firefox_runner.start()

Notice that we need to pass special preferences to make Firefox accept the getUserMedia prompt. Otherwise, the test would get stuck on the prompt and we would be unable to set up a call. Alternatively, we could employ some kind of clickbot to click “Allow” on the prompt when it pops up, but that is way harder to set up.

Without going into too much detail, the code for launching the browsers becomes
GURL room_url =
    GURL(base::StringPrintf("http://localhost:9999?r=room_%d",
                            base::RandInt(0, 65536)));
content::WebContents* chrome_tab =
    OpenPageAndAcceptUserMedia(room_url);
ASSERT_TRUE(LaunchFirefoxWithUrl(room_url));

Where LaunchFirefoxWithUrl essentially runs this:
run_firefox_webrtc.py --binary /path/to/firefox --webpage http://localhost:9999?r=my_room

Now we can launch the two browsers. Next time we will look at how we actually verify that the call worked, and how we download all the resources needed by the test in a maintainable and automated manner. Stay tuned!

[1] The explicit ports are because the default ports collided on the bots we were running on, and the --skip_sdk_update_check was because the SDK stopped and asked us something if there was an update.

Categories: Blogs

Announcing Austin Code Camp 2014

Jimmy Bogard - Tue, 08/26/2014 - 22:35

It’s that time of year again to hold our annual Austin Code Camp, hosted by the Austin .NET User Group:

Austin 2014 Code Camp

We're at a new location this year, New Horizons Computer Learning Center Austin, as our previous venue can no longer host events. Big thanks to St. Edwards PEC for hosting us in the past!

Register for Austin Code Camp

We’ve got links on the site for schedule, registration, sponsorship, location, speaker submissions and more.

Hope to see you there!

And because I know I’m going to get emails…

Charging for Austin Code Camp? Get the pitchforks and torches!

In the past, Austin Code Camp has been a free event with no effective cap on registrations. We could do this because the PEC had a ridiculous amount of space and could accommodate hundreds of people. With free registration, we would see about a 50% drop-off between registrations and actual attendance. Not very fun to plan food with such uncertainty!

This year we have a good amount of space, but not infinite space. We can accommodate the typical number of people that come to our Code Camp (150-175), but for safety reasons we can’t put an unlimited cap on registrations as we’ve done in the past.

Because of this, we're charging a small fee to reserve a spot. It's not even enough to cover lunch or a t-shirt, but it is enough to ensure that we're being fair to those who truly want to come.

Don’t worry though, if you can’t afford the fee, send me an email, and we can work it out.


Categories: Blogs

The Appium Philosophy

Sauce Labs - Tue, 08/26/2014 - 17:30

Appium is a rising star in the mobile test automation landscape, and since a few of our developers here at Sauce Labs are regular committers to the project, it’s pretty close to our hearts. You’ll often find Sauce devs hanging out on the Appium Google group answering questions, musing about testing strategies, and helping folks tweak their configurations or hunt down bugs.

When it comes to mobile test automation, in many ways we’re still figuring out what the best approaches are, and with Appium, we’ve had the benefit of lessons learned from early automation solutions that didn’t quite work as well as we’d hoped. Some of these lessons are nicely summarized in Appium’s four-point philosophy:

  1. You shouldn’t have to recompile your app or modify it in any way in order to automate it.
  2. You shouldn’t be locked into a specific language or framework to write and run your tests.
  3. A mobile automation framework shouldn’t reinvent the wheel when it comes to automation APIs.
  4. A mobile automation framework should be open source, in spirit and practice as well as in name!

Let’s explore these four points, shall we?

You shouldn’t have to recompile your app or modify it in any way in order to automate it.

Some early attempts at mobile test automation were based around OCR and pixel-based interactions, which history tells us are never very reliable. After that we saw the introduction of in-app agents that can execute the underlying code that would otherwise be triggered by a user interacting with the app. These agents make calls to the same code triggered by user actions, like swipes and taps and pinches. It's not a bad solution, but the gotcha is that you have to compile these agents into your app while it's being tested, and you probably want to take them out after the testing is done. The result? You're not actually testing the code that you release.

The agent approach is a bit odd, too, since the real-world user actions are always from outside the app itself, and so if you’re simulating those actions inside the code rather than triggering them by their corresponding UI action, you’re that much further away from the real experience of an actual human user.

In order to solve this problem and provide for testing with these more real-world style user interactions, both Apple and Google have created user interface automation frameworks for their respective development environments. Google provides uiautomator, which is a Java API for simulating these UI actions on Android devices and emulators, and Apple provides UI Automation Instruments, a JavaScript programming interface for use with iOS devices and simulators.

Appium works by interfacing with these vendor-provided automation frameworks, translating your test code into the platform-specific interactions. We think this is a better approach than using an agent, since you don’t have to compile a test build that contains code which will not be in the production build. You’re best off testing the same code you will release.

The next two points of the Appium philosophy are covered together since they both derive from the decision to implement the WebDriver API and JSON Wire Protocol:

You shouldn’t be locked into a specific language or framework to write and run your tests.

A mobile automation framework shouldn’t reinvent the wheel when it comes to automation APIs.

WebDriver is already well-known as the engine behind Selenium 2 test automation for Web apps. In the Appium implementation, instead of driving a Web browser on a desktop operating system as Selenium does, Appium drives native apps and browsers on mobile operating systems.

Many testers are already very familiar with writing and running tests locally using Selenium WebDriver, where your test will open a browser, run through some interactions with the app, and verify that the test case passes. But the magic of WebDriver really happens when it’s used in a distributed fashion. Instead of making local calls directly to the browser, the WebDriver test becomes an HTTP client and makes requests to a WebDriver server, which in turn actually makes the necessary calls to the browser and app. The elegance of Appium is that it utilizes this framework to interact, not with a desktop browser, but with Android’s UI Automator or Apple’s UIAutomation Instruments, which then perform user actions on the native app or mobile browser running in either an emulator or a real device. And of course Sauce Labs’ infrastructure can manage all of this for you!

(Appium architecture for iOS testing on Sauce)

One of the great features of WebDriver is that you can write your test code in your language of choice, since there are WebDriver client libraries for pretty much every popular language out there. The code you write represents the actions users perform on your application under test, and WebDriver can be used with most test runners and frameworks, so chances are your Appium tests can simply be added to your team’s existing workflows.
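Putting those pieces together, a minimal Appium-style test in Python looks just like any other remote WebDriver test. The sketch below is illustrative only: the capability values, app path, server URL, and element name are placeholders, and a real setup would point at a local Appium server or at Sauce Labs.

# Minimal sketch: the test is an HTTP client speaking the WebDriver protocol
# to an Appium server, which drives the app through the vendor's UI framework.
# Capability values and the element locator are placeholders.
from selenium import webdriver

desired_caps = {
    'platformName': 'iOS',            # or 'Android'
    'deviceName': 'iPhone Simulator',
    'app': '/path/to/MyApp.app',      # the same binary you plan to ship
}

driver = webdriver.Remote(
    command_executor='http://localhost:4723/wd/hub',  # local Appium server
    desired_capabilities=desired_caps)
try:
    # The same WebDriver API as a desktop Selenium test, but the "browser"
    # is a native app running in a simulator or on a real device.
    driver.find_element_by_name('Log In').click()
    assert 'Welcome' in driver.page_source
finally:
    driver.quit()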

And so the real-world benefits of the second two points of the Appium philosophy unfold from there. Because Appium implements the WebDriver API, you don’t need to reinvent that wheel. Because WebDriver is so widely used, you can choose any language and test framework and thus incorporate Appium tests easily into your existing process.

A mobile automation framework should be open source, in spirit and practice as well as in name!

Finally, Appium is proudly Open Source and hosted on GitHub. With over 4800 commits, 2000 issues closed, 3400 pull requests, 1100 forks, 1300 stars and dozens of contributors, Appium is a hotbed of excitement and activity in the open source world. The results are spectacular!

And the fun has only just begun!  Look for Appium to continue to evolve rapidly, guided by its core philosophy, and helping you do what you do best, which is delivering great apps to your users!

- Michael Sage, Principal Technology Evangelist, Sauce Labs

Michael Sage is a Principal Technology Evangelist at Sauce Labs who helps software teams develop, deliver, and care for great apps. He’s spent over 15 years as a solutions architect and consultant with software companies including Mercury Interactive, Hewlett Packard, and New Relic. He lives in San Francisco, CA.

 

Categories: Companies

Configuration as Code: The Job DSL Plugin

This is one in a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Valentina Armenise, solutions architect, CloudBees. In this presentation, given at JUC Berlin, Daniel Spilker of CoreMedia AG, maintainer of the plugin, shows how to configure a Jenkins job without using the GUI.

At JUC 2014 in Berlin, Daniel Spilker of CoreMedia presented the Job DSL plugin and showed how the configuration-as-code approach can simplify the orchestration of complex workflow pipelines.

The goal of the plugin is to create new pipelines quickly and easily, using your preferred tools to "code" the configuration, as opposed to using different plugins and jobs to set up complex workflows through the GUI.

Indeed, the DSL plugin defines a new way to describe a Jenkins job configuration: a piece of Groovy code stored in a single file.

After installing the plugin, a new option will be available in the list of build steps, "Process Job DSL", which allows you to parse the DSL script.

The descriptive Groovy file can either be entered in Jenkins manually or stored in SCM and pulled into a specific job.

The jobs whose configuration is described in the DSL script are created on the fly, so the user only has to maintain the Groovy script.






Each DSL element used in the Groovy script matches a specific plugin functionality. The community is continuously releasing new DSL elements in order to cover as many plugins as possible.





Of course, given the 900+ plugins available today and the frequency of new plugin releases, it is practically impossible for the DSL plugin to cover every use case.

This is where the strength of the plugin comes in: although each Jenkins plugin needs to be covered by a DSL element, you can create your own custom DSL element using the configure method, which gives direct access to the underlying XML of the Jenkins config.xml. This means that you can use the DSL plugin to code any configuration, even if a predefined DSL element is not available.

The plugin also gives you the possibility to introduce custom DSL commands.

Given the flexibility of the DSL plugin, and how quickly the community delivers new DSL elements (a new feature every six weeks), this plugin seems to be a really interesting way to put Jenkins configuration into code.

Want to know more? Refer to:





Valentina Armenise
Solutions Architect, CloudBees

Follow Valentina on Twitter.


Categories: Companies

Apple Xcode Integration for UrbanCode Deploy

IBM UrbanCode - Release And Deploy - Tue, 08/26/2014 - 17:11

Automate your deployments to iOS devices and simulators alongside any changes to the back-end. With this new plugin for Apple Xcode, UrbanCode Deploy's support for mobile gets even broader, as it already covers Android and Worklight. Typically, this type of plugin is used in testing environments to drive rapid feedback on changes to mobile applications – especially when they are being developed against emerging changes on the back-end.

With the emphasis on testing, it's no surprise that the plugin has steps for running UI and unit tests. These complement existing integrations for running tests and even loading apps into device clouds (see Appurify and MobileLabs).

Without further ado, here’s the five minute overview and demo video for the new plugin. The demo part starts at 1:32.

Categories: Companies

QF-Test 4 Released

Software Testing Magazine - Tue, 08/26/2014 - 16:49
QF-Test 4.0 is a major step forward – not just with its support of new technologies like JavaFX, the Swing replacement, and the extension of web testing to Chrome, but with many great enhancements that make automated testing more robust and more enjoyable. The following major new features have been implemented for QF-Test version 4:
  • New GUI engine: JavaFX
  • Support for the Chrome browser on Windows
  • Improved support for Java WebStart and applets
  • Support for the AJAX framework jQuery UI
  • Uniform generic classes for components of all GUI engines
  • Multi-level sub-item concept with QPath, ...
Categories: Communities

5 Tips to Write Better Tests

Software Testing Magazine - Tue, 08/26/2014 - 16:36
Writing software tests is a good thing; writing better tests is even better. In this blog post, Marcos Brizeno shares five tips to improve your software testing practice. For each of the tips, he also provides external references if you want to explore the topic further. The 5 Tips to Write Better Tests shared by Marcos Brizeno are:
  • Treat Test Code as Production Code
  • Use Test Patterns to achieve great readability
  • Avoid Unreliable Tests
  • Test at The Appropriate Level
  • Use Test Doubles
Each of the topics is explained in detail and pointers ...
Categories: Communities

Quality is Customer Value: My Quest for the uTest MVT Award

uTest - Tue, 08/26/2014 - 16:18

One thing I respect about uTest is their continual pursuit of ways to increase customer value. It's an essential business objective to ensure the health and growth of our company. 'Value' should be the middle name of any good tester. "Lucas Value Dargis." Sounds pretty cool, huh?

I had just finished my 26th uTest test cycle in mid-2012. I had put an extra amount of focus and effort into this cycle because there was something special at stake. On some occasions, uTest offers an MVT award which is given to the Most Valuable Tester of the cycle. The selection process takes several things into account including the quality of the bugs found, clear documentation, participation, and of course, customer value.

The MVT award not only offers a nice monetary prize, but it’s also a way to establish yourself as a top tester within the uTest Community. I decided I was going to win that MVT award.

As usual, I started by defining my test strategy. I took the selection criteria and the project scope and instructions into account and came out with these five strategic objectives:

  • Focus on the customer-defined ‘focus’ area
  • Report only high-value bugs
  • Report more bugs than anyone else
  • Write detailed, easy-to-understand bug reports
  • Be active on the project’s chat

When the test cycle was over, I reflected on how well I’d done. I reported nine bugs — more than anyone else in the cycle. Of those, eight were bugs in the customer’s ‘focus’ area. The same eight were also rated as very or extremely valuable. All the bugs were documented beautifully and I was an active participant in the cycle’s chat.

There was no competition. No other tester was even close. I had that MVT award in the bag. I was thinking of all the baseball cards I could buy with the extra Cheddar I’d won. I even called my mom to tell her how awesome her son was! You can only imagine my surprise when the announcement was made that someone else had won the MVT award. Clearly there was some mistake…right? That’s not how you spell my name!

I emailed the project manager asking for an explanation for this miscarriage of justice. The tester who won had fewer bugs, none of them were from the ‘focus’ area and they weren’t documented particularly well. How could that possibly be worth the MVT award? The PM tactfully explained that while I had done well in the cycle, the tester who won had found the two most valuable bugs and the customer deemed them worthy of the MVT award.

I was reminded that my adopted definition of quality is "value to someone who matters" and suddenly it all fell into place. It didn't matter how valuable I thought my bugs and reports were. It didn't matter how much thought and effort I put into my strategy and work. At the end of the day, a tester's goal, his or her mission, should be to provide "someone who matters" with the most value possible. I'm not that "someone who matters." That "someone" is our customer.

It was a hard pill to swallow, but that lesson had a strong impact on me and it will be something I’ll carry with me moving forward. Congratulations to the MVT. I hope you enjoy all those baseball cards.

A Gold-rated tester and Enterprise Test Team Lead (TTL) at uTest, Lucas Dargis has been an invaluable fixture in the uTest Community for 2 1/2 years, mentoring hundreds of testers and championing them to become better testers. As a software consultant, Lucas has also led the testing efforts of mission-critical and flagship projects for several global companies. You can visit him at his personal blog and website.

Categories: Companies

Never a More Interesting Time

Sonatype Blog - Tue, 08/26/2014 - 15:32
“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

Data Driven Performance Problems are Not Always Related to Hibernate

Data-driven performance problems are not new. But most of the time they are related to too much data being queried from the database. O/R mappers like Hibernate have been a very good source of problem-pattern blog posts in the past. Last week I got to analyze a new type of data-driven performance problem. It was on […]

The post Data Driven Performance Problems are Not Always Related to Hibernate appeared first on Compuware APM Blog.

Categories: Companies

On auditing, standards, and ISO 29119

Markus Gaertner (shino.de) - Tue, 08/26/2014 - 13:07

Disclaimer:
Since I am publishing this on my personal blog, this is my personal view, the view of Markus Gärtner as an individual.

I think the first time I came across the ISO 29119 discussion was during the Agile Testing Days 2010, and probably also during Stuart Reid's keynote at EuroSTAR 2010. Thinking back to that particular keynote, I think he was visibly nervous during his whole talk, eventually delivering nothing worthy of a keynote. Yeah, I am still disappointed by that keynote four years later.

Recently, ISO 29119 started to be heavily debated in one of the communities I am involved in. Since I think that others have expressed their thoughts on the matter more eloquently and in more depth than I am going to do, make sure to look further than my blog for a complete picture of the whole discussion. I am going to share my current state of thoughts here.

Audits

In my past I have been part of a few audits. I think it was ISO 9000 or ISO 9001, I can’t tell, since people keep on confusing the two.

These audits usually had a story before the audit. Usually one or two weeks up-front I was approached by someone asking me whether I could show something during the audit that had something to do with our daily work. I was briefed in terms of what that auditor wanted to see. Usually we also prepared a presentation of some sorts.

Then came the auditing. Usually I sat together with the auditor and a developer in a meeting room, and we showed what we did. Then we answered some questions from the auditor. That was it.

Usually a week later we received some final evaluation. Mostly there were points like “this new development method needs to be described in the tool where you put your processes in.” and so on. It didn’t affect my work.

More interestingly, what we showed usually didn’t have anything to do with the work we did when the auditor left the room. Mostly, we ignored most of the process in the process tool that floated around. At least I wasn’t sure how to read that stuff anyways. And of course, on every project there was someone willing to convince you that diverting from whatever process was described was fruitful in this particular situation and context.

Most interestingly, based upon the auditing process, people made claims about what was in the process description and what the auditor might want to see. No one ever talked to the auditors up-front (the belief was that it probably wasn't allowed). Oh, and of course, if the thing you audit in order to improve it isn't the thing you're actually doing when you're not being audited, then you're auditing something bogus. Auditing didn't prevent us from running into this trap. Remember: if there is an incentive, the target will be hit. Yeah, that sounds like what we did. We hit the auditing target without changing anything real.

Skip forward a few years, and I see the same problems repeated within organizations that adopt CMMi, SPICE, you-name-it. Inherently, the fact that an organization has been standardized seems to lead to betrayal, mis-information, and ignorance when it comes to the processes that are described. To me, this seems to be a pattern among the companies that I have seen that adopted a particular standard for their work. (I might be biased.)

Standards

How come, you ask, that we adopt standards to start with? Well, there are a bunch of standards out there. For example, USB is standardized. So were PS/2, VGA, and serial and parallel ports. These standards solve the problem of two different vendors producing two pieces of hardware that need to work together. The standard defines their commonly used interface on a particular system.

This seems to work reasonably well for hardware. Hardware is, well, hard. You can make hard decisions about hardware. Software, on the other hand, is more soft. It reacts flexibly, can be configured in certain ways, and usually involves a more creative process to get started with. When it comes to interfaces between two different systems, you can document these, but usually a particular interface between software components delivers some sort of competitive advantage for a particular vendor. Still, when working on the .NET platform, you have to adhere to certain standards. The same goes for stuff like JBoss, and whatever programming language you may use. There are things that you can work around, and others which you can't.

Soft-skill-ware, i.e. humans, are even more flexible, and will react in sometimes unpredictable ways when challenged in difficult work situations. That said, people tend to diverge from anything formal to add their personal note, to achieve something, and to show their flexibility. With interfaces between humans, as in behavioral models, humans tend to trick the system, and make it look like they adhere to the behavior described, but don’t do so.

ISO 29119

ISO 29119 tries to bring together some of the knowledge that is floating around. Based upon my experiences, I doubt that high-quality work stems from a good process description. In my experience, humans can outperform any mediocre process that is around, and perform dramatically better.

That said, good process descriptions appear to be one indicator for a good process, but I doubt that our field is old enough for us to stop looking for better ways. There certainly are better ways. And we certainly haven’t understood enough about software delivery to come up with any behavioral interfaces for two companies working on the same product.

Indeed, I have seen companies suffer from outsourcing parts of a process, like testing, to another vendor, offshoring to other countries and/or timezones. Most of the clients I have been involved with were even suffering as much as to insource back the efforts they previously outsourced. The burden of the additional coordination was simply too high to warrant the results. (Yeah, there are exceptions where this was possible. But these appear to be exceptions as of now.)

In fact, I believe that we are currently exploring alternatives to the traditional split between programmers and testers. One of the reasons we started with that split was Cognitive Dissonance. In the belief that only a split between programmers and testers overcomes Cognitive Dissonance, we created a profession of our own a couple of decades ago. Right now, with the rise of cross-functional teams in agile software development, we are finding out that the split wasn't necessary to overcome Cognitive Dissonance. In short, you can keep an independent view if you can maintain a professional mind-set, while still helping your team to develop better products.

The question I am asking: will a standard like ISO 29119 keep us from exploring further such alternatives? Should we give up exploring other models of delivering working software to our customers? I don’t think so.

So, what should I do tomorrow?

Over the years, I have made a conscious effort to not put myself into places where standards dominated. Simply speaking, I put myself into a position where I don't need to care, and can still help deliver good software. Open source software is such an environment.

Of course, that won't help you in the long run if the industry gets flooded with standards. ISO 29119 claims it is based upon internationally-agreed viewpoints. Yet it claims that it tries to integrate Agile methods into the older standards that it is going to replace. I don't know which specialists they talked to in the German Agile community. It certainly wasn't me. So, I doubt much good will come out of this.

And yet, I don't see this as my battle. A while ago I realized that I probably put too much on my shoulders, so I try to decide which battles to pick. I certainly see the problems of ISO 29119, but it's not a thing that I want to put active effort into.

Currently I am working on putting myself in a position where I don't need to care at all about ISO 29119 anymore, whatever comes out of it. However, I think it's important that the people who want to fight ISO 29119 more actively than me are able to do so. That is why they have my support from afar.

— Markus Gärtner


Categories: Blogs

Creating TestTrack Issues from QA Wizard Pro Scripts

The Seapine View - Tue, 08/26/2014 - 13:00

QA Wizard Pro’s scripting language includes a set of statements you can use to automatically create TestTrack issues to report errors found during script playback. The statements you use depend on the information you want to add to the issue.

You can use the AddIssue statement (named AddDefect in QA Wizard Pro 2014.0 and earlier) to create a brief issue with information only in the Summary, Description, Steps to Reproduce, and Other Hardware and Software fields. This is a simple way to create a new issue, provide some basic information in it, and add it to TestTrack at the same time.

After the issue is added, you can manually edit it in TestTrack or from the Issues pane in QA Wizard Pro to provide additional information.

(Screenshot: AddIssue statement example)

You can also use advanced statements introduced in QA Wizard Pro 2014.1 to create an empty TestTrack issue object, set and work with specific field values, and then add the issue to the project. These statements (NewIssue, SetFieldValue, GetFieldValue, RemoveField, AddFileAttachment, and AddToTestTrack) allow you to set more issue field values, including custom fields, and add file attachments to create more thorough issues that require less time to edit or review later.

(Screenshot: advanced statements example)

Check out the QA Wizard Pro help for more information about using these statements and examples.


Categories: Companies

Visit Ranorex at STARWEST 2014

Ranorex - Tue, 08/26/2014 - 10:00
Ranorex will participate in STARWEST at the Disneyland Hotel in Anaheim, California, from October 12th to 17th, 2014.

STARWEST is the premier event for software testers and quality assurance professionals – covering all your testing needs with 100+ learning and networking opportunities:
  • Keynotes featuring recognized thought-leaders
  • In-depth half- and full-day tutorials
  • Conference sessions covering major testing issues and solutions
  • Complimentary bonus sessions
  • Pre-conference training classes
  • The Expo, bringing you the latest in testing solutions
  • Networking events including meeting the speakers, the Test Lab, and more!
Don't miss the session "Why Automation Fails – in Theory and Practice" presented by our own Jim Trentadue.

We look forward to seeing you at our booth!



Categories: Companies

Are the tools that you are using to test your application affecting the results of those tests?

HP LoadRunner and Performance Center Blog - Tue, 08/26/2014 - 06:35

Virtualization is all around us, and you may be considering using virtual servers as a load generator. There is support for this option in LoadRunner and Performance Center, but the question to ask is "Are the tools that you are using to test your application affecting the results of those tests?"

 

Keep reading to find out how noise and other factors could be impacting your tests.

Categories: Companies

Upcoming changes and improvements to Assembla Portfolio and Enterprise

Assembla - Mon, 08/25/2014 - 23:29
We are upgrading our enterprise and portfolio packages so that our customers can more easily launch, manage and maintain a large number of Fast IT projects. We are also adding new features which will help them track milestones -- the commitments they have made to clients and stakeholders -- and get them delivered on time and on budget.
 
If you use one of our portfolio or enterprise packages you will see a number of changes in the portfolio level menus and features.  All ticket users will see some upgrades to milestones.  Please feel free to contact us if you have questions.  For more control over these upgrades, please contact us in advance to become part of our customer advisory group.  Send your comments, questions, and call requests to jeff@assembla.com.
Why?
Our bigger customers often manage a list of what we now call "Fast IT" projects: websites, mobile apps, digital marketing, and SaaS related projects. They want a "system of record" where they can keep all of these assets together for future maintenance and improvement. We also found that they were using creative techniques to track the commitments they have made for delivering upgrades. Because Assembla was initially designed for a continuous agile process, we were not effectively helping them to deliver on the specific dates that they committed to. We intend to help them with new features for planning and tracking deliverables.
DETAILS OF THE PLANNED UPGRADES
Project Portfolio Management

Support a large number of teams, spaces and deliverables, without losing anything:

  • Dashboard overview of spaces, users, and upcoming deliverables
  • Better way to group spaces for reporting. Replace the old “groups” system with tags so you can easily tag a space by client, business, type, etc.
  • Streamlined process for creating spaces 
  • A simple way to add and maintain custom space configurations (template spaces)
  • Better workflow for moving spaces to and from archived status.
  • Workflow for reporting on status and needs from the project manager to the portfolio manager.
  • New summary report tab on spaces.
  • Improved API for external reporting
  • Rename “Projects” to “Spaces”: We found that when our customers use the word “project” they are often not referring to a single Workspace.  Usually they mean deliverables, which they show as specific milestones inside a space. Sometimes they are speaking of large projects that span multiple spaces and teams. In order to reduce confusion, we are removing the word “projects” from our spaces tab.
Definitions: A workspace or "space" keeps your assets together -- teams, tasks, roadmaps, documentation, and code -- so that you can improve these assets over time. A deliverable is a specific release or upgrade that you are trying to finish by a specified date.  Many of our customers use Assembla Milestones to track deliverables.
Deliverable Management

Deliver what you committed to, on time:

  • Top-level dashboard to show the status of upcoming milestones.
  • Upgrade the milestone views to make it easier to see what has been planned, what has been finished, whether there are any obstacles.  Add optional budget and due date information.  Add links to reports and cardwalls that will show the state of that milestone.
  • Add reporting about status and obstacles to each milestone.
  • Upgrade the milestone calendars that show upcoming milestones for a space or set of spaces.
  • Add new List and Timeline (Gantt) views of the milestone calendar.
  • Swim lanes to show the progress of epics and stories within each milestone.
  • Discussion and cardwall for planning deliverables. We have found that some of our customers have created special ‘proposal spaces’ where they can discuss, plan, and budget upcoming deliverables. They use the ticket discussion threads, and the cardwall for showing the planning and delivery process. We will add special views for portfolio-level Kanban boards and discussions.
Security
  • Hosted customers can use SAML login (released recently), which allows them to centralize their user list, passwords, and permissions into their own SAML server.
  • Private installations give larger customers complete control over their security environment.  We have updated the private install and simplified installation and upgrades.
FUTURE - Import and link with other apps

Importing and linking will make it easy to:

  1. Keep your assets together for future maintenance and improvement, even if they come from teams, suppliers, and clients working on multiple systems.
  2. Track deliverables in all your projects.
Our customers work with clients and suppliers that use a variety of tools. We want you to be able to bring information and assets from those tools into Assembla -- your system of record. You should be able to do this without disrupting the work of your clients and suppliers. Assembla's extensible tool architecture will help. We will be adding tabs and open source connectors that help you capture the assets that you’ve bought and paid for. For example, we are upgrading the Github tool to replicate code from developers working in Github, into your own account.

For more control over these upgrades, please contact us in advance to become part of our customer advisory group.  Send your comments, questions, and call requests to jeff@assembla.com.

Categories: Companies

On ISO 29119 Content

Thoughts from The Test Eye - Mon, 08/25/2014 - 18:58
Documentation, Ideas
Background

The first three parts of ISO 29119 were released in 2013. I was very skeptical, but also interested, so I grabbed an opportunity to teach the basics of the standard, which would cover the costs of buying it.

I read it properly, and although I am biased against the standard I made a benevolent start, and blogged about it a year ago: http://thetesteye.com/blog/2013/11/iso-29119-a-benevolent-start/

I have not used the standard for real; I think that would be irresponsible, and the reasons should be apparent from the following critique. But I have done exercises using the standard, had discussions about the content, and used most of what is included at one time or another.

Here are some scattered thoughts on the content.

 

World view

I don't believe the content of the standard matches software testing in reality. It suffers from the same main problem as the ISTQB syllabus: it seems to view testing as a manufacturing discipline, without any focus on the skills and judgment involved in figuring out what is important, observing carefully in diverse ways, and reporting results appropriately. It puts the focus on planning, monitoring and control, and not on what is being tested and how the provided information brings value. It gives the impression that testing follows a straight line, but the reality I have been in is much more complicated and messy.

Examples: The test strategy and test plan are so chopped up that it is difficult to do something good with them. Using the document templates will probably give the same tendency as following IEEE 829 documentation: you get a document with many sections that looks good to non-testers, but doesn't say anything about the most important things (what are you trying to test, and how?)

For such an important area as “test basis” – the information sources you use – they only include specifications and “undocumented understanding”, where they could have mentioned things like capabilities, failure modes, models, data, surroundings, white box, product history, rumors, actual software, technologies, competitors, purpose, business objectives, product image, business knowledge, legal aspects, creative ideas, internal collections, you, project background, information objectives, project risks, test artifacts, debt, conversations, context analysis, many deliverables, tools, quality characteristics, product fears, usage scenarios, field information, users, public collections, standards, references, searching.

 

Waste

The standard includes many documentation things and rules that are reasonable in some situations, but often will be just a waste of time. Good, useful documentation is good and useful, but following the standard will lead to documentation for its own sake.

Examples: If you realize you want to change your test strategy or plan, you need to go back in the process chain and redo all steps, including approvals (I hope most testers adjust often to reality, and only communicate major changes in conversation.)

Test Design Specifications and Test Cases are not enough; they have also added a Test Procedure step, where you write down in advance the order in which you will run the test cases. I wonder which organizations really want to read and approve all of these… (They do allow exploratory testing, but beware that the charter should be documented and approved first.)

 

Good testing?

A purpose of the standard is that testing should become better. I can't really say whether this is the case or not, but with all the paperwork there is a lot of opportunity cost: time that could have been spent on testing. On the other hand, this might be somewhat accounted for by approvals from stakeholders.

At the same time, I could imagine a more flexible standard that would have much better chances of encouraging better testing. A standard that could ask questions like “Have you really not changed your test strategy as the project evolved?” A standard that would encourage the skills and judgment involved in testing.

The biggest risk with the standard is that it will lead to less testing, because you don’t want to go through all steps required.

 

Agile

It is apparent that they really tried to bend Agile into the standard. The sequential nature of the standard makes this very unrealistic in practice.

But they do allow bug reports not to be documented, which probably is covered by allowing partial compliance with ISO 29119 (this is unclear, though; together with students I could not be certain what actually is needed in order to follow the standard with regard to incident reporting.)

The whole aura of the standard doesn't fit the agile mindset.

 

Finale

There is momentum right now against the standard, including a petition to stop it, http://www.ipetitions.com/petition/stop29119, which I have signed.

I think you should make up your own mind and consider signing it; it might help if the standard starts being used.

 

References

Stuart Reid, ISO/IEC/IEEE 29119 The New International Software Testing Standards, http://www.bcs.org/upload/pdf/sreid-120913.pdf

Rikard Edgren, ISO 29119 – a benevolent start, http://thetesteye.com/blog/2013/11/iso-29119-a-benevolent-start/

ISO 29119 web site, http://www.softwaretestingstandard.org/

Categories: Blogs

Snagit for Windows Features Every Tester Needs to Know

uTest - Mon, 08/25/2014 - 17:56

I used TechSmith’s Snagit before I started working here. I was creating simple screen captures with annotations for my test documentation and reporting defects. The more I used Snagit, the more it became a part of my daily workflow. I discovered that many testers are doing just what I did — using Snagit for those simple screen capture tasks. But it’s far more powerful than that. And the robust features in Snagit are often overlooked because testers find lots of value in the capture experience alone.

To better understand the features that testers love most about Snagit, I turned to our testers here at TechSmith. Who better to give advice on Snagit features than the testers that help make it! Here are the top features of Snagit our testers use to make their work shine.

Video Capture

Video in Snagit? Yep, it’s in there, but you might be wondering why you would want to use it. It can be difficult to describe the complex behaviors of software solely through text. Capturing video of a defect or anomaly in action is a far more powerful demonstration. With video, you can describe the behavior prior to and following an anomaly. Essentially, you’re narrating the defect. And video is extremely helpful when working with remote testers or developers.

To capture a video, simply activate a capture and select the video button:


Snagit will record full screen or a partial selection of your screen. When you’ve finished capturing, you can trim the video in the Snagit editor and share it using your favorite output. Speaking of sharing…

Outputs

You can save captures as images in a variety of formats, but did you know about the many outputs for sharing your content from the Snagit Editor? Get your images and videos where they need to go using Output Accessories. From the Share menu, you can output captures to many places including Email, FTP, the Clipboard, MS Office programs, our very own Camtasia and Screencast.com, YouTube, and Google Drive. The complete list of available outputs can be found from the Snagit Accessories Manager on the Share menu:


Additional places to share your captures include Twitter, Facebook, Evernote, Skype, and Flickr.

Profiles

Profiles allow users to set up a workflow for their captures, making capturing more efficient by pre-configuring a capture type and sharing locations. Profiles are often used by testers for repetitive testing processes, such as creating test documentation, recording test execution artifacts, and capturing defects. An example of using a profile would be sending an image capture to the Snagit editor for a quick annotation and then directly to Microsoft Word by finishing the profile.


Or you can even bypass the editor altogether if you want your images to go to your selected output without annotations. Learn more about profiles.

Mobile Capture with TechSmith Fuse

Are you testing a mobile application and need to get images of those bugs over to a developer ASAP? Rather than messing with email, just Fuse it! TechSmith Fuse is a free mobile application that lets you capture images or video on your mobile device (iOS, Android, or Windows), upload them directly to the Snagit Editor through your wireless network, and then enhance your content using Snagit’s many editing tools.


Sharing Your Content

Screencast.com is both a repository for your image and video content as well as a place to conveniently share it with others. Your image and video content can be sent from Snagit and shared privately or publicly. Best of all, you can start storing and sharing your content with a free account that comes with 2GB of storage space.


There you have it — some key features you need to know to get the most out of Snagit. Happy capturing!

Jess Lancaster is the Software Test Manager at TechSmith, the makers of Snagit, Camtasia, and other visuals communication software applications.

Like Snagit? Be sure to leave a review and also check out all of the tools available to testers, along with their user reviews, over at the Tool Reviews section of uTest.

Categories: Companies

My Tests are a Mess

Testing TV - Mon, 08/25/2014 - 17:25
Is your test suite comprehensible to someone new to the project? Can you find where you tested that last feature? Do you have to wade through dozens of files to deal with updated code? Organizing tests is hard. It is easy to make things overly elaborate and complicated. Learn an approach to grouping the tests […]
Categories: Blogs

Professional Tester’s Manifesto

Software Testing Magazine - Mon, 08/25/2014 - 16:38
Certification is a process that has gradually spread amongst all areas of software development. Software testing certifications are mainly managed by the International Software Testing Qualifications Board (ISTQB) and its local affiliates. The Professional Tester's Manifesto is a strong statement about the certification process in software testing. There are many issues with the certification process. Companies use certification as a criterion for employee selection without considering the actual capabilities of the people applying for jobs. The value of certification is dubious and it is mainly a money-making market. As an example, ...
Categories: Communities

Open source criticism often misguided

Kloctalk - Klocwork - Mon, 08/25/2014 - 15:31

Recent months have seen a number of leading commentators offer serious criticism of the open source software movement as a whole, suggesting that these efforts are either doomed to fail or not worth the investment. Yet according to InfoWorld contributor Matt Asay, such criticism is often misguided, demonstrating a lack of understanding as to what lies at the heart of open source software efforts.

Criticizing the critics
Asay noted that two of the most recent attacks against open source efforts came from The New York Times' Quentin Hardy and fellow InfoWorld contributor Galen Gruman. In the latter case, Asay argued that Gruman rightfully criticizes many recent open source mobile failures. However, he fails to acknowledge that Android is, in fact, a mobile open source operating system and is also the most successful mobile OS in the world. Instead, Gruman maintained that because Android's development is primarily conducted by Google, rather than the open source community, it is somehow inherently distinct from true, traditional open source projects.

Asay explained that the reality of the situation is that the vast majority of open source projects currently take this form. OpenStack, Linux and many others originate with major companies before being offered to the open source community at large. The writer called the notion of open source projects springing forth from organic, communal, selfless developers a "mythical (and mostly false)" understanding.

Hardy, on the other hand, focused his critique on what he saw as the commercial failure of open source. Asay countered by noting that many major companies are now pulling in tremendous revenue thanks to open source software. The key point is that rather than trying to make open source directly profitable, firms are selling services and software that complement open source offerings.

The new standard
Perhaps most importantly, Asay argued that open source is, simply put, the standard method of software development now. As Mike Olson, co-founder of Cloudera, recently pointed out, companies really have no choice but to embrace open source.

"You can no longer win with a closed source platform, and you can't build a successful stand-alone company purely on open source," said Olson, the news source reported.

As a result, open source increasingly represents the standard for businesses, rather than a risky commercial venture.

Further evidence of this trend can be seen in the creation and funding of the Core Infrastructure Initiative. The CII was created to identify open source projects in need of funding to remain operational. As International Business Times contributor Joram Borenstein reported, this organization received a tremendous amount of funding from major tech-focused companies, including Google, Facebook and Microsoft. Borenstein explained that these organizations likely invested money into CII because they acknowledged the degree to which they depend on open source software. They have a financial interest in ensuring these projects remain secure and operational.

The commercialization of open source software should not be seen as controversial. On the contrary, it is well established and likely to grow further.

Categories: Companies
