
New Testing Tool Tutorials at uTest University

Fri, 10/24/2014 - 18:01

There are plenty of options when it comes to choosing your suite of testing tools. Some tools may excel at one specific task, while others perform at an average level for more than one testing task.

A few months ago, we launched the Tool Reviews section of our site to let members of the uTest community rate and review the best testing tools. The community has responded by singling out the most popular and highest-rated testing tools.

Over at uTest University, we’ve recently published new tutorials for some of the most requested tools in order to help testers set up these tools to use for testing. These tutorials are designed to be quick, easy to follow, and to get you up-and-running in no time.

Check My Links is a browser extension developed primarily for web designers, developers and content editors. The extension quickly finds all the links on a web page, and checks each one for you. It highlights which ones are valid and which ones are broken. You can learn how to set up and use Check My Links for testing using this new tutorial.
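To make the idea concrete, the core of any link checker can be sketched in a few lines: parse the page for anchor tags, then probe each href and classify it by status code. In this sketch the HTTP probe is a hypothetical `fetch_status` callable, stubbed here with canned responses instead of real network calls:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(html, fetch_status):
    """Classify each link as valid (status < 400) or broken.

    `fetch_status` is a placeholder callable returning an HTTP status
    code; a real checker would issue a HEAD/GET request per link.
    """
    parser = LinkExtractor()
    parser.feed(html)
    return {href: ("valid" if fetch_status(href) < 400 else "broken")
            for href in parser.links}

page = '<p><a href="/ok">fine</a> <a href="/gone">dead</a></p>'
statuses = {"/ok": 200, "/gone": 404}  # stubbed responses
print(check_links(page, lambda href: statuses[href]))
# → {'/ok': 'valid', '/gone': 'broken'}
```

The browser extension does the same thing at page scale, issuing the requests for you and highlighting the results inline.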

Firebug is a browser extension that allows you to edit, debug, and monitor CSS, HTML, and JavaScript live in any web page. Firebug is often reviewed as a “must-have” tool for both web development and web testing. Learn how to set up and use Firebug for testing using this new tutorial.

Mobizen is a tool that allows the mirroring and control of an Android device using your computer. This free tool features the ability to connect to the device using USB/Wifi/mobile network, screen mirroring with a high frame rate, and movie recording and capturing screenshots. Learn how to set up and use Mobizen for testing using this new tutorial.

liteCam HD is a computer screen recorder for Windows users that helps create professional-looking HD videos. Its simple interface makes recording quick and keeps complex settings to a minimum. Learn how to set up and use liteCam HD for testing using this new tutorial.

uTest University is free for all members of the uTest Community. We are constantly adding to our course catalog to keep you educated on the latest topics and trends. Have an idea for a new course or how-to tutorial? Submit your course idea today.

Categories: Companies

Four Reasons Software Testing Will Move Even Further Into the Wild by 2017

Thu, 10/23/2014 - 21:12

Ever since our inception, uTest and our colleagues within Applause have been huge proponents of what we like to call ‘In-the-Wild’ Testing.

Our community is made up of 150,000+ testers in 200 countries around the world, the largest of its kind, and our testers have already stretched the definition of what testing ‘in the wild’ can be, by testing countless customers’ apps on their own devices where they live, work and play.

That ‘play’ part of In-the-Wild testing is primed to take up a much larger slice of testers’ time. While we have already seen a taste of it with emerging technologies gradually being introduced into the mobile app mix, there are four major players primed to go mainstream in just a couple of short years. That means you can expect testers to be spending less time pushing buttons testing on mobile apps in their homes and offices…and more time ‘testing’ by jogging and buying socks. Here’s why.

Apple Pay

Google Wallet has been out for several years now, but it is widely expected by many (including this guy) that Apple Pay will be the technology that takes mobile payments to the mainstream with its ease-of-use and multiple layers of security.

Of course, it will take more of the little banks and retailers to be on-board for Apple Pay to spread like wildfire, but Apple is banking on an ‘if you build it, they will come’ strategy, and it already seems to be working. Case in point: My little, local credit union in Massachusetts — probably 1/25th the size of a Chase or Citibank — has already previewed that it’s working with Apple to bring Apple Pay to all of its members.

This is all well for consumers, but it provides even more of an opportunity for testers — there will be plenty of retailers lined up to make sure the functionality works with their environments, along with retailers needing testers to verify that any in-app functionality is sound when consumers use Apple Pay from the comfort of their own homes. Expect a lot of testers buying socks and sandwiches (not together in the same transaction) as part of their new “testing” routine in the coming months and years.

Smartwatches

While I have been in the camp of only wanting a smartwatch if it promises to produce lasers, I know that there are many out there that will be early adopters. And who can resist their stylin’ nature?

Once again, Apple has made smartwatches sleek and sexy with a variety of styles and accompanying straps on its soon-to-be-released Apple Watch. While the $349 price tag may cause sticker shock for many, one space where it is expected to take off is the enterprise, amongst executives and employees on the go.

And for testers, smartwatches will open up a whole new era and class of apps more pint-sized than ever…that you can bet will need lots of testing on proper screen rendering and functionality in those board meetings filled with execs.

Health & Fitness Wearables

With Google and Apple taking on this realm in their smartphones, and fitness-centric trackers from Nike, Fitbit and Jawbone in the form of armbands, the health and fitness wearable market has already seen widespread adoption.

From a tester standpoint, testing fitness devices may be the most ‘out there’ definition of in-the-wild testing. As health and fitness apps and armbands track fitness- and health-specific information such as number of steps taken, heart rate and calories burned, expect a lot more of testers’ routines including a 2-mile jog lumped in with their mobile testing.

Automobile In-dash Entertainment

From popular car manufacturers such as Ford, Toyota, BMW and Audi, to navigation services like TomTom and Garmin, in-dash entertainment and navigation systems have already taken off in the past year, and that trend is only expected to continue as these packages become standard in automobiles.

And this only opens up more doors for testers. We’ve all heard of texting while driving, but did law enforcement consider ‘testing’ while driving? Testing teams should consider safety first and buddy-up their testers when sending them out to drive for a “testing” assignment.

What do you think? Is the tester’s work environment going to be stretched even more into the wild in the next few years because of these emerging technologies? Are there others you would add to the list such as Google Glass? Will these technologies still just be a shadow in a tester’s daily testing routine? Let us know in the Comments now.

Categories: Companies

Authors in Testing Q&A: Dorothy Graham Talks ‘Experiences of Test Automation’

Wed, 10/22/2014 - 15:00

Dorothy (Dot) Graham has been in software testing for 40 years, and is co-author of four books, including two on test automation (with Mark Fewster).

She was programme chair for EuroSTAR twice and is a popular speaker at international conferences. Dot has been on the boards of publications, conferences and qualifications in software testing. She was awarded the European Excellence Award in Software Testing in 1999 and the first ISTQB Excellence Award in 2012. You can visit her at her website.

In this Q&A, uTest spoke with Dot about her experiences in automation, its misconceptions, and some of her favorite stories from her most recent book which she co-authored, ‘Experiences of Test Automation: Case Studies of Software Test Automation.’ Stay tuned at the end of the interview for chapter excerpt previews of the book, along with an exclusive discount code to purchase.

uTest: Could you tell us a little more about the path that brought you to automation?

Dorothy Graham: That’s easy – by accident! My first job was at Bell Labs and I was hired as a programmer (my degrees were in Maths, there weren’t many computer courses back in the 1970s). I was put into a testing team for a system that processed signals from hydrophones, and my job was to write test execution and comparison utilities (as they were called then, not tools).

My programs were written on punched cards in Fortran, and if we were lucky, we got more than one “turn-around” a day on the Univac 1108 mainframe (when the program was run and we got the results – sometimes “didn’t compile”). Things have certainly moved on a lot since then! However, I think I may have written one of the first “shelfware” tools, as I don’t think it was used again after I left (that taught me something about usability)!

uTest: There’s a lot of misconceptions out there amongst management that automation will be a cure-all to many things, including cost-cutting within testing teams. What is the biggest myth you’d want to dispel about test automation?

DG: The biggest misconception is that automated tests are the same as manual tests – they are not! Automated tests are programs that check something – the tool only runs what it has been programmed to run, and doesn’t do any thinking. This misconception leads to many mistakes in automation — for example, trying to automate all — and only — manual tests. Not all manual tests should be automated. See Mike Baxter et al’s chapter (25) in my Experiences book for a good checklist of what to automate.

This misconception also leads to the mistaken idea that tools replace testers (they don’t, they support testers!), not realizing that testing and automating require different skillsets, and not distinguishing good objectives for automation from objectives for testing (e.g. expecting automated regression tests to find lots of bugs). I could go on…

uTest: What are you looking for in an automation candidate that you wouldn’t be looking for in a QA or software tester?

DG: If you are looking for someone to design and construct the automation framework, then software design skills are a must, since the test execution tools are software programs. However, not everyone needs to have programming skills to use automation – every tester should be able to write and run automated tests, but they may need support from someone with those technical skills. But don’t expect a developer to necessarily be good at testing – testing skills are different than development skills.

uTest: You were the first Programme Chair for EuroSTAR, one of the biggest testing events in Europe, back in 1993, and repeated this in 2009. Could you talk about what that entailed and one of the most valuable things you gained out of EuroSTAR’s testing sessions or keynotes?

DG: My two experiences of being Programme Chair for EuroSTAR were very different! SQE in the US made it possible to take the major risk of putting on the very first testing conference in Europe, by financially underwriting the BCS SIGIST (Specialist Group In Software Testing). Organizing this in the days before email and the web was definitely a challenge!

In 2009, the EuroSTAR team, based in Galway, gave tremendous support; everything was organized so well. They were great in the major planning meeting with the Programme Committee, so we could concentrate on content, and they handled everything else. The worst part was having to say no to people who had submitted good abstracts!

I have heard many excellent keynotes and sessions over the years – it’s hard to choose. There are a couple that I found very valuable though: Lee Copeland’s talk on co-dependent behavior, and Isabel Evans’ talk about the parallels with horticulture. Interesting that they were both bringing insights into testing from outside of IT.

uTest: Your recent book deals with test automation actually at work in a wide variety of organizations and projects. Could you describe one of your favorite case studies of automation gone right (or wrong) from the book, and what you learned from the experience?

DG: Ah, that’s difficult – I have many favorites! Every case study in the book is a favorite in some way, and it was great to collect and critique the stories. The “Anecdotes” chapter contains lots of smaller stories, with many interesting and diverse lessons.

The most influential case study for me, which I didn’t realize at the time, was Seretta Gamba’s story of automating “through the back door.” When she read the rest of the book, she was inspired to put together the Test Automation Patterns, which we have now developed into a wiki. We hope this will continue to disseminate good advice about automation, and we are looking for more people to contribute their experiences of automation issues or using some of the patterns.

uTest has arranged for a special discount of 35% off the purchase of ‘Experiences of Test Automation: Case Studies of Software Test Automation’ here by entering the code SWTESTING at checkout (offer expires Dec. 31, 2014). 

Additionally, Dot has graciously provided the following exclusive chapter excerpts to preview: 

Categories: Companies

Latest Testing in the Pub Podcast: Part II of Software Testing Hiring and Careers

Tue, 10/21/2014 - 21:02

The latest Testing in the Pub podcast continues the discussion on what test managers need to look out for when recruiting testers, and what testers need to do when seeking out a new role in the testing industry.

There’s a lot of practical advice in this edition served over pints at the pub — from the perfect resume/CV length (one page is too short!) to a very candid discussion on questions that are pointless when gauging whether someone is the right fit for your testing team.

Part II of the two-part podcast is available right here for download and streaming, and is also available on YouTube and iTunes. Be sure to check out the entire back catalog of the series as well, and Stephen’s recent interview with uTest.

Categories: Companies

Open Source Load Testing Tools Comparison: Which One Should You Use?

Tue, 10/21/2014 - 18:04

This piece was originally published by our good friends at BlazeMeter – the Load Testing Cloud. Don’t forget to also check out all of the load testing tool options out there — and other testing tools — along with user-submitted reviews at our Tool Reviews section of the site.

Is your application, server or service fast enough? How do you know? Can you be 100% sure that your latest feature hasn’t triggered a performance degradation or memory leak?

The only way to be sure is by regularly checking the performance of your website or app. But which tool should you use for this?

In this article, I’m going to review the pros and cons of the most popular open source solutions for load and performance testing.

Chances are that most of you have already seen this page. It’s a great list of 53 of the most commonly used open source performance testing tools. However, some of these tools are limited to the HTTP protocol only, some haven’t been updated for years, and most aren’t flexible enough to provide parameterization, correlation, assertions and distributed testing capabilities.

Given the challenges that most of us are facing today, out of this list of 53, I would only consider using the following four:

  1. Grinder
  2. Gatling
  3. Tsung
  4. JMeter

So these are the four that I’m going to review here. In this article, I’ll cover the main features of each tool, show a simple load test scenario and an example of the reports. I’ve also put together a comparison matrix at the end of this report to help you decide which tool is best for your project ‘at a glance’.

The Test Scenario and Infrastructure

For the comparison demo, I’ll be using a simple HTTP GET request by 20 threads with 100,000 iterations. Each tool will send requests as fast as it can.
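As a sketch of what each tool is doing under the hood, the scenario boils down to a pool of worker threads hammering one request in a loop and timing each call. A minimal Python equivalent is shown below; the `request` callable is a stand-in for the real HTTP GET against the application under test, and the iteration count is scaled down from the article’s 100,000:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(request, threads=20, iterations=1000):
    """Fire `iterations` calls to `request` from `threads` workers as
    fast as they can, returning (per-request latencies, total seconds).

    `request` is a placeholder for the real HTTP GET against the
    application under test.
    """
    latencies = []
    def worker(_):
        start = time.perf_counter()
        request()  # e.g. an HTTP GET in a real run
        latencies.append(time.perf_counter() - start)

    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(worker, range(iterations)))
    return latencies, time.perf_counter() - t0

# Dry run with a no-op "request" just to show the shape of the results:
lat, total = run_load(lambda: None, threads=4, iterations=100)
print(len(lat), total > 0)
```

Each of the four tools wraps this basic loop in distributed agents, recorders and reporting; that machinery is exactly where they differ.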

The server (application under test) side:

CPU: 4x Xeon L5520 @ 2.27 GHz
RAM: 8 GB
OS: Windows Server 2008 R2 x64
Application Server: IIS 7.5.7600.16385

The client (load generator) side:

CPU: 4x Xeon L5520 @ 2.27 GHz
RAM: 4 GB
OS: Ubuntu Server 12.04 64-bit
Load Test Tools:
Grinder 3.11
Gatling 2.0.0.M3a
Tsung 1.51
JMeter 2.11

The Grinder

The Grinder is a free Java-based load testing framework available under a BSD-style open source license. It was developed by Paco Gomez and is maintained by Philip Aston. Over the years, the community has also contributed many improvements, fixes and translations.

The Grinder consists of two main parts:

  1. The Grinder Console – This is a GUI application which controls the various Grinder agents and monitors results in real time. The console can also be used as a basic IDE for editing or developing test suites.
  2. Grinder Agents – These are headless load generators; each can have a number of workers to create the load.

Key Features of the Grinder:

  1. TCP proxy – records network activity into the Grinder test script
  2. Distributed testing – can scale with the increasing number of agent instances
  3. Power of Python or Clojure combined with any Java API for test script creation or modification
  4. Flexible parameterization which includes creating test data on-the-fly and the capability to use external data sources like files, databases, etc.
  5. Post processing and assertion – full access to test results for correlation and content verification
  6. Support of multiple protocols

The Grinder Console Running a Sample Test


Grinder Test Results:



Gatling

The Gatling Project is another free and open source performance testing tool, primarily developed and maintained by Stephane Landelle. Like The Grinder, Gatling also has a basic GUI, though it is limited to a test recorder. However, tests can be developed in an easily readable and writable domain-specific language (DSL).

Key Features of Gatling:

  1. HTTP Recorder
  2. An expressive self-explanatory DSL for test development
  3. Scala-based
  4. Produces higher load by using an asynchronous non-blocking approach
  5. Full support of HTTP(S) protocols & can also be used for JDBC and JMS load testing
  6. Multiple input sources for data-driven tests
  7. Powerful and flexible validation and assertions system
  8. Comprehensive informative load reports

The Gatling Recorder Window:


An Example of a Gatling Report for a Load Scenario



Tsung

Tsung (previously known as IDX-Tsunami) is the only non-Java-based open source performance testing tool in today’s review. Tsung relies on Erlang, so you’ll need to have it installed (for Debian/Ubuntu, it’s as simple as “apt-get install erlang”). The development of Tsung was started in 2001 by Nicolas Niclausse, who originally implemented a distributed load testing solution for Jabber (XMPP). Several months later, support for more protocols was added, and in 2003 Tsung was able to perform HTTP protocol load testing.

It is currently a fully functional performance testing solution with the support of modern protocols like websocket, authentication systems, databases, etc.

Key Features of Tsung:

  1. Distributed by design
  2. High performance. The underlying Erlang architecture, built on lightweight processes, enables the simulation of thousands of virtual users on a mid-range developer machine
  3. Support of multiple protocols
  4. A test recorder which supports HTTP and Postgres
  5. OS monitoring. Both the load generator and application under the test operating system metrics can be collected via several protocols
  6. Dynamic scenarios and mixed behaviours. The flexible load scenarios definition mechanism allows for any number of load patterns to be combined in a single test
  7. Post processing and correlation
  8. External data sources for data driven testing
  9. Embedded easy-readable load reports which can be collected and visualized during load
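For a sense of what Tsung’s XML test language looks like, here is a minimal scenario along the lines of the demo above: one load-generating client, one HTTP server under test, and a session issuing GET requests. The hostnames, rates and DTD path are placeholders to adapt to your setup:

```xml
<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice">
  <!-- Load generator(s) -->
  <clients>
    <client host="localhost" use_controller_vm="true"/>
  </clients>
  <!-- Application under test -->
  <servers>
    <server host="myserver.example.com" port="80" type="tcp"/>
  </servers>
  <!-- Arrival rate of simulated users -->
  <load>
    <arrivalphase phase="1" duration="1" unit="minute">
      <users arrival_rate="20" unit="second"/>
    </arrivalphase>
  </load>
  <!-- What each simulated user does -->
  <sessions>
    <session name="http-get" probability="100" type="ts_http">
      <request><http url="/" method="GET" version="1.1"/></request>
    </session>
  </sessions>
</tsung>
```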

Tsung doesn’t provide a GUI for test development or execution, so you’ll have to live with the shell scripts, which are:

  1. tsung-recorder – a bash script wrapping a recording utility capable of capturing HTTP and Postgres requests and creating a Tsung config file from them
  2. tsung – the main bash control script to start/stop/debug and view the status of your test
  3. tsung_stats.pl – a Perl script to generate HTML statistical and graphical reports. It requires gnuplot and the Perl Template library to work. For Debian/Ubuntu, the commands are:
    –   apt-get install gnuplot
    –   apt-get install libtemplate-perl

The main tsung script invocation produces the following output:


Running the test:


Querying the current test status:


Generating the statistics report with graphs can be done via the tsung_stats.pl script:


Open report.html with your favorite browser to get the load report. A sample report for a demo scenario is provided below:

A Tsung Statistical Report


A Tsung Graphical Report


Apache JMeter

Apache JMeter is the only desktop application from today’s list. It has a user-friendly GUI, making test development and debugging processes much easier.

The earliest version of JMeter available for download is dated the 9th of March, 2001. Since that date, JMeter has been widely adopted and is now a popular open-source alternative to proprietary solutions like Silk Performer and LoadRunner. JMeter has a modular structure, in which the core is extended by plugins. This basically means that all the implemented protocols and features are plugins that have been developed by the Apache Software Foundation or online contributors.

Key Features of JMeter:

  1. Cross-platform. JMeter can be run on any operating system with Java
  2. Scalable. When you need to create a higher load than a single machine can create, JMeter can be executed in a distributed mode – meaning one master JMeter machine will control a number of remote hosts.
  3. Multi-protocol support. The following protocols are all supported ‘out-of-the-box’: HTTP, SMTP, POP3, LDAP, JDBC, FTP, JMS, SOAP, TCP
  4. Multiple implementations of pre- and post-processors around samplers. This provides advanced setup, teardown, parameterization and correlation capabilities
  5. Various assertions to define criteria
  6. Multiple built-in and external listeners to visualize and analyze performance test results
  7. Integration with major build and continuous integration systems – making JMeter performance tests part of the full software development life cycle

The JMeter Application With an Aggregated Report on the Load Scenario:


The Grinder, Gatling, Tsung & JMeter Put to the Test

Let’s compare the load test results of these tools with the following metrics:

  1. Average Response Time (ms)
  2. Average Throughput (requests/second)
  3. Total Test Execution Time (minutes)
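For reference, the first two metrics fall straight out of the raw samples each tool collects; a small sketch (assuming per-request response times recorded in milliseconds):

```python
def summarize(samples, total_seconds):
    """Compute the three comparison metrics from raw load-test samples.

    `samples` is a list of per-request response times in milliseconds;
    `total_seconds` is the wall-clock duration of the whole test run.
    """
    avg_response_ms = sum(samples) / len(samples)
    throughput_rps = len(samples) / total_seconds  # requests per second
    return {"avg_response_ms": avg_response_ms,
            "throughput_rps": throughput_rps,
            "total_minutes": total_seconds / 60}

print(summarize([10, 20, 30], total_seconds=120))
# → {'avg_response_ms': 20.0, 'throughput_rps': 0.025, 'total_minutes': 2.0}
```

The tools differ mainly in how they aggregate and present these same numbers, which is what the charts below compare.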

First, let’s look at the average response and total test execution times:


Now, let’s see the average throughput:


As you can see, JMeter has the fastest response times with the highest average throughput, followed by Tsung and Gatling. The Grinder has the slowest times with the lowest average throughput.

Features Comparison Table

And finally, here’s a comparison table of the key features offered to you by each testing tool:

Feature             | The Grinder          | Gatling            | Tsung                                    | JMeter
OS                  | Any                  | Any                | Linux/Unix                               | Any
GUI                 | Console only         | Recorder only      | No                                       | Full
Test Recorder       | TCP (including HTTP) | HTTP               | HTTP, Postgres                           | HTTP
Test Language       | Python, Clojure      | Scala              | XML                                      | XML
Extension Language  | Python, Clojure      | Scala              | Erlang                                   | Java, Beanshell, Javascript, Jexl
Load Reports        | Console              | HTML               | HTML                                     | CSV, XML, embedded tables, graphs, plugins
Protocols           | Multiple             | HTTP(S), JDBC, JMS | HTTP, XMPP, Postgres, websocket and more | HTTP, SMTP, POP3, LDAP, JDBC, FTP, JMS, SOAP, TCP
Host monitoring     | No                   | No                 | Yes                                      | Yes, with PerfMon plugin

Limitations:

The Grinder – Python knowledge required for test development and editing; reports are very plain and brief
Gatling – limited support of protocols; Scala-based DSL knowledge required; does not scale
Tsung – tested and supported only on Linux systems; bundled reporting isn’t easy to interpret

More About Each Testing Tool

Want to find out more about these tools? Log on to the websites below – or post a comment here and I’ll do my best to answer!

The Grinder –
Gatling –
Tsung –
JMeter – Home Page:
       –  JMeter Plugins:
       –  Blazemeter’s Plugin for JMeter:

On a Final Note…

I truly hope that you’ve found this comparison review useful and that it’s helped you decide which open source performance testing tool to opt for. Of all these tools, my personal recommendation is JMeter. It’s what I use myself, along with BlazeMeter’s Load Testing Cloud, because of its support for different JMeter versions, plugins and extensions.

Categories: Companies

Testing the Limits With Testing ‘Rock Star’ Michael Larsen — Part I

Mon, 10/20/2014 - 15:00

Michael Larsen is a software tester based out of San Francisco. Including a decade at Cisco in testing, he has had an extremely varied rock star career (quite literally…more on that later) touching upon several industries and technologies, including virtual machine software and video game development.

Michael is a member of the Board of Directors for the Association for Software Testing and a founding member of the “Americas” Chapter of “Weekend Testing.” He also blogs at TESTHEAD and can be reached on Twitter at @mkltesthead.

In Part I of our two-part Testing the Limits interview, we talk with Michael on the most rewarding parts of his career, and how most testers are unaware of a major “movement” around them.

uTest: This is your first time on Testing the Limits. Could you tell our testers a little bit about your path into testing?

Michael Larsen: My path to testing was pure serendipity. I initially had plans to become a rock star in my younger years. I sang with several San Francisco Bay Area bands during the mid-to-late 80s and early 90s. Not the most financially stable life, to say the least. While I was trying to keep my head above water, I went to a temp agency and asked if they could help me get a more stable “day job.” They sent me to Cisco Systems in 1991, right at the time that they were gearing up to launch for the stratosphere.

I was assigned to the Release Engineering group to help them with whatever I could, and in the process, I learned how to burn EEPROMs, run network cables, wire up and configure machines, and I became a lab administrator for the group. Since I had developed a good rapport with the team, I was hired full-time and worked as their lab administrator. I came to realize that Release Engineering was the software test team for Cisco, and over the next couple of years, they encouraged me to join their testing team. The rest, as they say, is history.

uTest: You also come from a varied tech career, working in areas including video game development and virtual machine software. Outside of testing, what has been the most rewarding “other” part of your career?

ML: I think having had the opportunity to work in a variety of industries and work on software teams that were wildly varied. I’ve had both positive and negative experiences that taught me a great deal about how to work with different segments of the software world. I’ve worn several hats over the years, including on-again, off-again stints doing technical support, training, systems and network administration, and even some programming projects I was responsible for delivering.

All of them were memorable, but if I had to pick the one unusual standout that will always bring a smile to my face, it was being asked to record the guide vocal for the Doobie Brothers song “China Grove,” which appeared on Karaoke Revolution, Volume 3 in 2004.

uTest: You are also a prolific blogger and podcast contributor. Why did you get into blogging and why is it an effective medium for getting across to testers?

ML: I started blogging before blogging was really a thing, depending on who you talk to. Back in the late 90s, as part of my personal website, I did a number of snowboard competition chronicles for several years called “The Geezer X Chronicles.” Each entry was a recap of the event, my take on my performance (or lack thereof) and interactions with a variety of the characters from the South Lake Tahoe area. Though I didn’t realize it at the time, I was actively blogging for those years.

In 2010, I decided that I had reached a point where I felt like I was on autopilot. I didn’t feel like I was learning or progressing, and it was having an effect on my day-to-day work. I had many areas of my life that I was passionate about (being a musician, being a competitive snowboarder, being a Boy Scout leader), but being a software tester was just “the day job that I did so I could do all the other things I loved.”

I decided I wanted to have that same sense of passion about my testing career, and I figured if my writing about snowboarding had connected me with such an amazing community, maybe writing about software testing would help me do the same. It has indeed done that — much more than I ever imagined it would. It also rekindled a passion and a joy for software testing that I had not felt in several years.

uTest: And your own blog is called ‘TESTHEAD.’ That sounds like a very scary John Carpenter movie.

ML: I’m happy it’s memorable! The term “test head” was something we used when I was at Cisco. The main hardware device in the middle that we’d do all the damage to was called the test head. I’ve always liked the idea of trying to be adaptable and letting myself be open to as many experiences and methods of testing as possible, even if the process wasn’t always comfortable. Because of that, I decided that TESTHEAD would be the best name for the blog.

uTest: As you know, James Bach offers free “coaching” to testers over Skype. You’re a founding member of the Americas chapter of “Weekend Testing,” learning sessions for testers in the Western Hemisphere. Does Weekend Testing run off of a similar concept?

ML: Weekend Testing is a real-time chat session with a number of software testers, so it’s more of a group interaction. James’ Skype coaching is one-on-one. It has some similarities. We approach a testing challenge, set up a mission and charters, and then we review our testing efforts and things we learn along the way — but we emphasize a learning objective up front so that multiple people can participate. We also time-box the sessions to two hours, whereas James will go as long as he and the person he is working with have energy to continue.

uTest: In the video interview you gave with us, you mentioned a key problem in testing is the de-emphasis of critical thinking as a whole in the industry. Are endeavors such as Weekend Testing more of a hard sell than they should be because of testers’ unwillingness to “grow?”

ML: I think we have been fortunate in that those that want to find us (Weekend Testing) do find us and enjoy the interactions they have. Having said that, I do think that there are a lot of software testers currently working in the industry that don’t even realize that there is a movement that is looking to develop and encourage practitioners to become “sapient testers” (to borrow a phrase from James Bach).

When I talk with testers that do understand the value of critical thinking, and that are actively engaged in trying to become better at their respective craft, I reluctantly realize that the community that actively strives to learn and improve is a very small percentage of the total number of software testing practitioners. I would love to see those numbers increase, of course.

Stay tuned for Part II of Michael Larsen’s Testing the Limits interview next Monday on the uTest Blog. Amongst other discussion topics, Michael will share why he believes “silence” is powerful on testing teams.

Categories: Companies

Mad Scientists Welcome at the STARWEST 2014 Test Lab

Fri, 10/17/2014 - 19:30

Testing is dull, boring, and repetitive.

Ever heard anyone say that? Well, at STARWEST 2014, the theme is Breaking Software (in the spirit of Breaking Bad), and this crowd is anything but dull! Creativity abounds at this conference, from the whimsical (yet impactful) session topics to the geek-chic booth themes (I do so love a good Star Wars parody!) to the on-site Test Lab run by what at first glance appears to be a crew of mad scientists. Boring or repetitive? I don’t think so!

Because the Test Lab was such a fun space, I interviewed one of the mad scientist/test lab rats, Paul Carvalho, to get the lowdown on what STARWEST 2014 attendees have been up to. Check out the video below for a tour of the STARWEST Test Lab, complete with singing computers, noisy chickens, talking clocks, and more!

You can learn more about Paul Carvalho – an IT industry veteran of more than 25 years – at Software Testing and Quality, where he is the principal consultant. You can also find him on LinkedIn here.

So what do you think about the STARWEST Test Lab? What would you try to break first? Let us know in the Comments below, and check out all of our coverage from STARWEST 2014.

Categories: Companies

STARWEST 2014 Interview: Mind Over Pixels — Quality Starts With the Right Attitude

Fri, 10/17/2014 - 17:10

How important is a tester’s mindset and attitude when it comes to testing?

I sat down with Stephen Vance, one of the STARWEST 2014 speakers, to chat about just that. As an Agile/Lean coach, Stephen is passionate about helping testers understand how to communicate with developers to better integrate into the software development process, and it all starts with the attitude you bring to the table.

Stephen teaches that investing in a “distinctly investigative, exploratory, hypothesis-driven mindset” is key to achieving process improvement at all levels of the software organization. He sees the value in the iterative approach that so well suits the skills testers bring to a collaboration, and encourages testers to be integral in more aspects of a project than just the black-and-white testing phases.

Stephen’s STARWEST 2014 session was called “Building Quality In: Adopting the Tester’s Mindset.” If you weren’t able to attend, check out my interview with him below to hear what else he had to say!

You can also read more about Stephen Vance on his website and connect with him on LinkedIn here.

What are some ways you think testers can use a hypothesis-driven, investigative approach to inject greater value into the software development life cycle? Feel free to sound off in the Comments below.

Categories: Companies

Top Tweets from STARWEST 2014

Thu, 10/16/2014 - 23:34

If you haven’t stopped by and seen us at the ol’ uTest booth, now’s the time! CM’s own Sue Brown is at the show along with the Applause crew.

But if you’re not there, have no fear, as Sue will be reporting back here on the uTest Blog with video interviews with testers and her own thoughts on the show. In the meantime, we have selected some of our favorite tweets from STARWEST as the show heads into its final stretch:

OH at #StarWest: "How do we do automation?" People: automation—a tool—isn't something you DO; it's something you USE. #testing #agile

— Michael Bolton (@michaelbolton) October 16, 2014

If you're not free to think, or learn, and adapt while you test, it's not exploration. - @jbtestpilot #starwest

— Ben Simo (@QualityFrog) October 14, 2014

The chickens are getting restless in @TheTestLab as #starwest winds down in the final hours..

— Paul Carvalho (@can_test) October 16, 2014

If your security testing is focused on the things that you secured, you’re going to miss all the things you didn’t think about. #starwest

— Kwality Rules (@KwalityRules) October 16, 2014

#STARWEST keynote @cheekytester asked the audience "who struggles with Test environments?" Almost every hand up. We need to fix this.

— Paul Carvalho (@can_test) October 15, 2014

Room set for #starwest Going to be huge audience. 3 screens needed !

— Alison Wade (@awadesqe) October 15, 2014

"Test the software with the minimum number of tests"… hmmm let's ignore that request and start testing #starwest

— Andy Glover (@cartoontester) October 14, 2014

Fedoras, beers, #APIs, #SoftwareTesting, you name it. #STARWEST always impresses, and this year is no different.

— Ready! API (@ready_api) October 16, 2014

Computer System Innovation crew likes glasses and mustaches #STARWEST #stachecam

— Yvonne Johns (@yvjohns) October 16, 2014

#starwest Ben Simo's presentation on part tester, part hacker, all awesome

— Daniel Hill (@RenjyaaDan) October 16, 2014

If your ex can answer the security question- it's a bad question @QualityFrog #healthcare #userexperience #starwest

— StickyMinds (@StickyMinds) October 16, 2014

Agile is about the commitment of the full team, not individual teams by @bobgalen #StarWest #HPsoftwarealm

— silvia siqueira (@silvia_ITM) October 16, 2014

#agile «@jefferyepayne Attack of the killer flip charts. Help! I only asked for 1 but they are swarming #starwest»

— erik petersen (@erik_petersen) October 14, 2014

empower All developers to Test for success, test early test continuously #StarWest #HPsoftwarealm #blizzard

— silvia siqueira (@silvia_ITM) October 16, 2014

Never substitute tools for communication. @Jeanne_Schmidt #starwest

— Kwality Rules (@KwalityRules) October 15, 2014

If you are not having fun testing something is wrong @cheekytester #starwest

— Gitte Ottosen (@Godtesen) October 15, 2014


To see what other events are upcoming in the software testing world, make sure to check out our brand-spankin’ new Events Calendar.

Categories: Companies

Dynamic Testing According to ISO 29119: The Subject of Software Testing Book Excerpt

Wed, 10/15/2014 - 19:00

As testers, you know that software testing is a critical aspect of the software development process. A new book aims to offer a practical understanding of all the most critical software testing topics and their relationships and interdependencies.

The Guide to Advanced Software Testing (second edition) by Anne Mette Hass, published by Artech House, offers a clear overview of software testing, from the definition of testing and the value and purpose of testing, through the complete testing process with all its activities, techniques and documentation, to the softer aspects of people and teams working with testing.

Practitioners will find numerous examples and exercises presented in each chapter to help ensure a complete understanding of the material. The book supports the ISTQB certification and provides a bridge from this to the ISO 29119 software testing standard in terms of extensive mappings between the two.

The full version of the book is available for £75 (USD $119) from Artech House, but testers will be able to receive an exclusive 20% discount off that list price, plus free shipping, by using promo code EUROSTAR14 at checkout, valid through December 31, 2014.

In the meantime, you can check out our exclusive chapter excerpt right here. This specific sample provided by Artech House clocks in at a generous 30 pages, and its subject matter should be quite familiar to many testers, covering the recent, controversial ISO 29119 testing standard and its associated dynamic testing process.

Categories: Companies

The Ins and Outs of Writing an Effective Mobile Bug Report (Part II)

Wed, 10/15/2014 - 15:30

Be sure to check out Part I of Daniel Knott’s article on effective mobile bug reports for further context before continuing on.

Here’s the rest of the information you should plan on including in every bug report.

Network Condition and Environment

When filing a mobile bug, it’s important to provide some information about the network condition and the environment in which the bug occurred. This will help to identify the problem more easily and will possibly show some side effects no one has thought of.

  • Bad: “No information” or “Happened on my way to work”
  • Good: “I was connected to a 3G network while I was walking through the city center.”


Language

If your app supports several languages, provide this information in your bug report.

  • Bad: “No information”
  • Good: “I was using the German language version of the app.”

Test Data

This information can already be provided in the steps taken to reproduce, but test data you need to reproduce the bug may be more complex, so it makes sense to provide this information in a separate section. Provide SQL dumps, scripts or the exact data you entered in the input fields.

  • Bad: “No information”
  • Good: “Find the attached SQL script to put the database in the defined state” or “Enter ‘Mobile Testing’ into the search input field.”


Severity

Every bug you find needs a severity level. Either your defect management tool will offer you some categories, or you will have to define them with your team. It is important to give a bug a severity level, as it allows the team to prioritize their bug-fixing time so that critical and high-priority bugs are fixed first. If this information is not provided, it takes much more time to find the right bugs to fix before the release. The default severities are: Critical, High, Medium and Low.

  • Bad: “No information”
  • Good: “Critical” or “Medium”
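As a sketch of why the severity field matters, a team (or a small script around a defect tracker export) can sort the backlog so critical and high-severity bugs surface first. The IDs and data here are purely illustrative:

```python
# Rank bugs so critical and high-severity ones are fixed first,
# using the default severity levels named above.
SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def prioritize(bugs):
    """Sort (bug_id, severity) pairs by descending severity."""
    return sorted(bugs, key=lambda bug: SEVERITY_ORDER[bug[1]])

backlog = [("AppXYZ-7", "Low"), ("AppXYZ-3", "Critical"), ("AppXYZ-5", "Medium")]
print(prioritize(backlog))
# → [('AppXYZ-3', 'Critical'), ('AppXYZ-5', 'Medium'), ('AppXYZ-7', 'Low')]
```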

Bug Category

Besides the severity level, the bug category is also a very useful piece of information. The product owner or the developer can filter by category to get an overview of the current status of bugs per category. For example, if there are lots of UX bugs, this may be an indicator of poor UI and UX or a missing design expert in the team, meaning that the app needs design improvements.

  • Bad: “No information”
  • Good: “Functionality” or “UX” or “Performance”

Screenshot or Video

Whenever you find a bug, try to create screenshots or a video to provide the developer with more information. When providing a screenshot, use an image editing tool to mark the bug in the screenshot (Jing, for instance). A video is also a great way to describe a bug you’ve come across. It is also very useful to give the screenshot or the video a good name or description.

  • Bad: “No screenshots or videos attached” or “Screenshot1.png”
  • Good: “01_InsertSearchTerm.png, 02_SearchResultPageWithError.png”

Log Files

If your app crashes or freezes, connect the device to your computer and read out the log files. In most cases, a stack trace will be shown with a description of the error. This kind of information is extremely useful for developers as they know right away in which class the bug or the error has occurred.

  • Bad: “No information provided when the app crashed.”
  • Good: “Provide the full stack trace in the bug report” or “Attached the log file to the report.”
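As an illustration of what pulling a stack trace out of a log might look like on Android, here is a sketch that extracts the crash lines from saved logcat output. The sample log and the filtering logic are illustrative, assuming logcat’s brief “LEVEL/TAG: message” format:

```python
def extract_crash_lines(logcat_text):
    """Return the lines belonging to a fatal-exception stack trace
    from raw logcat output (brief format: 'LEVEL/TAG: message')."""
    crash_lines = []
    in_crash = False
    for line in logcat_text.splitlines():
        if "FATAL EXCEPTION" in line:
            in_crash = True
        if in_crash and line.startswith("E/"):
            crash_lines.append(line)
    return crash_lines

sample = """\
I/SearchActivity: query submitted
E/AndroidRuntime: FATAL EXCEPTION: main
E/AndroidRuntime: java.lang.NullPointerException
E/AndroidRuntime:     at com.example.app.Search.run(Search.java:42)
I/ActivityManager: process died
"""
for line in extract_crash_lines(sample):
    print(line)  # prints the three E/AndroidRuntime crash lines
```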

Tester Who Found the Bug

Write down your name or the name of the tester who found the bug. Developers or product owners may have some questions about the reported bug and they would of course like to directly get in touch with the tester who found the issue. In most cases, this is automatically done by the defect management system where each user has his or her own account. If not, make sure you add your e-mail address and/or phone number.

  • Bad: “No information”
  • Good: “Daniel Knott,”

Other Things to Remember When Writing Bug Reports

As you have seen, there is a lot of information that should be included in a bug report. There are three other points you should keep in mind when writing them.

Don’t get personal. When filing a bug report, describe the software misbehavior rather than the developer’s mindset or the quality of his or her work. Don’t use offensive or emotionally charged words as those kinds of bugs will be ignored by the developer…and you’ll end up with bad blood within the team.

It’s not you. It’s not your fault that the bug occurred. It is the software that’s broken and you and your colleagues need to fix it.

Keep it simple. Try to write your bug report in such a way that someone with no idea about the project or the app is able to understand the problem. If the bug report is that easy, every developer within the team will be able to fix it and non-technical colleagues can understand the problem and will value your work.

If you want to read more about mobile testing, my book Hands-On Mobile App Testing covers this in depth.

Daniel Knott has been in software development and testing since 2008, working for companies including IBM, Accenture, XING and AOE. He is currently a Software Test Manager at AOE GmbH, where he is responsible for test management and automation in mobile and Web projects. He is also a frequent speaker at various Agile conferences, and has just released his book, Hands-On Mobile App Testing. You can find him over at his blog or on Twitter @dnlkntt.

Categories: Companies

The Ins and Outs of Writing an Effective Mobile Bug Report (Part I)

Tue, 10/14/2014 - 19:05

If you find a bug within a mobile app, you need to report it in order to get it fixed. Filing mobile bug reports requires some additional information that the developers need in order to reproduce and fix the bug.

But what is important when filing a mobile bug? What should a bug report look like? Before I answer those two questions, I want to raise another one: “Why even send a bug report?”

Bug reports are very important for the product owner, product manager and the developers. Firstly, a bug report tells the developers and the product owner about issues they were not aware of. Reports also help identify possible new features no one has thought of, and, last but not least, they provide useful information about how a customer may use the software. All of this information can be used to improve the software.

Whenever you find something strange or if something behaves differently or looks weird, don’t hesitate to file a bug report.

Now onto the question of what a bug report should look like and what’s important when filing it. It should contain as much information as possible in order to identify, reproduce and fix the bug. Having said that, your report should only include information that’s relevant to handling the bug, so avoid adding anything useless. Additionally, describe only one error per bug report. Don’t combine, group or create containers for bugs: it’s likely that not all of them will be fixed at the same time.

Here’s the information you should plan on including in every bug report.

Bug ID

A bug must have a unique identifier, like a number or a combination of characters and numbers. If you’re using a defect management tool, the tool will handle the bug IDs for you. If not, think about a unique ID system for your project.

  • Bad: 123 is a unique ID, but you might have several projects where the ID is the same.
  • Good: AppXYZ-123 is good because you’re combining an ID with a project abbreviation and a number.
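The “good” scheme above can be sketched as a tiny generator that pairs a project abbreviation with an incrementing number. The names here are illustrative, not from any particular defect tracker:

```python
import itertools

def id_generator(project_abbrev):
    """Yield unique bug IDs like 'AppXYZ-1', 'AppXYZ-2', ..."""
    for n in itertools.count(1):
        yield f"{project_abbrev}-{n}"

ids = id_generator("AppXYZ")
print(next(ids))  # AppXYZ-1
print(next(ids))  # AppXYZ-2
```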


Bug Description

Create a short but meaningful description in order to provide the developer with a quick overview of what went wrong without going into detail. You should, for example, include error codes or the part of the application where the bug occurred.

  • Bad: “The app crashed,” “White page,” “Saw an error,” “Bug”
  • Good: “Error Code 542 on detail message view,” “Timeout, when sending a search request.”

Steps to Reproduce

This is one of the most important points. Provide the exact steps together with the input data on how to reproduce the bug. If you are able to provide this kind of information, the bug will be very easy to fix in most cases.

  • Bad: “I tried to execute a search.”
  • Good: “Start the app and enter ‘Mobile Testing’ into the search input field. Press the search button and you’ll see the error code 783 on the search result page header.”

Expected Result

In this section, you should describe what you expected to happen when the bug occurred.

  • Bad: “It should work,” “I didn’t expect it to crash.”
  • Good: “I expected to see a search results page with a scrollable list of 20 entries.”

Actual Result

What happened when the bug occurred? Write down the actual result — what went wrong or the error that was returned.

  • Bad: “It just won’t work.”
  • Good: “The search results page was empty” or “I got the error code 567 on the search result page.”


Workaround

If you’ve found a way to continue using the app by avoiding the bug, explain your steps. Those steps are important to know, since the workaround could cause other problems or indicate a way in which the app should not be used. On the other hand, a workaround can be very useful for the customer support team in order to help customers get past the problem until the bug gets fixed.

  • Bad: “I found a workaround.”
  • Good: “If you put the device into landscape mode, the search button is enabled and the user can search again.”


Reproducibility

If you found a reproducible bug, that’s fine, but does it occur every time? If it happens every time, great, as this should be an easy fix for the developer. But if the bug only occurs 20 percent of the time, for instance, it is much harder to find a solution. Make sure you provide this information, as it is very useful for the developer and will prevent the bug from being closed with the comment “can’t be reproduced.”

  • Bad: “Sometimes”
  • Good: “The bug occurs 2 out of 10 times.”

Operating System, Mobile Platform and Mobile Device

The same applies to the operating system, the mobile platform and the mobile device. Write down the operating system, mobile platform and device on which the bug occurred.

  • Bad: “On Android” or “On iOS”
  • Good: “Android, Version 4.1.2, Google Nexus 4” or “iOS, Version 6.1, iPhone 4S”

Mobile Device-Specific Information

Mobile devices have lots of interfaces and sensors that could have an impact on your app. The battery could also affect the app you’re testing. Write down all of this information in your bug report.

  • Bad: “No information”
  • Good: “GPS sensor activated, changed the orientation from landscape to portrait mode” or “Used the device in a sunny place” or “Battery state was 15%” or “Battery state was 100%.”

Browser Version

If your app is a mobile web app and you found an issue, it’s very important to note down the browser version where you found the bug, as it may only occur in certain versions.

  • Bad: “Google Chrome” or “Mozilla Firefox”
  • Good: “Google Chrome Version 45.35626” or “Mozilla Firefox 27.6.”

Software Build Version

Another really useful piece of information is the current build version of the app where the bug occurred. Maybe you found the issue in version 1.2, but there is already a newer version available where the bug has been fixed. This will prevent the developer from wasting time by trying to reproduce a bug that’s already been fixed.

  • Bad: “No information”
  • Good: “App build version 1.2.3”
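Pulled together, the fields covered in this article could be recorded as a simple structured template. This is a hypothetical sketch, not the format of any particular defect management tool:

```python
from dataclasses import dataclass, field

@dataclass
class MobileBugReport:
    """The fields of a mobile bug report from this article, as one record."""
    bug_id: str                      # e.g. "AppXYZ-123"
    description: str                 # short, meaningful summary
    steps_to_reproduce: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    workaround: str = ""
    reproducibility: str = ""        # e.g. "2 out of 10 times"
    platform: str = ""               # OS, version and device
    device_details: str = ""         # sensors, battery, orientation
    browser_version: str = ""        # only for mobile web apps
    build_version: str = ""          # e.g. "1.2.3"

report = MobileBugReport(
    bug_id="AppXYZ-123",
    description="Error Code 542 on detail message view",
    steps_to_reproduce=["Start the app",
                        "Enter 'Mobile Testing' into the search field",
                        "Press the search button"],
    expected_result="Search results page with 20 entries",
    actual_result="Error code 783 on the result page header",
    platform="Android 4.1.2, Google Nexus 4",
    build_version="1.2.3",
)
print(report.bug_id)  # AppXYZ-123
```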

Check out Part II of this article right here.

Daniel Knott has been in software development and testing since 2008, working for companies including IBM, Accenture, XING and AOE. He is currently a Software Test Manager at AOE GmbH, where he is responsible for test management and automation in mobile and Web projects. He is also a frequent speaker at various Agile conferences, and has just released his book, Hands-On Mobile App Testing. You can find him over at his blog or on Twitter @dnlkntt.

Categories: Companies

My Weekend with the Goat Simulator App

Mon, 10/13/2014 - 21:18

We often talk about the newest and hottest mobile apps at the uTest Community Management desk. Recently, I was curious whether I was missing out on any top apps that I didn’t already have on my Samsung Galaxy S4. I am surrounded by a sea of iPhone users, so I am used to not getting in on the latest apps until (much, much) later. Of course, I have the requisite social media, weather, and news apps installed, but what is really hot in the Android app market these days? I checked out the top paid apps in the Google Play store and, to my surprise, the one odd app that stuck out was Goat Simulator, at #9 on the Top 10 list.

Per the app’s description: “Gameplay-wise, Goat Simulator is all about causing as much destruction as you possibly can as a goat. It has been compared to an old-school skating game, except instead of being a skater, you’re a goat, and instead of doing tricks, you wreck stuff. When it comes to goats, not even the sky is the limit, as you can probably just bug through it and crash the game. Disclaimer: Goat Simulator is a completely stupid game and, to be honest, you should probably spend your money on something else, such as a hula hoop, a pile of bricks, or maybe pool your money together with your friends and buy a real goat.”

I appreciate the developer’s humor, especially since they list the top key feature as “you can be a goat.” I can’t say I’ve always dreamed of being a goat, but here was my shot. Ryan, our beloved blog and forums guru, practically ordered me to buy the app (cost: $4.99), play it over the weekend, and report back on Monday. A check of the app on Applause Analytics showed a satisfaction score of 77 and noted that it is the app’s strongest attribute. Okay, game on.

The Goat Simulator app played as expected. You are a first-person goat whose job is to run people down, kick objects across long distances, and generally be a menace to society. (If we’ve ever met, then you know I am capable of such things – no app needed.) I was waiting to encounter the supposed millions of bugs that the developer mentions but, sadly, I did not. I stopped playing the game on Saturday when I realized I had given one too many hours of my life to being a virtual goat, and that it was time to take a shower and rejoin civilization.

However, I was still wondering: What odd, strange, or unique apps do you have installed on your phone? And what’s the oddest app that you’ve paid for? Let’s chat about it in the forums.

Happy goating!

Categories: Companies

Software Testing Budgets on the Rise, Focused on the ‘New IT’

Mon, 10/13/2014 - 15:30

Software testing and QA budgets keep going up, and shiny new toys are the focus of the spending.

According to a ZDNet report based on a new survey of 1,543 CIOs, conducted and published by Capgemini and HP, “for the first time, most IT testing and QA dollars are now being spent on new stuff, such as social, mobile, analytics, cloud and the Internet of Things, and less of it on simply modernizing and maintaining legacy systems and applications.”

In fact, this “new IT” is making up 52 percent of the testing budgets, up from 41 percent in 2012. And it’s just part of a trend of rising testing budgets in general, hopefully good news for testers — testing now represents 26 percent of total IT budgets on average, up from 18 percent in 2012, and projected to rise to 29 percent by 2017.

What testing teams are doing with this extra budget is a whole other story, however. It remains to be seen whether the added money is a good thing, and whether it will provide a much-needed boost to teams strapped by a lack of time and misdirected efforts.

Do these trends look familiar within your own organization? We’d love to hear from you in the Comments below.

Categories: Companies

uTest to Provide Coverage Next Week from STARWEST in Anaheim

Fri, 10/10/2014 - 20:05

Headed to STARWEST in Anaheim, CA, next week? uTest will be there for the final three days of the esteemed, nearly week-long testing event of the fall. We’ll also be live tweeting and interviewing conference attendees all week.

In a clever spin on the hit TV show Breaking Bad, the 2014 theme is Breaking Software. The conference is billed as the premier event for software testers and quality assurance professionals—covering all your testing needs with 100+ learning and networking opportunities including: keynotes featuring recognized thought-leaders, in-depth half- and full-day tutorials, and conference sessions covering major testing issues and solutions.

Some of this year’s keynotes include The Power of an Individual Tester: The Experience, by Ben Simo of eBay. If you weren’t there for his similar session at CAST 2014 in NYC in August, you’re in for a treat — this was a can’t-miss keynote.

So if you’re wandering the halls of STARWEST next week in between all of these great keynotes and sessions, be sure to stop by and see us at the Applause booth (#5) on Wednesday and Thursday, as representatives from both uTest and Applause will be in attendance! You may even get some nice uTest and/or Applause goodies, too.

For those unable to make it in person, have no fear, as we will be video interviewing attendees on the spot here on the uTest Blog, and you’ll be able to follow along @uTest on Twitter for all of the coverage live from Cali.

Categories: Companies

‘Tis the Season for uTest Community Contests

Thu, 10/09/2014 - 22:18

If you haven’t noticed, we’ve gone a little crazy over tester recognition at uTest lately. We closed out the summer by crowning the victors of our epic Bug Battle and Ideal Tool contests. Then we made a nice new home for our Testers of the Quarter and our uTesters of the Year in our Hall of Fame.

To keep the excitement going, we thought we’d bring some cash and prizes to help highlight two other areas of software testing: your testing workspace and your favorite testing tools. We have two contests running in the uTest Forums right now:

Your Testing Workspace

What does your sanctuary of testing bliss say about you? This month’s uTest Community contest asks you to take a step back and examine this question – and take a picture of this testing workspace in the process!

uTesters will be able to submit their best photos in one of two categories: Best Testing Workspace or Best Desktop Trinket. For the first category, maybe your workspace has a wickedly cool setup or there’s just something unique about it. For your best desktop trinket, perhaps there’s one item that was a gift from a dev that will just make everyone laugh. Whatever these images are, we wanna see ‘em! One tester will also be selected at random out of all entrants for a uTest t-shirt as well, so don’t be shy about participating!

For more information, read the forums thread (login required). 

Chrome/Firefox Testing Add-on Tutorial

Browser add-ons are handy tools to use when testing, but it’s not always clear how best to set them up for testing. We are running a quick contest with a $200 cash prize for the best workflow and easiest-to-follow tutorial for your favorite browser add-on. Check out our Tool Reviews library for Chrome tools and Firefox tools, but you are not limited to just these examples.

For more information, read the forums thread (login required).

So what are you waiting for? Sign up for a uTest account (if you don’t already have one) and get those ideas flowing for your submissions.

Categories: Companies

Don’t Say That: Five of the Most Disliked Software Testing Terms

Thu, 10/09/2014 - 20:15

When you say that, you just sound like a jerk.

Or, at the very least, like you don’t completely know what you’re talking about. There are many words and phrases used within the software testing realm that have caused much anguish amongst testers, either because the terms are so vastly overused or because they are grossly inaccurate in how they are used.

In the past on the uTest Blog, we’ve covered software testing buzzwords, but a tester in our community recently took it a step further in our Forums, coming up with terminology that has caused such unrest beyond the normal annoyances of buzzwords. Here are some of the highlights from the discussion, in the words of our testers:

  • Manual testing: It’s one of the most hated terms in the industry. A manual tester, manual project manager and a manual network admin walk into a bar…
  • Glitch: A bug is not a glitch. A glitch is something transient, undefinable and suddenly appears. I search for bugs — repeatable faults in the code that can be triggered following a certain set of actions.
  • Manual testing vs. Automated testing: As soon as this term is raised, it becomes a kind of a war. This is a never-ending debate.
  • Validate vs. Verify: I will sometimes say verify when I should say validate just to annoy testers. I think the value of these terms does not outweigh the pain and suffering I get trying to get people to use the right term, so I quit worrying about it.
  • Test vs. Check: Honestly, does anyone outside the test community care? Testers are constantly getting into discussions about how these terms should be used. That’s a waste of my time. Every engineering person I’ve ever talked to immediately knows what regression testing is. No real need to explain it.

What would you add to the list? We’d love to hear in the Comments.

Categories: Companies

iOS Log Capture Tool Showdown: iPhone Configuration Utility vs. iTools

Wed, 10/08/2014 - 20:07

When capturing system information critical for bug reports and reproducing bugs in action, iPhone Configuration Utility is often used as the default tool for capturing iOS logs. But is the standard the best option out there for testers?

In this week’s Testing Tool Showdown, we’ve pitted the iPhone Configuration Utility against iTools to see which has garnered more support. The former is by far the standard testers within the uTest Community use for log capture, and has earned a five-star average review in our Tool Reviews. Here’s some of what our users have to say:

  • IPCU is handy for installing apps that iTunes has issues with. The console log alone is also a quick and easy way to get the logs you need.
  • I have used this tool many times when needing to grab a console log, and overall I have found it works well.
  • Lightweight and useful. Hard to beat.

On the other side of the ring in this showdown is iTools. This one has the distinction of being used primarily by our testers for screen recording and screenshots, but log capture capabilities are also part of its arsenal. According to our testers, both in our Forums and in the Tool Reviews:

  • What makes me use this tool? When I updated my iPad Mini 2 to iOS 8, I just could not see logs on iPhone Configuration Utility anymore. Very good and useful tool.
  • I use this mainly for recording desktop, but also to get the logs, which are not always synchronized with iTunes!

So while you’ll see most documentation out there referring to iPhone Configuration Utility as the de facto log capture tool for the iPhone (heck, we even have a course on the tool at uTest University), there are alternatives such as iTools that are just as useful for log capture.

Who would win in this testing tool showdown? Are there other Swiss Army/catch-all tools you prefer over the Configuration Utility that get the job done along with many other vital testing functions? Let us know in the Comments below.


Categories: Companies

A Bittersweet Transition from the uTest Community

Tue, 10/07/2014 - 19:10

It’s with a heavy heart that I announce that after 5.5+ years of dedication to the uTest Community, I will be transitioning to a different role within the company.

This move is a great opportunity, as I will now be tasked with helping prospective customers understand the value that the uTest Community brings to their software lifecycle. And if I succeed in this new role, greater opportunities will be unlocked for you.

So what does this mean for you? Quite frankly, not much, besides the fact that I will miss the frequent interactions with many of our top testers. The CM Team is still the same CM Team that you love and is there to help you succeed. Additionally, the vision that we’ve set out for the uTest brand will live on and continue to expand, which includes launching new programs that push learning and mentoring, empowering our top testers to influence the greater community, and working with our engineering team to deliver platform features that enable you to succeed on paid projects.

Finally, since I won’t be going far, I’ll still be indirectly involved with the CM team and ongoing programs. I will also be poking my head into the uTest Forums from time to time, and I look forward to hearing about major milestones and achievements from our most influential testers!

Categories: Companies

Tickets for UK’s TestBash 2015 Now on Sale

Mon, 10/06/2014 - 19:57

Tickets for one of the biggest testing events of the year in the United Kingdom, TestBash, are now on sale for the 2015 edition.

According to the event’s host, Ministry of Testing: “TestBash is the leading software testing conference within the UK. We invite testers who we believe are the best and most interesting people to talk about all sorts of crazy things related to software testing. Our mission is to inspire a generation of testers to learn more about their craft and create professional friendships that create a long-lasting support network.”

Some of the sessions for TestBash 2015, taking place March 26-27, include:

  • Why I Lost My Job As a Test Manager and What I Learnt As a Result – Stephen Janaway
  • The Rapid Software Testing Guide to What You Meant To Say – Michael Bolton
  • What’s In a Name? Experimenting With Testing Job Titles – Martin Hynie
  • Automation in Testing – Richard Bradshaw
  • The Art of Asking Questions – Karen Johnson
  • Bug Detection – Iain McCowatt

2015 is also the first edition to feature a workshop day before the day of speakers, with nine hands-on sessions ranging from two to four hours apiece. You can purchase tickets for the UK event now, and find a complete list of currently confirmed sessions, right here. With a conference name like that and a who’s who of testing personalities, the 2015 edition is bound to be a great and worthwhile time!

Be sure to also check out all of the software testing events on tap for the months ahead at our uTest Events Calendar.

Categories: Companies