Two weeks ago, I gave a talk at CAST 2014 (the conference of the Association for Software Testing) in New York, titled “Standards: Promoting quality or restricting competition?”
It was mainly about the new ISO 29119 software testing standard (according to ISO, “an internationally agreed set of standards for software testing that can be used within any software development life cycle or organization”), though I also wove in arguments about ISTQB certification.
My argument was based on an economic analysis of how ISO (the International Organization for Standardization) has gone about developing and promoting the standard. ISO’s behavior is consistent with the economic concept of rent seeking. This is where factions use power and influence to acquire wealth by taking it from others — rigging the market — rather than by creating new wealth.
I argued that ISO has not achieved consensus from the whole testing profession, nor has it even attempted to gain it. Those who disagree with the need for ISO 29119 and its underlying approach have been ignored; the opponents have been defined as irrelevant.
If ISO 29119 were expanding the market, and if it merely provided another alternative — a fresh option for testers, their employers and the buyers of testing services — then there could be little objection to it. However, it is being pushed as the responsible, professional way to test — it is an ISO standard, and therefore, by implication, the only responsible and professional way.
What is Wrong With ISO 29119?
Well, it embodies a dated, flawed and discredited approach to testing. It requires a commitment to heavy, advanced documentation. In practice, this documentation effort is largely wasted and serves as a distraction from useful preparation for testing.
Such an approach blithely ignores developments in both testing and management thinking over the last couple of decades. ISO 29119 attempts to update a mid-20th century worldview by smothering it in a veneer of 21st century terminology. It pays lip service to iteration, context and Agile, but the beast beneath is unchanged.
The danger is that buyers and lawyers will insist on compliance as a contractual requirement. Companies that would otherwise have ignored the standard will feel compelled to comply in order to win business. If the contract requires compliance, then the whole development process could be shaped by a damaging testing standard. ISO 29119 could affect anyone involved in software development, and not just testers.
Testing will be forced down to a common, low standard, a service that can be easily bought and sold as a commodity. It will be low quality, low status work. Good testers will continue to do excellent testing. But it will be non-compliant, and the testers who insist on doing the best work that they can will be excluded from many companies and many opportunities. Poor testers who are content to follow a dysfunctional standard and unhelpful processes will have better career opportunities. That is a deeply worrying vision of the future for testing.
I was astonished at the response to my talk. I was hoping that it would provoke some interest and discussion. It certainly did that, but it was immediately clear that there was a mood for action. Two petitions were launched. One was targeted at ISO to call for the withdrawal of ISO 29119 on the grounds that it lacked consensus. This was launched by the International Society for Software Testing.
The other petition was a more general manifesto that Karen Johnson organized for professional testers to sign. It allows testers to register their opposition to ISTQB certification and to attempts to standardize testing.
A group of us also started to set up a special interest group within the Association for Software Testing so that we could review the standard, monitor progress, raise awareness and campaign.
Since CAST 2014, there has been a blizzard of activity on social media that has caught the attention of many serious commentators on testing. Nobody pretends that a flurry of Tweets will change the world and persuade ISO to change course. However, this publicity will alert people to the dangers of ISO 29119 and, I hope, persuade them to join the campaign.
This is not a problem that testers can simply ignore in the hope that it will go away. It is important that everyone who will be affected knows about the problem and speaks out. We must ensure that the rest of the world understands that ISO is not speaking for the whole testing profession, and that ISO 29119 does not enjoy the support of the profession.
James Christie has 30 years experience in IT, covering testing, development, IT auditing, information security management and project management. He is now a self-employed testing consultant, based in Scotland. You can learn more about James and his work over at his blog, and follow him on Twitter @james_christie.
This story was originally published on the Applause App Quality Blog by Dan Rowinski.
Sometimes you can’t find the app you are looking for.
Somewhere in Apple’s App Store is the perfect app you are seeking. With 1.2 million apps, it has to be in there, right? It may be a new calendar app that syncs your iCal, Google Calendar and Outlook meetings. Or a messaging app that focuses on standard, proper English, eschewing the emoji and emoticon craze endemic in today’s popular communication methods. You know somebody at some point must have built this app, but it is impossible to find.
App Store discovery has been a massive problem for developers, users and Apple for the last several years. App Store search is inadequate for most people’s needs and the top lists that Apple relies upon have created a top-heavy capitalistic market that breeds poor quality apps.
Apple is not ignorant of this problem. In 2012 it spent a reported $50 million to improve the App Store and acquired the app search engine Chomp to enhance discoverability. The improvements proved minimal, and Apple eventually shuttered Chomp and rolled its intellectual property into iOS 6. Judging by the current discourse among the iOS developer community, Apple still has a lot of work to do to help app makers sell their wares.
Apple has more improvements for the App Store coming with iOS 8 that it hopes will address the issue.
App Store Improvements In iOS 8
In the keynote for its World Wide Developer Conference in June, Apple CEO Tim Cook called the coming improvements to the App Store the biggest the product has seen since it was launched in 2008.
- App bundles so users can download a group of apps from the same publisher.
- App preview videos that will augment the standard screenshots in the App Store.
- A new “explore” tab that will help users browse categories and subcategories of apps.
- Trending search to identify popular app search terms.
- An expanded “Editor’s Choice” section with a new logo that may or may not satisfy the call for human curators of the App Store.
- TestFlight integration for developers to find beta users.
- Vertical, endless scrolling in search.
Let’s take a look at some of the highlights.
Bundles Perfect For Unbundled Apps
Several of these items are essential to improving App Store discovery. The forthcoming app bundles will solve an issue that big app publishers have in rolling out groups of apps intended to be bought and downloaded as a package. For instance, Microsoft rolled out its Office suite of apps earlier this year, only for users to have to download each one individually from the App Store. Companies like Facebook or Google could package their apps together (download Facebook and Messenger at the same time, for instance), while smaller publishers could group apps together to gain more traction and marketing opportunities from the get-go, as opposed to struggling for each and every download.
App bundles are the right App Store feature at the right time, especially considering the 2014 trend of “app unbundling,” in which app creators build multiple apps with different functions as opposed to cramming them all into one central app.
Finally, Video Previews For Apps
One of the greatest advantages Google has had over Apple in their respective app repositories is that Google could roll YouTube videos straight into Google Play. Anybody who has ever written a “top apps of the month” article will tell you that Android’s YouTube preview videos are far more effective at telling the user what an app is actually about than descriptions and screenshots alone. The fact that YouTube videos are embeddable also increases the viral quotient of spreading app awareness across the Web.
Video is coming to the App Store as well, but not in the form of YouTube. This will be a welcome cosmetic change for developers and app marketers. Apple has not said if App Store video previews will be embeddable.
Editor’s Choice = Human Curation?
In recent weeks, developers have called for the end of the algorithmic top lists in the App Store in favor of human curators. To be fair, Apple has long had an “editor’s choice” section in the App Store, but it has been buried at the bottom of the feed and really was not all that helpful.
Apple did not get into specifics of the new Editor’s Choice section coming in iOS 8, but it did say that it will get a revamp and a new logo. Developers will also get a small Editor’s Choice icon next to their apps in App Store search results if their apps have earned Apple editors’ seal of approval.
The best part about working in the uTest Community is seeing the number of new testers who join our ranks everyday. We see testers new to the testing world, as well as veteran testers who have years of experience. No matter your experience level, we have resources to help guide you toward your first paid project with uTest.
The first stop in our journey after registration is a course in uTest University called “Getting Started with uTest Paid Projects.” This course contains answers to many of the questions that new uTesters typically have, like how to update your Expanded profile and how to get invited to the Sandbox program.
Keep in mind that, in order for uTest to match you with incoming projects, you will need to keep your testing profile complete and up-to-date. For example, if a project requires testers in Canada with BlackBerry devices and your profile matches these requirements, we will then be able to notify you of an upcoming test cycle. Be sure to update your profile as you pick up new gadgets (mobile devices, laptops, etc.) and update your software. Many customers are especially interested in testers with the latest devices for testing purposes. Removing outdated items you no longer own is also very important.
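To make that matching logic concrete, here is a minimal sketch in Python. The field names (`country`, `devices`) and the matching rules are entirely hypothetical illustrations of the idea described above, not uTest’s actual (internal) matching system:

```python
# Hypothetical sketch of profile-to-project matching.
# Field names and rules are illustrative, not uTest's real schema.

def matches(profile: dict, requirements: dict) -> bool:
    """Return True if the tester profile satisfies every project requirement."""
    # Location must match exactly when the project restricts it.
    if "country" in requirements and profile.get("country") != requirements["country"]:
        return False
    # The tester must own at least one of the required devices,
    # which is why keeping the device list current matters.
    required_devices = set(requirements.get("devices", []))
    if required_devices and required_devices.isdisjoint(profile.get("devices", [])):
        return False
    return True

profile = {"country": "Canada", "devices": ["BlackBerry Z10", "iPhone 5"]}
requirements = {"country": "Canada", "devices": ["BlackBerry Z10"]}
print(matches(profile, requirements))  # → True
```

An out-of-date profile fails the same check: if the BlackBerry had been removed (or never added), `matches` would return `False` and no invitation could go out.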
The next stop takes a step back from uTest and examines the greater software testing realm. In short, without a solid foundation in testing fundamentals, it will be tough to develop as a tester at uTest. “Building Your Software Testing Skills” is a great primer for new testers and vets alike, and contains many testing resources recommended by a 15-year software testing veteran, intended to help you grow as a software tester.
Coming back into the uTest world, the next stop is the “5 Steps to Succeeding in Your First uTest Project” course. Once you’ve been invited to a uTest project, there are helpful steps outlined in the course that will assist you, such as how to accept your first invitation, review the scope and chat, submit your bug reports, submit your test case, and check in on your bug reports in the event a Project Manager or Test Team Lead has a question.
Another course that contains invaluable advice for testers is uTest Test Team Lead Aaron Weintrob’s “When is a Bug Not a Bug?” One of the hardest things for new testers to judge is where a valid bug stops and general feedback begins, and Aaron’s course points out key tips to remember when deciding whether or not to file a bug in your first test cycles.
As you plan ahead for your testing future, also take a look at the Skill Tree for Paid Projects. This outlines the various ways that you can further your career with uTest. For example, some of our testers have day jobs as QA testers and supplement their income with paid projects at uTest, while others become some of our top testers and earn thousands of dollars a month by testing with us full-time.
Last but not least, be sure to browse through the careers-in-QA posts on the uTest Blog, and stop by the uTest Forums to hear from uTest veterans who have a lot to share about their experiences and tips for success as a uTester (it’s also just a great place to network off the clock with fellow testers!).
Becoming a successful uTester requires plenty of hard work: learning core testing concepts, learning from peers and self-paced study. We hope that these resources will ease you into your first cycles and the rewarding journey ahead as a uTester.
CustomerCentrix, creator of the cloud testing tool LoadStorm, today announced that it has released a LITE version of its cloud load testing tool.
This version is designed to be a cost-effective, easy-to-use complement to its enterprise level tool, LoadStorm PRO. According to the company, LoadStorm allows users to set up tests in the web application and run them from the cloud with no hardware to purchase and no software to install. Users will be able to try LITE for free from their site.
Don’t forget to leave a review of LoadStorm if you’ve used the cloud load testing tool in the past, and be sure to browse the complete library of testing Tool Reviews to compare load testing tools and see which is best for your testing team’s needs.
uTesters may be pretty familiar with uTester of the Year already. Continuing the theme of recognizing and championing our best testers, uTest is proud to launch a brand-new, community-wide recognition initiative — Tester of the Quarter!
This quarterly program exists solely to recognize and award the rock stars of our global community, and differs from uTester of the Year in that it puts the power of nominations directly in the hands of our testing community:
- Testers will be able to easily recognize their peers’ dedication and great work in various facets of their participation at uTest: test cycle performance, course writing, blogging, etc.
- And not just peers – testers will have the chance to recognize Test Team Leads and Project Managers, as well as mentors who have helped them along their testing journey on paid projects at uTest
Additionally, once the nominating is complete, all winners will have their names proudly displayed on a “Hall of Fame” recognition board. The Hall of Fame will serve as the recognition hub for not only Tester of the Quarter, but all uTest award programs, including past uTesters of the Year and uTest Lifetime Achievement Awards (coming soon!), and will be a mainstay on the uTest site.
If you are a uTester, submit your nominations now through September 15, or just drop by to see what great things testers are saying about their peers. Happy Nominating!
FULL LIST OF TESTER OF THE QUARTER CATEGORIES:
- Outstanding Forums Contributors
- Outstanding uTest University Instructors
- Outstanding Bloggers
- Outstanding Tool Reviewers
uTest Paid Project-specific categories:
- Outstanding Project Managers of the Quarter
- Outstanding Test Team Leads, Tester’s Choice
- Outstanding Testers, Test Team Leads’ Choice
- Outstanding Up-and-Comers, Test Team Leads’ Choice
One thing I respect about uTest is their continual pursuit of ways to increase customer value. It’s an essential business objective to ensure the health and growth of our company. ‘Value’ should be the middle name of any good tester. “Lucas Value Dargis.” Sounds pretty cool, huh?
I had just finished my 26th uTest test cycle in mid-2012. I had put an extra amount of focus and effort into this cycle because there was something special at stake. On some occasions, uTest offers an MVT award which is given to the Most Valuable Tester of the cycle. The selection process takes several things into account including the quality of the bugs found, clear documentation, participation, and of course, customer value.
The MVT award not only offers a nice monetary prize, but it’s also a way to establish yourself as a top tester within the uTest Community. I decided I was going to win that MVT award.
As usual, I started by defining my test strategy. I took the selection criteria and the project scope and instructions into account and came out with these five strategic objectives:
- Focus on the customer-defined ‘focus’ area
- Report only high-value bugs
- Report more bugs than anyone else
- Write detailed, easy-to-understand bug reports
- Be active on the project’s chat
When the test cycle was over, I reflected on how well I’d done. I reported nine bugs — more than anyone else in the cycle. Of those, eight were bugs in the customer’s ‘focus’ area. The same eight were also rated as very or extremely valuable. All the bugs were documented beautifully and I was an active participant in the cycle’s chat.
There was no competition. No other tester was even close. I had that MVT award in the bag. I was thinking of all the baseball cards I could buy with the extra Cheddar I’d won. I even called my mom to tell her how awesome her son was! You can only imagine my surprise when the announcement was made that someone else had won the MVT award. Clearly there was some mistake…right? That’s not how you spell my name!
I emailed the project manager asking for an explanation for this miscarriage of justice. The tester who won had fewer bugs, none of them were from the ‘focus’ area and they weren’t documented particularly well. How could that possibly be worth the MVT award? The PM tactfully explained that while I had done well in the cycle, the tester who won had found the two most valuable bugs and the customer deemed them worthy of the MVT award.
I was reminded that my adopted definition of quality is “value to someone who matters,” and suddenly it all fell into place. It didn’t matter how valuable I thought my bugs and reports were. It didn’t matter how much thought and effort I put into my strategy and work. At the end of the day, a tester’s goal, his or her mission, should be to provide “someone who matters” with the most value possible. I’m not that “someone who matters.” That “someone” is our customer.
It was a hard pill to swallow, but that lesson had a strong impact on me and it will be something I’ll carry with me moving forward. Congratulations to the MVT. I hope you enjoy all those baseball cards.
A Gold-rated tester and Enterprise Test Team Lead (TTL) at uTest, Lucas Dargis has been an invaluable fixture in the uTest Community for 2 1/2 years, mentoring hundreds of testers and championing them to become better testers. As a software consultant, Lucas has also led the testing efforts of mission-critical and flagship projects for several global companies. You can visit him at his personal blog and website.
I used TechSmith’s Snagit before I started working here. I was creating simple screen captures with annotations for my test documentation and reporting defects. The more I used Snagit, the more it became a part of my daily workflow. I discovered that many testers are doing just what I did — using Snagit for those simple screen capture tasks. But it’s far more powerful than that. And the robust features in Snagit are often overlooked because testers find lots of value in the capture experience alone.
To better understand the features that testers love most about Snagit, I turned to our testers here at TechSmith. Who better to give advice on Snagit features than the testers that help make it! Here are the top features of Snagit our testers use to make their work shine.
Video in Snagit? Yep, it’s in there, but you might be wondering why you would want to use it. It can be difficult to describe the complex behaviors of software solely through text. Capturing video of a defect or anomaly in action is a far more powerful demonstration. With video, you can describe the behavior prior to and following an anomaly. Essentially, you’re narrating the defect. And video is extremely helpful when working with remote testers or developers.
To capture a video, simply activate a capture and select the video button:
Snagit will record full screen or a partial selection of your screen. When you’ve finished capturing, you can trim the video in the Snagit editor and share it using your favorite output. Speaking of sharing…
You can save captures as images in a variety of formats, but did you know about the many outputs for sharing your content from the Snagit Editor? Get your images and videos where they need to go using Output Accessories. From the Share menu, you can output captures to many places including Email, FTP, the Clipboard, MS Office programs, our very own Camtasia and Screencast.com, YouTube, and Google Drive. The complete list of available outputs can be found from the Snagit Accessories Manager on the Share menu:
Additional places to share your captures include Twitter, Facebook, Evernote, Skype, and Flickr.
Profiles allow users to set up a workflow for their captures, making them more efficient by pre-configuring a capture type and sharing location. Testers often use profiles for repetitive testing processes, such as creating test documentation, recording test execution artifacts, and capturing defects. An example of using a profile would be sending an image capture to the Snagit editor for a quick annotation and then directly to Microsoft Word by finishing the profile:
Or you can even bypass the editor altogether if you want your images to go to your selected output without annotations. Learn more about profiles.
Mobile Capture with TechSmith Fuse
Are you testing a mobile application and need to get images of those bugs over to a developer ASAP? Rather than messing with email, just Fuse it! TechSmith Fuse is a free mobile application that lets you capture images or video on your mobile device (iOS, Android, or Windows), upload them directly to the Snagit Editor through your wireless network, and then enhance your content using Snagit’s many editing tools.
Sharing Your Content
Screencast.com is both a repository for your image and video content as well as a place to conveniently share it with others. Your image and video content can be sent from Snagit and shared privately or publicly. Best of all, you can start storing and sharing your content with a free account that comes with 2GB of storage space.
There you have it — some key features you need to know to get the most out of Snagit. Happy capturing!
Apple has always prided itself on a sleek, sexy, streamlined experience. Moreover, it is largely the same experience for a user on an iPhone 4 in the United States as for one on an iPhone 4 in India.
Now take a look at Android. He’s kind of the sloppy guy at the wedding who decided to wear shorts and sandals. But this half of the Big Two has always embraced that different and defiant lifestyle, with a customized experience on each device that’s as unique as a snowflake.
However, Android has lately taken this very un-Apple business model to an extreme. According to PC Magazine, there are now approximately 18,796 unique Android devices in the wild, a number that has jumped a whopping 60% in just one year from just over 11,000.
So with this proliferation of Android devices floating around, has the experience for Android testers and developers become that much more of a horror show full of challenges? We’d like to hear from you in the Comments below.
When I’m not testing, one of my favorite hobbies is alcohol. Wait…that didn’t come out right. What I meant was my hobby is learning about wine, beer and spirits. Yeah, that sounds better.
While I do love a cold beer in the summer, a single-malt scotch when I’m feeling sophisticated, or an 1855 classified Bordeaux on special occasions, I think I spend more time studying booze than I do drinking it. I really enjoy learning about the various Appellation d’origine contrôlée (AOCs) in France, and the differences between a Pinot Noir from California and one from Burgundy. I sound pretty smart, huh?
As any cultured, refined wine connoisseur such as myself knows, the true masters of the bottle are called sommeliers. These fine folks are highly trained adult beverage experts who often work in fancy, fine-dining restaurants, setting the wine list, caring for the cellar and working with customers to help them select the perfect wine.
So what could a tester possibly learn from someone obsessed with booze? Good question! I have three answers.
Be passionate
I have yet to find people who are more passionate about what they do than Master Sommeliers. Need proof? Watch the movie Somm (available on Netflix). The tremendous amount of dedication and effort these people pour (wink, wink) into their work is simply astounding.
A sommelier must be constantly learning and exploring. Each year, a new vintage of every wine is created. That means thousands of new wines are added to the multitude that already exist…and a sommelier is expected to be familiar with all of them. And you thought the IT world was constantly changing!
There will always be a new product to test, a new approach to learn, a new idea to debate. Testers who are passionate about testing are excited about these new developments as they are opportunities to grow and improve.
Be a servant
From the Demeanor of the Professional Sommelier:
It is important for sommeliers to put themselves in the role of a server; no job or task on the floor is beneath the role of a sommelier; he or she does whatever needs to be done in the moment to take care of the guest.
Sommeliers are at the top of the service food chain. They are highly trained, knowledgeable, and professional people, yet they are also the most humble servants. They realize that their qualities alone are worthless. They must be used in the service of a customer for their value to be realized.
Testers too need to remember that we don’t get paid because we think we’re great testers. We get paid because the person who is paying us values our work. Put your client’s perception of value and quality ahead of your own.
Be an expert
A sommelier would never walk up to your table and say, “I recommend this bottle I found in the back.” They are the ultimate experts in wine selection and food pairing. They ask questions: What do you like? What have you had before? What is your budget? They learn about the customer and use that information to find exactly the right bottle.
Likewise, a tester should be knowledgeable in the testing field. A good tester doesn’t just randomly go banging on things — they too take a more thoughtful approach. What areas are the most error prone? What parts of the product are the most important? What information would be most useful at this stage in the product’s life cycle? They learn about the product and the circumstances under which they are testing to ensure they provide the most value possible.
Take pride in your work. Understand that testing is not a low-skilled job; it is a highly cognitive profession with demands on your professionalism, communication skills, and attention to detail. It takes a lot of effort, study and experience to become an expert (or so I’m told), but that should be the goal of every tester.
A Gold-rated tester and Enterprise Test Team Lead (TTL) at uTest, Lucas Dargis has been an invaluable fixture in the uTest Community for 2 1/2 years, mentoring hundreds of testers and championing them to become better testers. As a software consultant, Lucas has also led the testing efforts of mission-critical and flagship projects for several global companies. You can visit him at his personal blog and website.
Testers within our community often want to know which devices they should be testing on. Likewise, developers want to know where their beautiful creations should be given the most love.
Thankfully we have a magical data team that can take any request we throw their way, and give us such statistics on the hottest devices requested by our customers.
We sent such a request over to our trusty data team, and magically (for me, anyway, as an English/Communications major), they came back with this list of the 10 most tested mobile devices at uTest. The criterion for this data was the devices (both phones and tablets) on which the most bugs were filed in the past 30 days. Here are the top 10 in order of popularity:
- iPhone 5 (on iOS 6.1 all the way up to 8.0 beta, but those on 7.0+ were tested far more) (phone)
- iPhone 4s (on iOS 6.1.2 all the way up to 8.0 beta, but again, only a handful of testers had versions lower than 7.0) (phone)
- iPad (late 2012 model & iPad 2) (tablet)
- iPhone 5S (on iOS 7.0 up to 8.0 beta) (phone)
- Galaxy S4 (phone)
- iPhone 4 (on iOS 6.0 up to 7.1.2, but vast majority of these tested were on 7.0+ versions) (phone)
- Galaxy S3 (phone)
- iPad Mini (Mostly wifi-only versions) (tablet)
- iPad Air (tablet)
- Nexus 7 (tablet)
Notables that just missed our Top 10 included the LG Nexus 5, iPhone 5c and the HTC One. So, testers and devs, these are the devices your counterparts are using most, based on our customer demand.
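For the curious, the ranking criterion described above (most bugs filed per device over a 30-day window) boils down to a simple filter, group and count. Here is a minimal sketch in Python with made-up sample records standing in for uTest’s internal data:

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical bug records: (device, date filed). Real uTest data is internal.
bugs = [
    ("iPhone 5", date(2014, 8, 20)),
    ("iPhone 5", date(2014, 8, 25)),
    ("Galaxy S4", date(2014, 8, 22)),
    ("Nexus 7", date(2014, 6, 1)),   # outside the 30-day window, excluded
]

def top_devices(bugs, today=date(2014, 9, 1), window_days=30, n=10):
    """Count bugs per device filed within the last `window_days` days."""
    cutoff = today - timedelta(days=window_days)
    counts = Counter(device for device, filed in bugs if filed >= cutoff)
    return counts.most_common(n)

print(top_devices(bugs))  # → [('iPhone 5', 2), ('Galaxy S4', 1)]
```

Refreshing the ranking each quarter, as described above, is just a matter of re-running the same count over a new window.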
This will of course fluctuate as soon as the next biggest and baddest device comes out (I’m lookin’ at you, iPhone 6), which is why we plan to update you every quarter on the hottest devices. But on the flip side, as the iPhone 4, released back in 2010, can attest, it’s always a safe bet that as long as a device still has widespread adoption “in the wild,” there’s a need for you to test your apps on it.
Testers aren’t mind readers. We’re not psychic or telepathic, either. Please write your test cases and review our bug reports accordingly.
Your QA Testers
Many testers dislike seeing the dreaded “Working as Designed” rejection reason on our bug reports because, in many cases, we had no idea it was designed to work that way. To us, it just seemed like something wasn’t quite right, and it’s our job to tell you these things.
In fact, if a tester who has the perspective of a new user (someone who has no prior experience with what they’re testing and sees the environment in a way a brand-new user would) writes up a bug report on a feature that seems to be broken or working poorly, there’s a good chance new users will also think the same thing. Perhaps you should review the design and user experience to see if there’s a flaw there.
After all, quality assurance is not just about telling developers when a page errors or when a link is broken. It’s also about telling you when your software does not work in such a way that a user will find it to be a useful, quality, worthwhile experience.
It might seem like a pain in the butt, but we’re really just trying to do our jobs and help make your project a complete success. Really! We’re on the same team, I promise.
When you ask us to test something for you, try to do the following things. They will help us help you and, in turn, your projects:
- Provide us with at least a basic understanding of the software, what goals it has to meet, and how users are meant to see and use it. We are your first and last line of defense against poor user experiences.
- Give us the design and requirement specifications if you are able to. We can help ensure that they were met and let you know if we see discrepancies.
- Converse with us. Frequent communication and discussion helps us understand what you need from us and how we can help you better.
- Be detailed, accurate, objective, and constructive when you give us feedback. You don’t like when our bug reports only say, “It’s broken,” and similarly, we don’t like when your rejection notes say nothing but, “Working as intended.”
- Share your thoughts, expectations, needs, and wants with us. We want to be able to provide you with these things, but we can’t read your mind. You have to externalize them.
- Be proactive. Let us know what we can do for you sooner rather than later so that we can get it done for you when you need it.
Good testers are magical, yes, but we are far from psychic. Help us and we can help you so much more than you dreamed!
Tammy Shipps has experience working with testers as a developer and marketing engineer at Applause. She also is an active uTest Community member and tester herself.
We’re proud and excited to crown the champions of the 2014 Summer Bug Battle, uTest’s first in almost four years.
If you’ll remember, in this recent edition of the uTest Bug Battle, testers were asked to submit the most impactful Desktop, Web and Mobile bugs from testing tools contained on our Tool Reviews site. After two weeks of heated competition, our Test Team Leads chose the top 10 most impactful finalists from the bunch, and the Community spoke by voting on their favorites from these.
Here’s who YOU, the Community, crowned as the 2014 Summer Bug Battle champions, winners of $1000 in prizes, along with their corresponding winning entries:
1st Prize ($500): Davide Savo, Italy, for his winning entry of a “ZD SOFT Screen Recorder tool bug that would allow a tester to have 2221 days remaining in a Free Trial of its software”
2nd Prize ($300): Iwona Pekala, Poland (Iwona is also a Forums Moderator this quarter!), for her winning entry of an “iTools bug that causes a crash when a device is disconnected after recording a video”
3rd Prize ($200): Priyanka Halder, United States, for her winning entry of a bug where “JIRA doesn’t display the proper error message when the URL of an issue is too long”
Honorable Mentions (Winners of $50 each):
- Ioan Nita, Romania
- Ronny Sugianto, Indonesia
- Faye Gu, China
- Michael Sokolov, United States
- Marek (Mark) Langhans, Czech Republic
- Yulia Atlasova, Russia
- Pankaj Sharma, India
10 Winners of a uTest t-shirt (randomly selected from those who participated):
- Marius van Rees, The Netherlands
- Terence Ching, Hong Kong
- Alexander Silkin, United States
- Massimo Sartori, Italy
- Mallikarjun Hassan, India
- Kirill Bilchenko, Ukraine
- Vivek Gar, India
- Pradyumna Dutta, United Kingdom
- Diego Delgado, Colombia
- Vanessa Yaginuma, Brazil
A huge congratulations to all of the champions in uTest’s 2014 Summer Bug Battle! Please be sure to send your own congratulations to all of the winners (and see all of the winning entries in their entirety) over at the uTest Forums or in the Comments below.
Beyond our champions seen here, we also thank everyone who made our first battle in a long time such a success, including all entrants, and especially our TTLs for all their hard work in triaging the entries: Peggy Fobbe, Lucas Dargis, Gagan Talwar, Atul Angra and Sonal Modi! The competition wouldn’t have been possible without their help, and we’re grateful for it!
If you missed out on this one, don’t fear. We’ll see you next time.
Note: The following is a guest submission to the uTest Blog from Sanjay Zalavadia.
Agile QA teams should use specialized software testing metrics to make sure they’re on the right track.
Software testing metrics are a vital component of the quality assurance process, providing team leaders with the data needed to make informed decisions. Agile teams require their own set of specialized metrics to better track progress and ensure that they are getting the most value out of the testing methodology. One of the biggest issues organizations run into when adopting agile is botching the implementation, typically by misusing a tactic or strategy. With the right testing metrics, QA teams making their first foray into agile can gain the oversight needed to deploy the practice effectively.
Writing for Agile Atlas, veteran software engineer and lean development expert David Koontz offered several metrics that could benefit agile teams. Many of these measurements centered around sprints, giving QA leaders insight into the effectiveness of these testing processes.
For instance, test teams can use software testing metrics to track both the projected capacity and capability of upcoming sprints. Such insight will help businesses address one of the most common problems associated with agile methods: sprint mismanagement. Teams that are still getting acquainted with agile may wind up underestimating how long forthcoming sprints will last, resulting in longer-than-anticipated production schedules. By gathering data and laying out an accurate timeframe for test sprints, QA leaders can avoid running into costly release delays.
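As a hypothetical illustration (this code and its numbers are not from the article), projecting an upcoming sprint’s capacity from historical velocity can be as simple as a rolling average of story points completed in recent sprints:

```python
def projected_capacity(completed_points, window=3):
    """Project the next sprint's capacity as the rolling average
    of the most recent sprints' completed story points."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

# Story points completed in each past sprint (hypothetical data)
history = [21, 18, 24, 20]
print(round(projected_capacity(history), 1))  # → 20.7
```

Comparing this projection against what the team actually commits to each sprint is one simple way to spot the underestimation problem described above before it causes a release delay.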
Running tested features provides more insight to agile users
There are many more metrics geared toward agile users. Running tested features (RTF) may be chief among them, as it is uniquely suited to providing insight into the progress of agile projects. Scrum Alliance contributor Raghu Angara explained that RTF essentially measures the functionality and performance of critical software features, giving teams an accurate idea of how much progress they have made on a given application.
Within a waterfall framework, RTF wouldn’t provide much value since testers wouldn’t be able to gauge any change until late in the development process. Because so much time at the beginning of a waterfall production is dedicated to planning, there would be nothing for RTF to actually measure. Meanwhile, agile teams will find a worthy testing metric in RTF as they constantly evaluate the functionality of their in-development programs.
“In terms of productivity, measuring RTF is a quick way to see the state of the team,” Angara wrote. “A healthy agile team should be able to consistently deliver a set of stories over time, with any unexpected challenges or risks averaging out against features that turn out to be easier than expected.”
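A minimal sketch of the RTF idea (hypothetical code, not from Angara or the article): count the features that are both present in the running build and passing all of their acceptance tests, and track that number over time.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    shipped: bool        # feature is present in the running build
    tests_passing: bool  # all of its acceptance tests are green

def running_tested_features(features):
    """RTF: count features that are both running and fully tested."""
    return sum(1 for f in features if f.shipped and f.tests_passing)

features = [
    Feature("login", shipped=True, tests_passing=True),
    Feature("search", shipped=True, tests_passing=False),  # running, tests failing
    Feature("export", shipped=False, tests_passing=True),  # tested, not in build yet
]
print(running_tested_features(features))  # → 1
```

Plotted sprint over sprint, a healthy team’s RTF should climb steadily; a flat or falling line signals exactly the kind of trouble the quote above describes.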
With all of these metrics to collect and analyze, QA teams will need a reliable platform to store and track relevant data. A comprehensive test management platform will provide an easy way for testers to upload and share information with one another as well as their superiors. This way, everyone will have access to crucial data regarding the testing process at various levels of detail.
With these insightful software testing metrics in hand, testing managers can better determine how successful their deployment of agile practices has been and make course corrections as the need arises.
Sanjay Zalavadia is the VP of Client Services for Zephyr, which offers a real-time, full-featured test management system. Learn more about Zephyr right here.
Co-Chair Keith Klain of Doran Jones, a software testing consulting company, kicked things off in rousing fashion this past Tuesday, announcing that the 9th Annual edition of the CAST conference was sold out. From there? The festivities officially started with a lively keynote from James Bach (are they ever not lively with him?).
CAST 2014 was held at the Kimmel Center at New York University (NYU), just outside the confines of beautiful Washington Square Park in lower Manhattan. What separates CAST from other testing conferences are the lively discussions and the varying viewpoints. With other testing shows, you may hear a speaker give a one-sided discussion with an agenda to promote his or her product — at CAST, you’re getting varied viewpoints that can actively be challenged and refuted by the audience.
The star of CAST is the ‘Open Season’ at the end of each speaker’s session — testers can hold up various color-coded cards which signal either a ‘new’ thread for discussion points, or replies to some of the threads that were already started. Questions were fielded both from the in-person audience and online viewers throughout the course of the conference.
The ‘Hits’ From CAST
A couple of the biggest hits from CAST came from Day 1. James Bach’s keynote led off Tuesday’s slate with a lively discussion that preached the quintessential underlying theme of the show — that testers have to have that ‘thinking’ part — otherwise, they are nothing but bodies checking off boxes, running test cases. And that ain’t testing.
James was even careful to point out that he watches how he refers to testing situations with others, so as to avoid the wrong mindset setting in: “I’m reluctant to say ‘pass’ — I’d rather say, ‘I don’t see a problem, as far as I can tell.’” In short, anyone walking away from James’ kickoff keynote left with the wisdom that testing, much like an actor on stage, is a performance, and as such must be performed, not ‘executed.’
Another star of Day 1 was Trish Khoo of Google, who gave a talk on ‘Scaling up with embedded testing.’ Her message was that endless hours are spent in a loop, with testers and devs finding and fixing bugs, and the product owner and tester responsible for checking and verifying expectations. This endless loop eats up a lot of time, and according to Khoo, breaking it must start by empowering developers to become better testers. Doing so leads to more “testable” code, which decreases the time and energy spent in the loop. The message was clear and engaging, but definitely jarring, given the implication that testers’ roles could be diminished.
Day 2 had a number of great sessions, but a standout was Ben Simo’s talk — ‘There wasn’t a breach…there was a blog.’ Ben went over in detail, through photographs of the website and clips weaved in from media coverage, how he uncovered major security and functional flaws in the United States’ healthcare.gov website.
His findings were so significant that they became the subject of numerous media pundits’ discussions, and even ended up in front of Congress. While Ben highlighted how he arrived at these sometimes laughable errors, including error messages for conditions that didn’t exist (there was much snickering throughout this entire talk), he was careful to point out that he drew the ethical line early on: he never once attempted to gain access to private healthcare.gov information or accounts.
The ‘Bits’ From CAST
Some other beautiful (and even funny) messages that were favorites from the show:
- “As a discipline, I think we should also concentrate on helping to prevent defects, and not just find them”
- “Be an advocate of awesomeness”
- “Talk to people you don’t know. Make a relationship out of it. The value of peers.”
- “A test manager should be an orchestrator of skills within a team”
- “I was very careful with my testing. I didn’t want people in black suits knocking on my door.”
- “I love ‘unexpected’ errors. How many times do we ‘expect’ errors?”
- “Less testers is not a goal in introducing automation.”
- “Testing can’t be automated, but checking can.”
- “I may someday leave testing, but I will always be a tester. Software testing is just a way to get paid for being what I am.”
CASTING off from 2014 into 2015
It’s impossible to do the entire CAST 2014 experience justice, especially as a neutral party, so we recommend listening to the wisdom of the attendees themselves. Be sure to check out our on-the-spot interviews with CAST speakers including Hilary Weaver, Henrik Andersson and Michael Larsen. And speaking of Michael Larsen, read his personal blog of Day 1 of the show — it’ll probably be one of the BEST reads on the show you’ll see out there.
For those who weren’t able to make this year’s 9th Annual Conference of the Association for Software Testing (CAST 2014), or follow along with our live coverage, we’ve compiled all of our interviews from over the course of the week at the show.
uTest was on hand to live tweet the event, as well as sit down in between sessions for informal chats with major personalities from the New York City testing conference. These chats included everything from what separates CAST from other testing shows, to what testers can do to better improve their relationships with developers.
Be sure to stop by this weekend to the uTest Blog as well, as we conclude our CAST 2014 coverage with a full recap and thoughts on the show. In the meantime, here are some of the interviews from New York:
Interview with Hilary Weaver:
Interview with Jessica Nickel:
Interview with Michael Larsen:
Interview with Martin Folkoff:
Interview with Henrik Andersson:
Interview with Tim Graham:
The last of our interviews live from CAST 2014 was with one of the figures from the organization responsible for bringing everyone together in NYC this week — the Association for Software Testing (AST).
Michael Larsen is a software tester from the San Francisco Bay Area, and is on the Board of Directors for the AST. Michael also co-led a session earlier this week at the conference. We learn from him a little bit about what sets CAST apart from other testing shows, and why one of the biggest problems in testing today is the de-emphasis of critical thinking.
On the heels of a successful Bug Battle contest, we are launching another contest exclusively for uTesters called the Ideal Tool Contest. If you’ve ever wished for testing tools with more features or better integration options, now is your chance to design the perfect testing tool that the world has never seen before!
The Ideal Tool Contest is a competition for testers – either individuals or teams – to design their ideal browser-based, manual functional testing tool.
How do you participate? First, decide if you want to compete independently or together with a team to win the $1,000 grand prize. Yes, you read that correctly. One thousand dollars!
Next, gather your information and materials as noted in the Requirements section of the contest page. Then, submit your entry.
Lastly, spread the word about the contest on Twitter, Facebook, or your favorite social media site.
The contest is running now and ends at 12:00 p.m. EDT, Tuesday, August 26th. You can find the list of key dates on the contest page. Get your entries in as soon as possible to qualify for our random drawings for uTest t-shirts.
Jessica Nickel is a Test Lead with Doran Jones, a software testing consultancy based in NYC, the host city of CAST 2014. uTest had an off-the-cuff discussion with Jessica on what sets CAST apart from other conferences and what the biggest threat to testers is today.
uTest will be interviewing attendees and speakers all week from CAST in NYC, and live tweeting @uTest using the #CAST2014 official hashtag. Check out the interview with Jessica below.
Hilary Weaver is a QA Engineer who bills herself as a “prolific swearer.” She kindly agreed to dial it down for this uTest interview just this once.
We interviewed Hilary in between CAST 2014 sessions this morning to discuss some key takeaways from her Tuesday session on the rift between testers and developers, and what testers can do to improve the relationship. We even take time to discuss her love for Star Wars — the title of her session was a quote from the film (“He doesn’t like you! I don’t like you either!”).
uTest will be interviewing attendees and speakers all week from CAST in NYC, and live tweeting @uTest using the #CAST2014 official hashtag. Check out the interview with Hilary below, and be sure to view all of the video interviews from CAST 2014.
Day 2, the final of CAST 2014, is just underway, and we took the opportunity to talk briefly with another CAST attendee in between sessions.
Tester Martin Folkoff hails from Washington, DC, and shares his thoughts on why CAST is exciting and why testers should bring a development mindset to the table. uTest will be interviewing attendees and speakers all week from CAST in NYC, and live tweeting @uTest using the #CAST2014 official hashtag.