We’re proud and excited to crown the champions of the 2014 Summer Bug Battle, uTest’s first in almost four years.
If you’ll remember, in this recent edition of the uTest Bug Battle, testers were asked to submit the most impactful Desktop, Web and Mobile bugs from testing tools contained on our Tool Reviews site. After two weeks of heated competition, our Test Team Leads chose the top 10 most impactful finalists from the bunch, and the Community spoke by voting on their favorites from these.
Here’s who YOU, the Community, crowned as the 2014 Summer Bug Battle champions, winners of $1000 in prizes, along with their corresponding winning entries:
1st Prize ($500): Davide Savo, Italy, for his winning entry of a “ZD SOFT Screen Recorder tool bug that would allow a tester to have 2221 days remaining in a Free Trial of its software”
2nd Prize ($300): Iwona Pekala, Poland (Iwona is also a Forums Moderator this quarter!), for her winning entry of an “iTools bug allowing for crashing when a device is disconnected after recording a video”
3rd Prize ($200): Priyanka Halder, United States, for her winning entry of a bug where “JIRA doesn’t display the proper error message when the URL of an issue is too long”
Honorable Mentions (Winners of $50 each):
- Ioan Nita, Romania
- Ronny Sugianto, Indonesia
- Faye Gu, China
- Michael Sokolov, United States
- Marek (Mark) Langhans, Czech Republic
- Yulia Atlasova, Russia
- Pankaj Sharma, India
10 Winners of a uTest t-shirt (randomly selected from those who participated):
- Marius van Rees, The Netherlands
- Terence Ching, Hong Kong
- Alexander Silkin, United States
- Massimo Sartori, Italy
- Mallikarjun Hassan, India
- Kirill Bilchenko, Ukraine
- Vivek Gar, India
- Pradyumna Dutta, United Kingdom
- Diego Delgado, Colombia
- Vanessa Yaginuma, Brazil
A huge congratulations to all of the champions in uTest’s 2014 Summer Bug Battle! Please be sure to send your own congratulations to all of the winners (and see all of the winning entries in their entirety) over at the uTest Forums or in the Comments below.
Beyond our champions seen here, we also thank everyone that made our first battle in a long time such a success, including all entrants, and especially our TTLs for all their hard work in triaging the entries: Peggy Fobbe, Lucas Dargis, Gagan Talwar, Atul Angra and Sonal Modi! The competition wouldn’t have been possible without all of this help, and we’re grateful for it!
If you missed out on this one, don’t fear. We’ll see you next time.
Note: The following is a guest submission to the uTest Blog from Sanjay Zalavadia.
Agile QA teams should use specialized software testing metrics to make sure they’re on the right track.
Software testing metrics are a vital component of the quality assurance process, providing team leaders with the data needed to make informed decisions. Agile teams require their own set of specialized metrics to better track progress and ensure that they are getting the most value out of the testing methodology. One of the largest issues that organizations run into when leveraging agile is botching the implementation, typically by misusing a tactic or strategy. With the right testing metrics, QA teams making their first foray into agile can gain the oversight needed to deploy the practice in an effective manner.
Writing for Agile Atlas, veteran software engineer and lean development expert David Koontz offered several metrics that could benefit agile teams. Many of these measurements centered around sprints, giving QA leaders insight into the effectiveness of these testing processes.
For instance, test teams can use software testing metrics to track both the projected capacity and capability of upcoming sprints. Such insight will help businesses address one of the most common problems associated with agile methods: sprint mismanagement. Teams that are still getting acquainted with agile may wind up underestimating how long forthcoming sprints will last, resulting in longer-than-anticipated production schedules. By gathering data and laying out an accurate timeframe for test sprints, QA leaders can avoid running into costly release delays.
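As a rough illustration of projecting sprint capacity, a team might average its velocity over the last few sprints to estimate what the next one can hold. The helper name and sample numbers below are illustrative assumptions, not a formula from the article:

```python
# Hypothetical sketch: project the next sprint's capacity from a rolling
# average of recently completed story points. Names/numbers are illustrative.

def projected_capacity(completed_points, window=3):
    """Average the story points completed in the last `window` sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

history = [21, 18, 24, 20]  # points completed in past sprints
print(round(projected_capacity(history), 1))  # (18 + 24 + 20) / 3 -> 20.7
```

A simple average like this won't catch trends or outliers, but even a crude projection gives QA leads a defensible timeframe to plan against.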
Running tested features provides more insight to agile users
There are far more metrics that are geared toward agile users. Running tested features (RTF) may be chief among them as this is uniquely qualified to provide insight into the progress of agile productions. Scrum Alliance contributor Raghu Angara explained that RTF essentially ascertains the functionality and performance of critical software features, giving teams an accurate idea of how much progress they have made with a given application.
Within a waterfall framework, RTF wouldn’t provide much value since testers wouldn’t be able to gauge any change until late in the development process. Because so much time at the beginning of a waterfall production is dedicated to planning, there would be nothing for RTF to actually measure. Meanwhile, agile teams will find a worthy testing metric in RTF as they constantly evaluate the functionality of their in-development programs.
“In terms of productivity, measuring RTF is a quick way to see the state of the team,” Angara wrote. “A healthy agile team should be able to consistently deliver a set of stories over time, with any unexpected challenges or risks averaging out against features that turn out to be easier than expected.”
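The core of the RTF idea can be sketched in a few lines: a feature counts only when it is both running in the integrated build and passing its tests. The data model and sample data below are hypothetical, for illustration only, not Angara's definition verbatim:

```python
# Illustrative sketch of a Running Tested Features (RTF) count.
# The Feature fields and sample snapshot are hypothetical.

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    deployed: bool    # running in the integrated build
    tests_pass: bool  # acceptance tests currently green

def rtf(features):
    """A feature counts toward RTF only if it is running AND tested."""
    return sum(1 for f in features if f.deployed and f.tests_pass)

snapshot = [
    Feature("login", True, True),      # counts toward RTF
    Feature("search", True, False),    # deployed but failing: excluded
    Feature("checkout", False, True),  # tested but not integrated: excluded
]
print(rtf(snapshot))  # 1
```

Plotting this count at the end of every iteration is what gives a healthy agile team the steadily rising line Angara describes.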
With all of these metrics to collect and analyze, QA teams will need a reliable platform to store and track relevant data. A comprehensive test management platform will provide an easy way for testers to upload and share information with one another as well as their superiors. This way, everyone will have access to crucial data regarding the testing process at various levels of detail.
With these insightful software testing metrics in hand, testing managers can better determine how successful their deployment of agile practices has been and make course corrections as the need arises.
Sanjay Zalavadia is the VP of Client Services for Zephyr, which offers a real-time, full-featured test management system. Learn more about Zephyr right here.
Co-Chair Keith Klain of Doran Jones, a software testing consulting company, kicked things off in rousing fashion this past Tuesday, announcing that the 9th Annual edition of the CAST conference was sold out. From there? The festivities officially started with a lively keynote from James Bach (are they ever not lively with him?).
CAST 2014 was held at the Kimmel Center at New York University (NYU), just outside the confines of beautiful Washington Square Park in lower Manhattan. What separates CAST from other testing conferences are the lively discussions and the varying viewpoints. With other testing shows, you may hear a speaker give a one-sided discussion with an agenda to promote his or her product — at CAST, you’re getting varied viewpoints that can actively be challenged and refuted by the audience.
The star of CAST is the ‘Open Season’ at the end of each speaker’s session — testers can hold up various color-coded cards which signal either a ‘new’ thread for discussion points, or replies to some of the threads that were already started. Questions were fielded both from the in-person audience and online viewers throughout the course of the conference.
The ‘Hits’ From CAST
A couple of the biggest hits from CAST came from Day 1. James Bach’s keynote led off Tuesday’s slate with a lively discussion that preached the quintessential underlying theme of the show — that testers have to have that ‘thinking’ part — otherwise, they are nothing but bodies checking off boxes, running test cases. And that ain’t testing.
James was even careful to point out that he watches how he refers to testing situations with others, so as to avoid the wrong mindset setting in: “I’m reluctant to say ‘pass’ — I’d rather say, ‘I don’t see a problem, as far as I can tell.’” In short, anyone walking away from James’ kickoff keynote left with the wisdom that testing, much like an actor on stage, is a performance, and as such, must be performed and not ‘executed.’
Another star of Day 1 was Trish Khoo of Google, who gave a talk on ‘Scaling up with embedded testing.’ Her message: endless hours are spent in a loop, with testers and devs finding and fixing bugs, and the product owner and tester responsible for checking and verifying expectations. This loop eats up a lot of time, and according to Khoo, breaking it must start by empowering developers to become better testers. Doing so produces more “testable” code and decreases the time and energy spent in the loop. The message was clear and engaging, but definitely jarring, given the implication that testers’ roles could be diminished.
Day 2 had a number of great sessions, but a standout was Ben Simo’s talk — ‘There wasn’t a breach…there was a blog.’ Ben went over in detail, through photographs of the website and clips weaved in from media coverage, how he uncovered major security and functional flaws in the United States’ healthcare.gov website.
His findings were so grand that they became the subject of numerous media pundits’ discussions, and even ended up in front of Congress. While Ben highlighted how he arrived at these sometimes laughable errors, including error messages for conditions that didn’t exist (there was much snickering throughout the talk), he was careful to point out that he drew the ethical line early on: he never once attempted to gain access to private healthcare.gov information or accounts.
The ‘Bits’ From CAST
Some other beautiful (and even funny) messages that were favorites from the show:
- “As a discipline, I think we should also concentrate on helping to prevent defects, and not just find them”
- “Be an advocate of awesomeness”
- “Talk to people you don’t know. Make a relationship out of it. The value of peers.”
- “A test manager should be an orchestrator of skills within a team”
- “I was very careful with my testing. I didn’t want people in black suits knocking on my door.”
- “I love ‘unexpected’ errors. How many times do we ‘expect’ errors?”
- “Less testers is not a goal in introducing automation.”
- “Testing can’t be automated, but checking can.”
- “I may someday leave testing, but I will always be a tester. Software testing is just a way to get paid for being what I am.”
CASTING off from 2014 into 2015
It’s impossible to do the entire CAST 2014 experience justice, especially as a neutral party, so we recommend listening to the wisdom of the attendees themselves. Be sure to check out our on-the-spot interviews with CAST speakers including Hilary Weaver, Henrik Andersson and Michael Larsen. And speaking of Michael Larsen, read his personal blog of Day 1 of the show — it’ll probably be one of the BEST reads on the show you’ll see out there.
For those that weren’t able to make this year’s 9th Annual Conference of the Association for Software Testing (CAST 2014), or follow along with our live coverage, we’ve compiled all of our interviews from over the course of the week at the show.
uTest was on hand to live tweet the event, as well as sit down in between sessions for informal chats with major personalities from the New York City testing conference. These chats included everything from what separates CAST from other testing shows, to what testers can do to better improve their relationships with developers.
Be sure to stop by this weekend to the uTest Blog as well, as we conclude our CAST 2014 coverage with a full recap and thoughts on the show. In the meantime, here are some of the interviews from New York:
Interview with Hilary Weaver:
Interview with Jessica Nickel:
Interview with Michael Larsen:
Interview with Martin Folkoff:
Interview with Henrik Andersson:
Interview with Tim Graham:
The last of our interviews live from CAST 2014 was with one of the figures from the organization responsible for bringing everyone together in NYC this week — the Association for Software Testing (AST).
Michael Larsen is a software tester from the San Francisco Bay Area, and is on the Board of Directors for the AST. Michael also co-led a session earlier this week at the conference. We learn from him a little bit about what sets CAST apart from other testing shows, and why one of the biggest problems in testing today is the de-emphasis of critical thinking.
On the heels of a successful Bug Battle contest, we are launching another contest exclusively for uTesters called the Ideal Tool Contest. If you’ve ever wished for testing tools with more features or better integration options, now is your chance to design the perfect testing tool that the world has never seen before!
The Ideal Tool Contest is a competition for testers – either individuals or teams – to design their ideal browser-based, manual functional testing tool.
How do you participate? First, decide if you want to compete independently or together with a team to win the $1,000 grand prize. Yes, you read that correctly. One thousand dollars!
Next, gather your information and materials as noted in the Requirements section of the contest page. Then, submit your entry.
Lastly, spread the word about the contest on Twitter, Facebook, or your favorite social media site.
The contest is running now and ends at 12:00 p.m. EDT, Tuesday, August 26th. You can find the list of key dates on the contest page. Get your entries in as soon as possible to qualify for our random drawings for uTest t-shirts.
Jessica Nickel is a Test Lead with Doran Jones, a software testing consultancy based in NYC, the host city of CAST 2014. uTest had an off-the-cuff discussion with Jessica on what sets CAST apart from other conferences and what the biggest threat to testers is today.
uTest will be interviewing attendees and speakers all week from CAST in NYC, and live tweeting @uTest using the #CAST2014 official hashtag. Check out the interview with Jessica below.
Hilary Weaver is a QA Engineer that bills herself as a “prolific swearer.” She kindly agreed to dial it down for this uTest interview just this once.
We interviewed Hilary in between CAST 2014 sessions this morning to discuss some key takeaways from her Tuesday session on the rift between testers and developers, and what testers can do to improve the relationship. We even take time to discuss her love for Star Wars — the title of her session was a quote from the film (“He doesn’t like you! I don’t like you either!”).
uTest will be interviewing attendees and speakers all week from CAST in NYC, and live tweeting @uTest using the #CAST2014 official hashtag. Check out the interview with Hilary below, and be sure to view all of the video interviews from CAST 2014.
Day 2, the final day of CAST 2014, is underway, and we took the opportunity to talk briefly with another CAST attendee in between sessions.
Tester Martin Folkoff hails from Washington, DC, and shares his thoughts on why CAST is exciting and why testers should bring a development mindset to the table. uTest will be interviewing attendees and speakers all week from CAST in NYC, and live tweeting @uTest using the #CAST2014 official hashtag.
Henrik Andersson is a context-driven testing champion and co-founder and CEO of House of Test Consulting, and is presenting at Day 2 of CAST 2014’s festivities.
While at CAST on Day 1, we chatted with Henrik about what brought him to CAST and got his thoughts on which qualities are most lacking in testers. Don’t forget, uTest will be interviewing attendees and speakers all week from CAST in NYC, and live tweeting @uTest using the #CAST2014 official hashtag.
We sat down for a brief chat with Tim in between keynotes to discuss his development background, what brought him to CAST, and what sets it apart from other testing conferences. Stay tuned to the uTest Blog throughout the week for more video interviews with conference attendees and speakers.
When James Bach enters the room at a testing conference, there’s never a dull moment. This morning’s keynote led by Sir James was no exception, so we round off some of the morning’s highlights from the session, the full kick-off of CAST 2014.
Be sure to stay tuned throughout the day, too, as we interview testers on the spot at CAST. Here’s some of the best of James’ talk this morning:
Testing is performed. Not executed. #CAST2014
— uTest (@uTest) August 12, 2014
This particular one led to an on-point, humorous reply from one attendee:
@uTest Although there are some tests I would like to execute so never have to deal with them again!
— Nancy Kelln (@nkelln) August 12, 2014
Let's be clear what testing is. It's that 'thinking' part. #CAST2014
— uTest (@uTest) August 12, 2014
Part of your skill as a tester is the ability to label and explain how and why you test. #CAST2014
— Perze Ababa (@perze) August 12, 2014
As a proud sponsor of the Association for Software Testing’s 9th Annual conference this week, CAST 2014, uTest will be in New York City through Wednesday covering all of the happenings and keynotes from this major (and now sold-out) testing event.
Beginning Tuesday here on the Blog, uTest will be providing daily video interviews with speakers from some of the conference’s sessions and keynotes as they leave the stage. Additionally, uTest will also be live-tweeting @uTest on Twitter, using the official event hashtag of #CAST2014 throughout the course of the conference’s full days on Tuesday and Wednesday.
This year’s theme is ‘The Art and Science of Testing,’ so conference speakers will share their stories and experiences surrounding software testing, whether bound by rules and laws of science and experimentation, or expressed through creativity, imagination, and artistry. Some of these esteemed speakers include:
- James Bach
- Michael Bolton
- Fiona Charles
- Anne-Marie Charrett
- Matthew Heusser
- Paul Holland
- Henrik Andersson
- Benjamin Yaroch
In addition to the live Tweets and video blogging, uTest will be providing a full recap later on in the week highlighting some of the best discussions, topics and happenings from the show.
Be sure to follow all of the coverage from CAST 2014 on the uTest Blog, and on Twitter @uTest and #CAST2014. And if you’re at the show this week, let us know in the comments below, or reply to us while we’re tweeting at one of the sessions or keynotes!
So I hear a lot about hugging developers. ‘Have you hugged a developer today?’
In a recent video from the good folks at Smartbear, in fact, software testing consultant Dawn Haynes said, “Why don’t you buy a developer a doughnut? You know, make friends and give people positive feedback as well, not just only the negative.”
And I don’t have anything against this. In fact, developers are lovely people who have to put up with a lot themselves. My only gripe is that the testers aren’t usually the ones getting these bountiful gifts of doughnuts and hugs.
Until today. The Community Management Team at uTest decided it was about time that a tester got some hugs, so we trekked from the 5th floor penthouse at the Applause/uTest HQ down to the 4th floor and rectified this immediately, embracing our in-house Applause and uTest QA Manager Bryan Raffetto. Needless to say, love was in the air. It’s about time someone hugged a tester. If anyone knows the hardships a tester must endure and can empathize, it’s the CM team.
So have YOU been hugged today? If not, be sure to hug a fellow testing colleague. Maybe they’ll return the favor.
If you’re also feeling adventurous in spreading the tester love, feel free to tag a picture on Twitter with #hugatester. Maybe you’ll end up on the uTest Blog!
Marek (Mark) Langhans is a Gold-rated tester and former Forums Moderator in the uTest Community, and hails from Prague. Mark has tested information systems, web, mobile and desktop applications of domestic financial institutions for a couple of years now. In June, he participated in the Europe Preliminary round of the Software Testing World Cup (STWC), and shares his experience here, along with some advice for future STWC participants.
Before I go into any details about the competition, let me thank the entire team and all the judges and product owners behind the Software Testing World Cup (STWC).
If testing needed a push into the general public eye, this event was the right way to go about it. Not only has it given us testers a way to compete and connect with each other, and to see our limitations, strengths and weaknesses, but it has also put the testing profession into a whole new perspective. Testing has been made cool, and that is a rare feat.
On Friday, the 13th of June, three of my colleagues and I participated in the Europe Preliminary round of the STWC. Even though we had a pretty awesome base in our firm HQ’s basement, in the end, we didn’t finish near the top. However, you can’t put a price tag on what we took away from the experience. In three hours, you learn more about yourself and your testing capabilities than you may have in your whole testing career.
The competition started on time. Thirty minutes before the official start, we all received emails introducing us to the software under test (Sales Tool — for more details, check out the YouTube stream), the scope, and, of course, some tips on what we should focus on. The email contained a link to the application so we could get familiar with it before the actual competition. Those thirty minutes flew by, as the application was something none of us had come across before, and we tried to figure out what we could actually test and how.
The email also contained a link to a YouTube channel which streamed, in real time, a Google Hangout of the judges, customer and product owner. In the channel’s comments area, we could ask questions regarding the scope, the application, and everything about the competition. We could also ask these questions via Twitter. My team and I were focused more on the application, as it was unknown to us, so we didn’t pay much attention to the stream, even though we had a huge screen dedicated to it right in front of us. We listened with only one ear, which, in retrospect, was probably a mistake.
The application wasn’t that complex, so there weren’t that many features, but the ones we could cover weren’t that easy to test. We found a few issues, and everything we came across was reported in the Agile Manager, the official bug reporting tool. Before the competition, we prepared a bug report template so we all had the same information contained in the reports, but we weren’t consistent…another mistake, as this was taken into consideration in judging.
There was one huge issue with the reporting tool: teams reported that they could see other teams’ bug reports. This made the competition a little uneven, to be honest, but I do not think it made any difference in the end.
After 2 1/2 hours of testing the application, we moved on to creating the test report. We had prepared a template for it, so we just filled in our findings. For non-English-speaking teams, this was, I would guess, the hardest part: making the report understandable without any major grammatical mistakes. We followed a few tips from Global Jury member Matt Heusser: keep it simple, on a few pages with clearly visible sections; have a summary at the top; and below that, go into detail about your findings and how you arrived at them.
We sent out the report five minutes after 12. We immediately received an auto-reply that the report had been received, and after that, we packed our stuff and went home. Of course, our heads were filled with thoughts of what we had done, what we could have done better, and what we would do differently next time.
Advice for the Prospective STWC Participant
When I look back at the whole experience and compare it to uTest, for instance, I originally thought the two would be very similar in many ways, but they weren’t.
Having three hours to test something isn’t that bad — you get used to it when testing applications here at uTest. But at uTest, with three hours, you just report bugs and that is it. Maybe you’ll fill in a review whenever you have time. In the competition, though, there is a responsibility to report your findings back to the customer and recommend whether the application should be released or postponed. Doing all of this in three hours is quite different from just testing and then moving on to another project.
This contest wasn’t all about how well you test and how many valuable bugs you log; it was also about communication, both within your team and with other teams and the team behind the STWC. This was mentioned many times by Matt, and teams that were publicly visible were given additional points, which may have helped a few of them finish in a better position.
As this contest was made for testers and our profession, we should have given something small back in return — at least promoting it on social media and so on. A colleague of ours tweeted about our team a few times, but looking back, it wasn’t enough. It wasn’t all about the points, but about being part of something greater and helping to achieve something.
If you decide to be part of an STWC team next year, take the time to sit down with your team before the competition and set a few strategies. We did so only once, and in that time we only came up with bug and test report templates. We didn’t settle on any sort of strategy for approaching the application under test, or at least make a list of what to test for both mobile and web applications. In just three hours, there isn’t much time to be creative, and when you have no guidelines, you may start panicking.
Three hours is a very short time, so don’t try to test everything, the entire application and all its features. It is more about making compromises and focusing on some key areas, each team member on something different, or just working in pairs in the same area.
From what I have taken from the competition, communication within our team helped us understand the SUT very quickly, but we tried to test every feature available, and thus ended up scratching only the surface and not going deep enough. When you compromise, you have at least something to write about in your test report, where you can detail what you covered and what you would cover if you had more time.
The 4th International Software Craftsmanship and Testing (SoCraTes) Conference kicks off in Soltau, Germany tomorrow and runs until August 10, 2014. What sets SoCraTes apart from other testing conferences is the emphasis on running it using Open Space Technology (OST). OST is a way of hosting conferences that is “focused on a specific and important purpose or task—but beginning without any formal agenda, beyond the overall purpose or theme.”
In this case, the event is about the sustainable creation of useful software in a responsible way and is a joint effort of all Softwerkskammer groups. The show includes hands-on coding sessions, sessions focused on discussion, and interactive talks.
Follow tweets from this year’s SoCraTes event via their Twitter account @socrates_2014.
Want to know what other events are happening soon? Check out upcoming software testing events like SoCraTes 2014 on the uTest Events Calendar.
This is the second part of tester and uTest Enterprise Test Team Lead Lucas Dargis’ journey on becoming ISTQB-certified. Be sure to check out Part One from yesterday.
After about 3 minutes, I realized just how ridiculous the test was. Some of the questions were so obvious it was insulting, some were so irrelevant they were infuriating, and others were so ambiguous all you could do was guess.
Interestingly, testers with experience in context-driven testing will actually be at a disadvantage on this test. When you understand that the context of a question influences the answer, you realize that many of the questions couldn’t possibly have only one correct answer, because no context was specified.
You are allotted 60 minutes to complete the test, but I was done and out of the building in 27 minutes. That I finished quickly wasn’t because I knew all the answers — it was rather the exact opposite. Most of the questions were so silly that all I could do was select answers randomly. Here are two examples:
“Who should lead a walkthrough review?” – Really? I was expected to memorize all the participants of all the different types of meetings, most of which I’ve never seen any team actually utilize?
“Test cases are designed during which testing phase?” – Umm…new tests and test cases should be identified and designed at all phases of the project as things change and your understanding develops.
According to the syllabus, there are “right” answers to all the questions, but most thinking testers, those not bound by the rigidness of “best practices,” will struggle because they know there is no right answer.
Despite guessing on many questions, I ended up passing the exam, but that really wasn’t a surprise. The test only requires 65% to pass, so a person could probably pass with minimal preparation, simply making educated guesses. I left the test in a pretty grumpy mood.
For the next few days, I was annoyed. I felt like I had completely wasted my time. But then I started thinking about why I took the test in the first place, and what that certification really meant. As a tester striving to become an expert, I wanted to know for myself what certifications were about. Well — now I have first-hand experience with the process. I’m able to talk about certifications more intelligently because my opinions and views about them are my own, not borrowed from others.
Former tennis great Arthur Ashe once said, “Success is a journey, not a destination. The doing is often more important than the outcome.” I think this sums up certifications well. For me, there was no value in the destination (passing the exam). I’m not proud of it, and I don’t think it adds to or detracts from my value as a tester. However, I felt there was value in the journey.
Through my studying, I actually did learn a thing or two about testing. I was able to build some structure around the basic testing concepts. I learned some new testing terms that I have used to help explain concepts such as tester independence. Studying gave me practice analyzing and questioning the “instructional” writing of others (some of which I found inaccurate, misleading or simply worthless). The whole process gave me insight into what is being taught, which helps me better understand why some testers and test managers believe and behave the way they do.
But more importantly, I became aware of the way I formulate opinions and of how susceptible I am to the teachings of confident and powerful people.
The lessons I learned from testing leaders early in my career freed me from the constraints of the traditional ways of thinking about testing, but in return, I took on the binds associated with more modern testing thinking. Ultimately, I came to a conclusion similar to that of many wise testers before me, but I did so on my terms, in a way that I’m satisfied with.
One day, I hope to have a conversation with some of the more boisterous certification opponents and say, “I decided to get certified, in part, because you were so adamantly against it. I didn’t do it out of defiance, but rather out of a quest for deeper understanding.”
If you made it this far, I thank you for sharing this journey with me. Make a mention that you finished the story in the Comments below, and I’ll send you a pony.
Thanks to our great community, over the past month at uTest, we’ve added more than 30 tools to our ever-expanding library of Software Testing Tools, including those in security, automation and even screen mirroring.
The Tool Reviews section of uTest is your one-stop shop to rate, review and discuss the tools that are supposed to make testers’ lives (hey, that’s you!) easier. Here’s just a small sampling of the tools being talked about by our community over the past 30 days:
uTest has designed Tool Reviews to be the place where testers can make educated decisions on the tools that may become a part of their daily routine, and to see which tools have won the hearts — or the ire — of their testing peers. If we’re ever missing your favorite test tool, be sure to submit it to us, and we’ll add it right away so you can leave the first review!
A Gold-rated tester and Enterprise Test Team Lead (TTL) at uTest, Lucas Dargis has been an invaluable fixture in the uTest Community for 2 1/2 years, mentoring hundreds of testers and championing them to become better testers. As a software consultant, Lucas has also led the testing efforts of mission-critical and flagship projects for several global companies.
Here, 2013 uTester of the Year Lucas Dargis shares his journey to becoming ISTQB-certified, and also tackles some of the controversy surrounding certifications.
In case you missed it, testing certification is somewhat of a polarizing topic. Sorry for stating the obvious, but I needed a good hook and that’s the best I could come up with. What follows is the story of my journey to ISTQB certification, and how and why I pursued it in the first place. My reasons and what I learned might surprise you, so read on and be amazed!
Certifications are evil
Early in my testing career, I was a sponge for information. I indiscriminately absorbed every piece of testing knowledge I could get my hands on. I guess that makes sense for a new tester — I didn’t know much, so I didn’t know what to believe and what to be suspicious of. I also didn’t have much foundational knowledge with which to form my own opinions.
As you might expect, one of the first things I did was look into training and certifications. I quickly found that the pervasive opinion towards certifications (at least the opinion of thought leaders I was learning from) was that they were at best a waste of time, and at worst, a dangerous detriment to the testing industry.
In typical ignoramus (It’s a word, I looked it up) fashion, I embraced the views of my industry leaders as my own, even though I didn’t really understand them. Anytime someone would have something positive to say about certification, I’d recite all the anti-certification talking points I’d learned as if I was an expert on the topic. “You’re an idiot” and “I’d never hire a certified tester” were phrases I uttered more than once.
A moment of clarity
Then one fine day, I was having a heated political debate with one of my friends (I should clarify…ex-friend). We had conflicting views on the topic of hula hoop subsidies. He could repeat the points the talking heads on TV made, but when I challenged him, asking prodding questions trying to get him to express his own unique ideas, he just went around in circles (see what I did there?).
Like so many other seemingly politically savvy people, his views and opinions were formed for him by his party leaders. He had no experience or expertise in the area we were debating, but he sure acted like the ultimate authority. Suddenly, it dawned on me that despite my obviously superior hip-swiveling knowledge, I wasn’t that much different from him. My views on certifications and the reasons behind those views came from someone else.
As a tester, I pride myself on my ability to question everything, and to look at situations objectively in order to come to a useful and informative conclusion. But when it came to certifications, I was adopting the opinions of others and acting like I was an expert in an area I knew nothing about. After that brief moment of clarity, I realized how foolish I’d been, and felt quite embarrassed. I needed to find the truth about certifications myself.
Now let me pause here for a second to clarify that I’m not saying everyone should go take a certification test so they can have first-hand experience. I think we all agree that it’s prudent to learn from the experience of others. For example, not all of us have put our hand in a fire, but we all know that if you do, it will get burned.
It’s perfectly fine to let other people influence your opinions as long as you are honest about the source of those opinions. For example, say:
Testers I respect have said that testing certificates are potentially dangerous and a waste of time. Their views make sense to me, so it’s my opinion that testers should look for other ways to improve and demonstrate their testing abilities.
However, if you’re going to act like an expert who has all the answers, you should probably be an expert who has all the answers.
Studying for the exam
I took my preparation pretty seriously. Most of my study material came from this book, which I read several times. I spent a lot of time reading the syllabus, taking practice exams and working through a few study apps on my phone. I also looked through all the information on the ISTQB site, learning all I could about the organization and the exam.
Since I knew all the anti-certification rhetoric about how this test simply measures your ability to memorize, I memorized all the key points from the syllabus (such as the 7 testing principles). But I also took my studying further, making sure I understood the concepts so I could answer the K3 and K4 (apply and analyze) questions.
Headed into the test, I felt quite confident that I was going to ace it.
Be sure to check out the second part of Lucas’ story at the uTest Blog.
According to a recent joint study by uTest, IBM and TechWell, testers are spending a whole lot of time on non-testing-related activities, as well as on testing activities they feel are simply a waste of time.
The study, conducted in April 2014 by TechWell, surveyed 250 software testing pros from six continents, and found that:
- 36% of testers surveyed spend over half of their week not testing
- 58% cite ad hoc requests as the biggest disruptor of their work weeks
- Among activities where testers spend more time than they’d like, almost 60% cited waiting for test assets, while over 50% cited ‘non-testing’ activities
- Some of the top activities testers wish they could spend more time on include: creating and executing automated tests, performing exploratory testing, and designing and planning tests
- Testers want greater organizational improvements, including increasing automation, in order to free up their time to focus on testing and improving quality
These numbers and findings are really just the tip of the iceberg — be sure to download the full report from IBM right here. It’s a good read, and gets even further into the rich story of testers being weighed down by bottlenecks hindering their ability to deliver quality.
Are these stats in line with what you’ve experienced? Are you continually spending more time on non-testing-related activities, and wish your time was better devoted to something more productive? We’d like to know in the Comments below.