uTest

Authors in Testing Q&A With Penetration Testing Expert Peter Kim

Thu, 10/30/2014 - 20:23

Peter Kim has been in the information security industry for the last 10 years and has been a penetration tester for the last seven. He is the author of the best-selling computer hacking book, ‘The Hacker Playbook: Practical Guide to Penetration Testing.’ He was the lead penetration tester for the U.S. Treasury and Financial Management Systems.

In addition, he was a penetration tester for multiple utility companies, Fortune 1000 entertainment companies, government agencies, and the Federal Reserve. He also gives back to the security community by teaching penetration testing courses at a community college, and creating and maintaining one of the largest security communities in the Santa Monica, CA area. He has also spoken at multiple security conferences. You can find him at his blog, Secure Planet.

In this Q&A, uTest spoke with Peter about some of the more memorable vulnerabilities he has come across while hacking web apps, what he thinks of Apple Pay, and why his book is used in college coursework. Stay tuned at the end of the interview for a chapter excerpt from ‘The Hacker Playbook,’ currently the number one-selling software testing book on Amazon.

uTest: You’ve been in security and pen testing for a while now. Without giving out too many specifics, what was one of the more surprising or memorable lapses in judgment you have come across while ethically hacking web applications?

Peter Kim: I could write a book just on this question. I mean, I’ve seen it all, from a single company having 20+ different SQLi vulnerable public web applications, default credentials into their whole camera system, PII data leaks from major e-commerce sites, all the way to having access into equipment that controlled certain types of SCADA utility networks.

The funniest one I came across was about five years ago. A major AV vendor had all their clients talking back to their central web application over HTTP APIs. Sniffing the traffic, I was able to gain the administrative credentials in clear text from a client. Once I logged into the web application, I was able to modify the update agents within the web interface to force the end user to download a malicious file and execute it on the host systems.

We all had a good laugh, because what was meant to protect the network allowed us to compromise the network, and, ironically, the companies that advocated security had one of the worst IT security practices.

uTest: With all of the data breaches in the news, are organizations not investing enough money in their security strategies, or are they just not investing enough in the right security strategies/programs such as extensive penetration testing?

PK: This is a tough question to answer. I think everyone is looking for the golden egg answer, but it’s much more complex than that.

What I’ve been seeing as the problem is that corporations are becoming tool-dependent. We have host/network-based monitoring, antivirus, malware detection, vulnerability scanners, managed services, application filters, email proxies, and web proxies. Yet, our users are still getting infected with malware, clicking on spear phishing emails, and aren’t able to detect and stop C2 traffic properly.

People focus too much on words like APT, zero-day, PCI, and checkboxes. I’ve worked with security teams where the analysts spent most of their time fighting adware and junk. This isn’t where we should be today, and we should have our analysts focused on identifying anomalies and locking down networks.

With the recent large breaches, like on those Point of Sale (PoS) devices, those networks and systems were only designed for a single purpose. Time should have really been spent detecting any anomalies and alerting on any changes on those systems. If systems are specifically made to do XYZ, it should be very easy to identify and alert when a system decides to do anything suspicious.

I also believe we are still failing at user education. This isn’t just the responsibility of the security department, but it should be everyone’s job to be part of the solution. Users need to be able to identify malicious attacks, know how to report these incidents easily, and to stop clicking on malicious email links.

uTest: Do you think programs like Apple Pay are going to be a savior for a retail industry that has been so hard hit with breaches at Home Depot, Kmart and Target, amongst others?

PK: The great thing about hacking is that it’s always about doing what they say is impossible. With that said, what Apple is doing with things like Apple Pay is a step in the right direction. By removing the need for third-party credit card number storage, requiring multiple factors of authentication, and not having to hand your credit card to a random stranger for purchases (like at restaurants, grocery stores, and gas stations), it provides many different additional layers of security for the end user.

Just remember that the bad guys adapt just as quickly, if not quicker, than the good guys. So if credit card cloning becomes hard, what about spoofing NFC, what about attacking jailbroken devices with financial-purposed malware, or attacking iTunes accounts associated with your credit cards?

It also really comes down to adoption. With Google going in one direction with payments and Apple going in another, without mass adoption, we might not see the full potential benefits of these systems.

uTest: You’ve mentioned that your book ‘The Hacker Playbook’ has been used as core university materials in some colleges. Could you tell us a bit about which programs it is used in, and where it fits in with the curriculum as an educational resource?

PK: Although the book wasn’t originally developed to be used as a college resource, it seems to have ended up aligning with many different undergrad and graduate programs.

Graduate courses like “Advanced Topics – Penetration testing forensics” at George Mason University have incorporated it as the core book for their course. In addition to being added to multiple U.S. universities, it has also been incorporated in multiple universities in other countries (Sheffield Hallam University, Asian Institute of Technology, and Algonquin College). The great part about security is that it isn’t language/culture-bound. Attacks in one country are just as prevalent in another country.

I see this book as a good fit for advanced network security courses. Whether it is forensics, incident response, or penetration testing, this book gives students a real-world view into what both professionals and unethical hackers are doing. Being able to understand and replicate these attacks allows students to prepare for the types of attacks they’ll encounter in their professional career.

uTest: The book doesn’t read like an encyclopedia – it’s a story walking a tester through the entire penetration testing process from network layer to Web application layer. Could you describe why you laid the book out the way you did, and whether it’s designed for the security rookie or a seasoned veteran?

PK: I’ve read a ton of different security books and they were always laid out by tool or by protocol. I never really came across a book that walked me through an actual penetration test. The other thing I didn’t see too often was a book breaking out of the norm by trying to incorporate and push creative attacks that might not have been fully polished. This allows the reader to continue his/her research and progress their own skills.

The layout was also developed based on needs I’ve had when performing my own penetration tests. Many times, I’ve gotten stuck at a particular point during a test. For example, I might have compromised a host as a regular domain user, but wasn’t able to get that domain admin account. I just pop open ‘The Hacker Playbook’ and skip to “The Lateral Pass” section, and review all of the different options I have. Other times, I get caught up by a certain AV vendor and turn to the “Quarterback Sneak” section and bypass AV.

As the book was originally written as a collection of my lifetime of notes and tips, it’s not targeted for those without any experience. Those that benefit the most are the ones that have played around with Metasploit and Meterpreter. The most surprising part was that a lot of senior penetration testers have come back to me and said that they were really surprised to have learned a bunch of new things from my book. That alone makes it all worth it.

Excerpt from The Hacker Playbook: Practical Guide to Penetration Testing:

Hunched over your keyboard in your dimly lit room, frustrated, possibly on one too many energy drinks, you check your phone. As you squint from the glare of the bright LCD screen, you barely make out the time to be 3:00 a.m. “Great”, you think to yourself. You have 5 more hours before your test is over and you haven’t found a single exploit or critical vulnerability. Your scans were not fruitful and no one’s going to accept a report with a bunch of Secure Flag cookie issues.

You need that Hail Mary pass, so you pick up The Hacker Playbook and open to the section called “The Throw – Manual Web Application Findings.” Scanning through, you see that you’ve missed testing the cookies for SQL injection attacks. You think, “This is something that a simple web scanner would miss.” You kick off SQLMap using the cookie switch and run it. A couple of minutes later, your screen starts to violently scroll and stops at:

Web server operating system: Windows 2008
web application technology: ASP.net, Microsoft IIS 7.5
back-end DBMS: Microsoft SQL Server 2008

Perfect. You use SQLMap to drop into a command shell, but sadly realize that you do not have administrative privileges. “What would be the next logical step…? I wish I had some post-exploitation tricks up my sleeve”, you think to yourself. Then you remember that this book could help with that. You open to the section “The Lateral Pass – Moving through the Network” and read up and down. There are so many different options here, but let’s see if this host is connected to the domain and if they used Group Policy Preferences to set Local Administrators.

Taking advantage of the IEX PowerShell command, you force the server to download PowerSploit’s GPP script, execute it, and store the results to a file. Looks like it worked without triggering Anti-Virus! You read the contents of the file that the script exported and lo and behold, the local administrative password.

The rest is history… you spawn a Meterpreter shell with the admin privileges, pivot through that host, and use SMBexec to pull all the user hashes from the Domain Controller.

Categories: Companies

How Sesame Street Can Help You Become a Better Software Tester

Wed, 10/29/2014 - 22:09

All I really need to know, I learned in Kindergarten.

STARWEST presenter Robert Sabourin – a 30+ year veteran and well-respected member of the software development community – took that nugget of conventional wisdom and put his own unique tech spin on it in his course on Testing Lessons Learned from Sesame Street.

While the topic was fun and lighthearted, Rob took his subject matter seriously and impressed on attendees just how important it is to learn and master the basics. But what are “the basics”?

Let’s take a closer look at what you really need to know to build a solid software testing foundation.

Rob’s presentation focused on two main areas of professional – and personal! – development: cognitive skills and social skills. Developing your cognitive skills allows you to think more analytically, to develop efficient models and lay out precise explanations for your processes and reasoning. Strong social skills elevate your ability to collaborate to a whole new level of effectiveness and can help grow your reputation as a thought-leader.

Think of your team as a “neighborhood.” To be successful, people take on many diverse roles which require them to focus on various short-term goals. You may all be working toward the same end goal, but to get there, you’ll have to be sensitive to diversity and understand how successfully navigating it can enhance the overall quality of the end product.

Much as John F. Kennedy asked in his 1961 inaugural presidential address, “ask not what your country can do for you; ask what you can do for your country,” Rob Sabourin encourages testers to ask themselves and their teams:

Ask not what the system can do for the user; ask what your user does with the system.

Testers often find themselves in the position of defending their work, proving the whats and whys and hows. We can learn a lot from Big Bird and his struggle to help justify the existence of his imaginary friend Mr. Snuffleupagus. How did Big Bird prove he was real? Garnering and presenting proof of his friend’s existence parallels a tester’s role in reporting an issue found while testing a product. It requires persistence, evidence, and continuing advocacy for the bug.

Oscar the Grouch – while seen negatively by many – can actually be a great role model for software testers! Oscar thrives in a messy environment, where there’s plenty of “trash” (bugs!) and he excels at breaking things (knowledge of inducing failure modes). Oscar’s goal is to travel the unhappy path since that’s where he can best employ his talents in disrupting the flow, similar to how a great tester will learn to step outside the comfort of the happy path, dig deep and think creatively to uncover valuable issues.

And, of course, there’s Kermit the Frog. We can’t forget him! Like a good bug-hunter, Kermit is the ultimate reporter; he knows how to observe, blend in, gather facts, and report them in a factual (non-emotional) way. One of the biggest lessons we can learn from Kermit is that “it’s not easy being green,” where “green” is synonymous with the progressive, persistent investigation that makes excellent testers so valuable.

Kudos to Rob Sabourin on creating and delivering an excellent – and enjoyably creative – STARWEST 2014 session presentation!

Want to see the full slide deck from Rob’s presentation? Check it out on the uTest forums here, then answer our poll asking which Sesame Street character you best identify with and chat with other testers about it!

Categories: Companies

uTest Announces New Software Testing Career Mentoring Program

Wed, 10/29/2014 - 15:00

ACEing your work as a software tester just got a little easier.

uTest is proud to introduce the beta version of A.C.E. (Assisted Continuing Education), a new software testing career mentoring initiative beginning November 1. The program will be available to all members of the uTest Community.

The mentoring program is designed to help software testers build a solid foundation of testing education. By honing these essential skills, participants will be well-equipped to grow their testing careers and strive for professional success on many levels. This will be achieved through participation in various course modules, each geared to the software testing professional at various stages of his or her career.

At the November 1 beta launch of the program, A.C.E. will offer the first two modules of the program, How to find valuable bugs and How to write great bug reports. Testers will have the option of signing up for one (or both) of the course modules.

Both courses will consist of a brief independent study, along with a graded homework assignment. The core concepts gained through the two modules will then be tied together in a live webinar with a uTest Community testing expert and Test Team Lead, and a hands-on, live exploratory testing session where participants can practice their new skills.

Sound like something you want to be a part of? A.C.E. is free for members of the uTest Community. Check out the full announcement now (requires uTest login) for further details and to sign up today. Space is limited.

If you’re not a uTest member, you can sign up free of charge to attend the mentoring program, and get access to free training, discussions with peers, the latest news, and opportunities to make money by working on testing projects with some of the top brands in the world.

Categories: Companies

STARWEST 2014 Interview: Qualities Test Managers Seek in Software Testers

Tue, 10/28/2014 - 20:00

Recently at STARWEST, I caught up with professional software testing team lead Richard DeBarba.

Rich gave some great insight into what qualities he looks for in a team member and how attending STARWEST helps him focus on the optimal training materials and direction for his team. By attending conferences like STARWEST, Rich is able to keep up with recent trends in software testing and learn about progressive new tools or practices.

So what types of testers do team managers like Rich look for? Check out the video below to see what he had to say!

Categories: Companies

Preparing for a Load Test With JMeter: The Vital Point You Might Be Overlooking

Tue, 10/28/2014 - 15:30

This piece was originally published by our good friends at BlazeMeter – the Load Testing Cloud. Don’t forget to also check out all of the load testing tool options out there — and other testing tools — along with user-submitted reviews at our Tool Reviews section of the site.

If you often run load tests, you probably have a mental checklist of questions that run through your mind, including:

  • How many threads per JMeter engine should I use?
  • Can my engine handle 10% more threads?
  • Should I start with the maximum number of threads, or add as I go?

All of these questions are important and should be carefully considered – but what about the load test itself? Have you considered how the actual procedure will be managed?

It’s easy to get so wrapped up in the technical details that you lose sight of the overall picture. Even with BlazeMeter’s self-service load testing – which is designed to simplify performance and load testing for developers and QA testers – it’s still vital to take control of the process management to ensure each test runs smoothly.

Why Should I Care?

Load tests require a great deal of cooperation and teamwork. Everyone needs to be coordinated in order to quickly analyze the data, identify the bottlenecks and even solve them (if possible) during the test.

For example: the developers/QA testers should create and share the script with all departments involved in the test, to make sure that everyone understands the scenario and how it can affect their particular system. An n-tier application is made up of several disciplines, each one requiring the same level of attention – be it database, front-end, back-end, or monitoring services. It is imperative that the staff understand the upcoming load test scenario, regardless of their knowledge of JMeter and/or load testing, so as to create the most agile environment for the duration of the test. If the staff clearly understand what is happening when a bottleneck is hit, it’s much easier for them to analyze the data and quickly tend to the issue. If they aren’t on par, there’s a much greater chance that they won’t be able to overcome the bottlenecks and delays will occur.

Once the ins and outs of the infrastructure are understood, further managerial tasks are needed. For example: sending out updates and involving your operations team and third parties like Akamai or Amazon to avoid complications such as blocking, shaping, or even inadvertently breaking a legal agreement during the test. While they may seem excessive, it’s really worth taking these supplementary steps before starting the load test to ensure the best results.

Technical and Managerial Perspectives of a Load Test

The Technical Aspects of a Load Test

When it comes to the technical aspect of load testing, as detailed in How to Run a Load Test of 50k+ Concurrent Users, it’s important to supply everyone involved with a test scenario outlining the exact processes of the test. By taking this step, even staff members who are not JMeter experts will understand that during the test, they may experience issues like an excessive number of log-ins, ‘add to cart’ requests, image usages, database queries, and so on. The scenario should be simplified and easy to read for everyone involved. It should include a defined scaling process and explain that the ultimate goal for the number of users may only be met within a number of load tests.

The technical approach usually includes building the script (and its data) to address the specific scenario, testing it locally and verifying that it actually works in JMeter. Once the script performs as expected using the SandBox feature, you should evaluate how many threads can be applied to a single load engine without crashing (ensuring the load engines themselves won’t be the bottleneck). Then, the script will be scaled to a cluster of engines, and more clusters can be added as you go.

Important points to consider:

1. Does your data need to be unique per engine? If so:

  • Set up a different CSV for each engine
  • Use the “Split any CSV file to unique files and distribute among the load servers” option
  • Use JMeter functions like __InstanceID, __RandomString(), __threadNum() to add unique data to your test (see the example after this list).

2. Does your scenario have data manipulation in real time? If so:

3. Do you use timers? If not, you should (even if it’s just 300ms) because every user has some think time between pages

  • You can use the Throughput Shaping Timer or the Constant Throughput Timer to control your test hits/s
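To make the data-uniqueness point in item 1 concrete, here is a small, hypothetical illustration of how the JMeter functions mentioned above might appear inside an HTTP sampler’s parameter values (the parameter names are invented for the example):

  username = user_${__threadNum}
  session_token = ${__RandomString(10,abcdefghijklmnopqrstuvwxyz)}

Each thread substitutes its own thread number and a fresh 10-character random string, so no two virtual users submit identical data.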

Managing Your Load Test

From a management perspective, of course, every company is unique. But most share the same key criteria. These include:

1. Scheduling a meeting before the actual test.  Two weeks is generally enough time for this. It allows everyone involved to see the expected script scenario and everyone can discuss issues like who to alert about the test and possible interferences.

2. Including a representative from each department during the initial load tests. This will ensure smoother sailing once the test is up and running. In later tests, you might want to schedule the actual environment you’re going to load test. For example: in a staging environment, the VP of R&D needs to know that the environment is going to be used, which may affect updates to production. Whereas, if load tests are run on production environments, a landing page should be created to notify users that maintenance procedures will take place during that particular time.

3. Creating a drawing or flow chart of the actual script. This is an additional measure but a useful one –  after all, a single picture says a thousand words. It helps people who aren’t familiar with JMeter or aren’t aware of what’s happening with particular clients or in the front-end to relate and understand what’s expected to happen.

Depending on the test’s complexity, you can make a flowchart of:

  1. Your script flow
  2. Cache warmup (after doing a reset cache)
  3. Test scenario


This method effectively divides the load test into two parts: the pre-load test, which creates all of the data, and the actual load test itself. Appointing a load test supervisor to announce the various stages of the test will help everyone involved to stay on the same page. Utilizing an organized platform, such as Campfire or HipChat, will allow everyone to communicate quickly during the load test, as well as enable the supervisor to control and deliver critical assignments when needed. Additionally, these platforms provide a space for each department to present their conclusions from the test. You can also record the test – which will be a great help when it comes to running future tests and reports.

Categories: Companies

Testing the Limits With Testing ‘Rock Star’ Michael Larsen — Part II

Mon, 10/27/2014 - 15:30

In Part II of our latest Testing the Limits interview with Michael Larsen, Michael talks about why test team leads should take a “hands-off” approach, and why testers should be taken out of their comfort zones.

Get to know Michael on his blog at TESTHEAD and on Twitter at @mkltesthead. Also check out Part I of our interview, if you haven’t already.

uTest: In a recent post from your blog, you talked about the concept of how silence can be powerful, especially when leading teams. Do you think there isn’t enough of this on testing teams?

Michael Larsen: I think that we often strive to be efficient in our work, and in our efforts. That often causes us to encourage other testers to do things “our way.” As a senior software tester, I can often convince people to do what I suggest, but that presupposes that I actually know the best way to do something. In truth, I may not.

Also, by handing other testers the procedures they need to do, I may unintentionally be encouraging them to disengage, which is the last thing I want them to do. As a Boy Scout leader, I frequently have to go through this process week after week. I finally realized that I was providing too much information, and what I should be doing is stepping back and letting them try to figure out what they should do.

I also realized that answering questions with questions helped them look at the problems they were facing in new ways. Often, they would come up with solutions that would not be what I would suggest, but their solution was often more inventive or interesting than what I would have taught them. This is why I value the ability to either step back and be quiet, or answer questions with questions, because I love seeing what solutions they come up with.

uTest: Specifically, you feel that too much direction is given, but not enough “free roam” for creativity and critical thinking through exploratory testing.

ML: Exactly. When we give too many parameters, or we require too many steps be processed the same way, we are cutting off the natural curiosity of the tester. Having a checklist to make sure things are covered is all well and good, but I would encourage test leads and senior testers to allow a fair amount of latitude for the testers on their team, so that their natural curiosity is engaged.

uTest: Fill in the blank. One thing missing on many testing teams is _____.

ML: Collaboration. Too many testers, even when they work on teams, get their narrow slice of projects or stories, and then they go into heads-down mode. I think testing teams would be well-served to look for opportunities where testers can interact, and look at a particular story or section as a team.

Let them explore together and ask questions of each other, share what they learn, and encourage those “a-ha!” moments to occur. We can certainly have “a-ha!” moments working on our own, but the opportunities to have them go up exponentially when we as testing teams set aside some time to explore together and riff off of each other.

uTest: It was a pleasure meeting you at CAST this summer. You were there as not only a Board member for the Association of Software Testing (AST), but as a speaker. Could you tell us a little about your session at CAST?

ML: I gave a talk along with Harrison Lovell about “Coyote Teaching,” which is in some ways a more formalized approach to the “be quiet and ask questions” approach I mentioned earlier. The idea behind Coyote Teaching is that we want to give testers the opportunity to learn and grow, and we help them grow the best when we don’t spoon feed them everything. Coyote Teaching is all about immersing yourself in the environment, looking for clues, asking questions, and answering with more questions.

Mostly, it was about how Harrison and I used the Coyote Teaching methods and model as a basis for our mentoring relationship. He explained from the perspective of a relatively new tester (the mentee), and I explained it from the perspective of a veteran tester (the mentor). I think our session was overall successful because we offered experiences from both sides of the relationship.

uTest: Are there any other endeavors coming up that you are excited about?

ML: I was chosen to be the President of AST, and in that process, the board and I are looking to do a number of things in the coming years, many of them centered around software testing education and expanding the options we offer. We currently teach the first three Black Box Software Testing courses through BBST, and we are looking at expanding from there. We are also excited about a project related to self education for software testers, and you should be hearing much more about that in the coming months.

uTest: You presented at CAST with a ton of context-driven proponents, from James Bach to Henrik Andersson. Is “context-driven” best how you would describe yourself as a tester?

ML: Yes, I believe it is critical to be able to frame the testing that we do within the correct context for the project we are working on, and within the time and constraints we need to deal with. The fact is, projects change, often dramatically, and always in real time. Being able to pivot and approach our testing based on new information, new market demands, and changing deadlines is critical.

For some, saying that you are “context-driven” may feel like we are stating the obvious. Yet so many testers doggedly plow through the same procedures day after day, regardless of the landscape and any changes that may have occurred.

uTest: You’ve said that you’re not “just” a software tester. You write, you lead, you hack, you program, and you play detective and journalist. In order for testers to be successful, do you think it’s best for testers to be taken out of their comfort zones?

ML: Having been one who has already felt how depressing stagnation can be, I will say yes, I think testers do owe it to themselves to explore new avenues, to try to do things they haven’t done before, and to avoid getting too comfortable in any given niche. Don’t get me wrong, I think it’s a good thing to have specialties, but I would encourage any tester to frequently try to reach outside of their carved-out niche whenever they can. Sure, it may be uncomfortable, and sure, there may be a chance that those explorations may be frustrating at first, or may not bear much fruit, but keep trying, and keep looking to expand your abilities wherever possible.

uTest: What keeps you busy off the clock?

ML: When I’m not testing, I enjoy spending time with my family (my wife and I are currently navigating the world of three teenagers in three different stages of schooling; we have one child in college, one in high school and one in intermediate school). I also put a considerable amount of time into being a Boy Scout leader, and have done so for the past two decades. Scouting tends to get me outdoors regularly for camping and backpacking events, so I have opportunities to completely unplug and recharge in the wild, so to speak.

During the winter months, I look forward to snowboarding any chance that I get, and I also do my best to stay active with the broader testing community through my work with AST, Weekend Testing, the Miagi-do School of Software Testing, speaking at conferences, recording podcasts and any other endeavors that will let me get creative wherever I can.

Categories: Companies

New Testing Tool Tutorials at uTest University

Fri, 10/24/2014 - 18:01

There are plenty of options when it comes to choosing your suite of testing tools. Some tools may excel at one specific task, while others perform at an average level for more than one testing task.

A few months ago, we launched the Tool Reviews section of our site to let members of the uTest community rate and review the best testing tools. The community has responded by easily singling out the most popular and highest rated testing tools.

Over at uTest University, we’ve recently published new tutorials for some of the most requested tools in order to help testers set up these tools to use for testing. These tutorials are designed to be quick, easy to follow, and to get you up-and-running in no time.

Check My Links is a browser extension developed primarily for web designers, developers and content editors. The extension quickly finds all the links on a web page, and checks each one for you. It highlights which ones are valid and which ones are broken. You can learn how to set up and use Check My Links for testing using this new tutorial.

Firebug is a browser extension that allows you to edit, debug, and monitor CSS, HTML, and JavaScript live in any web page. Firebug is often reviewed as a “must-have” tool for both web development and web testing. Learn how to set up and use Firebug for testing using this new tutorial.

Mobizen is a tool that allows the mirroring and control of an Android device using your computer. This free tool features the ability to connect to the device using USB/Wifi/mobile network, screen mirroring with a high frame rate, and movie recording and capturing screenshots. Learn how to set up and use Mobizen for testing using this new tutorial.

liteCam HD is a computer screen recorder for Windows users that helps create professional-looking HD videos. This tool’s easy interface makes quick recordings and reduces complex settings. Learn how to set up and use liteCam HD for testing using this new tutorial.

uTest University is free for all members of the uTest Community. We are constantly adding to our course catalog to keep you educated on the latest topics and trends. Have an idea for a new course or how-to tutorial? Submit your course idea today.

Categories: Companies

Four Reasons Software Testing Will Move Even Further Into the Wild by 2017

Thu, 10/23/2014 - 21:12

Ever since our inception, uTest and our colleagues within Applause have always been huge proponents of what we like to call ‘In-the-Wild’ Testing.

Our community is made up of 150,000+ testers in 200 countries around the world, the largest of its kind, and our testers have already stretched the definition of what testing ‘in the wild’ can be, by testing countless customers’ apps on their own devices where they live, work and play.

That ‘play’ part of In-the-Wild testing is primed to take up a much larger slice of testers’ time. While we have already seen a taste of it with emerging technologies gradually being introduced into the mobile app mix, there are four major players primed to go mainstream in just a couple of short years. That means you can expect testers to be spending less time pushing buttons on mobile apps in their homes and offices…and more time ‘testing’ by jogging and buying socks. Here’s why.

Apple Pay

Google Wallet has been out for several years now, but it is widely expected by many (including this guy) that Apple Pay will be the technology that takes mobile payments to the mainstream with its ease-of-use and multiple layers of security.

Of course, it will take more of the little banks and retailers to be on-board for Apple Pay to spread like wildfire, but Apple is banking on an ‘if you build it, they will come’ strategy, and it already seems to be working. Case in point: My little, local credit union in Massachusetts — probably 1/25th the size of a Chase or Citibank — has already previewed that it’s working with Apple to bring Apple Pay to all of its members.

This is all well and good for consumers, but it provides even more of an opportunity for testers — there will be plenty of retailers lined up to make sure the functionality works with their environments, along with retailers needing testers to verify that any in-app functionality is sound when consumers use Apple Pay from the comfort of their own homes. Expect a lot of testers buying socks and sandwiches (not together in the same transaction) as part of their new “testing” routine in the coming months and years.

Smartwatches

While I have been in the camp of only wanting a smartwatch if it promises to produce lasers, I know that there are many out there that will be early adopters. And who can resist their stylin’ nature?

Once again, Apple has made smartwatches in this technology category sleek and sexy with a variety of styles and accompanying straps on its soon-to-be-released Apple Watch. While the $349 price tag may be sticker shock to many, one space where it is expected to take off is the enterprise, amongst executives and employees on the go.

And for testers, smartwatches will open up a whole new era and class of apps more pint-sized than ever…that you can bet will need lots of testing on proper screen rendering and functionality in those board meetings filled with execs.

Health & Fitness Wearables

With Google and Apple taking on this realm in their smartphones, and fitness-centric trackers from Nike, Fitbit and Jawbone in the form of armbands, the health and fitness wearable market has already seen considerable adoption.

From a tester standpoint, testing fitness devices may be the most ‘out there’ definition of in-the-wild testing. As health and fitness apps and armbands track fitness- and health-specific information such as number of steps taken, heart rate and calories burned, expect a lot more of testers’ routines including a 2-mile jog lumped in with their mobile testing.

Automobile In-dash Entertainment

From popular car manufacturers such as Ford, Toyota, BMW and Audi, to navigation services like TomTom and Garmin, in-dash entertainment and navigation systems have already taken off in the past year, and that trend is only expected to continue as these packages eventually become standard in automobiles.

And this only opens up more doors for testers. We’ve all heard of texting while driving, but did law enforcement consider ‘testing’ while driving? Testing teams should consider safety first and buddy-up their testers when sending them out to drive for a “testing” assignment.

What do you think? Is the tester’s work environment going to be stretched even more into the wild in the next few years because of these emerging technologies? Are there others you would add to the list such as Google Glass? Will these technologies still just be a shadow in a tester’s daily testing routine? Let us know in the Comments now.

Categories: Companies

Authors in Testing Q&A: Dorothy Graham Talks ‘Experiences of Test Automation’

Wed, 10/22/2014 - 15:00

Dorothy (Dot) Graham has been in software testing for 40 years, and is co-author of four books, including two on test automation (with Mark Fewster).

She was programme chair for EuroSTAR twice and is a popular speaker at international conferences. Dot has been on the boards of publications, conferences and qualifications in software testing. She was awarded the European Excellence Award in Software Testing in 1999 and the first ISTQB Excellence Award in 2012. You can visit her at her website.

In this Q&A, uTest spoke with Dot about her experiences in automation, its misconceptions, and some of her favorite stories from her most recent book which she co-authored, ‘Experiences of Test Automation: Case Studies of Software Test Automation.’ Stay tuned at the end of the interview for chapter excerpt previews of the book, along with an exclusive discount code to purchase.

uTest: Could you tell us a little more about the path that brought you to automation?

Dorothy Graham: That’s easy – by accident! My first job was at Bell Labs and I was hired as a programmer (my degrees were in Maths, there weren’t many computer courses back in the 1970s). I was put into a testing team for a system that processed signals from hydrophones, and my job was to write test execution and comparison utilities (as they were called then, not tools).

My programs were written on punched cards in Fortran, and if we were lucky, we got more than one “turn-around” a day on the Univac 1108 mainframe (when the program was run and we got the results – sometimes “didn’t compile”). Things have certainly moved on a lot since then! However, I think I may have written one of the first “shelfware” tools, as I don’t think it was used again after I left (that taught me something about usability)!

uTest: There’s a lot of misconceptions out there amongst management that automation will be a cure-all to many things, including cost-cutting within testing teams. What is the biggest myth you’d want to dispel about test automation?

DG: The biggest misconception is that automated tests are the same as manual tests – they are not! Automated tests are programs that check something – the tool only runs what it has been programmed to run, and doesn’t do any thinking. This misconception leads to many mistakes in automation — for example, trying to automate all — and only — manual tests. Not all manual tests should be automated. See Mike Baxter et al’s chapter (25) in my Experiences book for a good checklist of what to automate.

This misconception also leads to the mistaken idea that tools replace testers (they don’t, they support testers!), not realizing that testing and automating require different skillsets, and not distinguishing good objectives for automation from objectives for testing (e.g. expecting automated regression tests to find lots of bugs). I could go on…

uTest: What are you looking for in an automation candidate that you wouldn’t be looking for in a QA or software tester?

DG: If you are looking for someone to design and construct the automation framework, then software design skills are a must, since the test execution tools are software programs. However, not everyone needs to have programming skills to use automation – every tester should be able to write and run automated tests, but they may need support from someone with those technical skills. But don’t expect a developer to necessarily be good at testing – testing skills are different than development skills.

uTest: You were the first Programme Chair for EuroSTAR, one of the biggest testing events in Europe, back in 1993, and repeated this in 2009. Could you talk about what that entailed and one of the most valuable things you gained out of EuroSTAR’s testing sessions or keynotes?

DG: My two experiences of being Programme Chair for EuroSTAR were very different! SQE in the US made it possible to take the major risk of putting on the very first testing conference in Europe, by financially underwriting the BCS SIGIST (Specialist Group In Software Testing). Organizing this in the days before email and the web was definitely a challenge!

In 2009, the EuroSTAR team, based in Galway, gave tremendous support; everything was organized so well. They were great in the major planning meeting with the Programme Committee, so we could concentrate on content, and they handled everything else. The worst part was having to say no to people who had submitted good abstracts!

I have heard many excellent keynotes and sessions over the years – it’s hard to choose. There are a couple that I found very valuable though: Lee Copeland’s talk on co-dependent behavior, and Isabel Evans’ talk about the parallels with horticulture. Interesting that they were both bringing insights into testing from outside of IT.

uTest: Your recent book deals with test automation actually at work in a wide variety of organizations and projects. Could you describe one of your favorite case studies of automation gone right (or wrong) from the book, and what you learned from the experience?

DG: Ah, that’s difficult – I have many favorites! Every case study in the book is a favorite in some way, and it was great to collect and critique the stories. The “Anecdotes” chapter contains lots of smaller stories, with many interesting and diverse lessons.

The most influential case study for me, which I didn’t realize at the time, was Seretta Gamba’s story of automating “through the back door.” When she read the rest of the book, she was inspired to put together the Test Automation Patterns, which we have now developed into a wiki. We hope this will continue to disseminate good advice about automation, and we are looking for more people to contribute their experiences of automation issues or using some of the patterns.

uTest has arranged for a special discount of 35% off the purchase of ‘Experiences of Test Automation: Case Studies of Software Test Automation’ here by entering the code SWTESTING at checkout (offer expires Dec. 31, 2014). 

Additionally, Dot has graciously provided the following exclusive chapter excerpts to preview: 

Categories: Companies

Latest Testing in the Pub Podcast: Part II of Software Testing Hiring and Careers

Tue, 10/21/2014 - 21:02

The latest Testing in the Pub podcast continues the discussion on what test managers need to look out for when recruiting testers, and what testers need to do when seeking out a new role in the testing industry.

There’s a lot of practical advice in this edition served over pints at the pub — from the perfect resume/CV length (one page is too short!) to a very candid discussion on questions that are pointless when gauging whether someone is the right fit for your testing team.

Part II of the two-part podcast is available right here for download and streaming, and is also available on YouTube and iTunes. Be sure to check out the entire back catalog of the series as well, and Stephen’s recent interview with uTest.

Categories: Companies

Open Source Load Testing Tools Comparison: Which One Should You Use?

Tue, 10/21/2014 - 18:04

This piece was originally published by our good friends at BlazeMeter – the Load Testing Cloud. Don’t forget to also check out all of the load testing tool options out there — and other testing tools — along with user-submitted reviews at our Tool Reviews section of the site.

Is your application, server or service fast enough? How do you know? Can you be 100% sure that your latest feature hasn’t triggered a performance degradation or memory leak?

The only way to be sure is by regularly checking the performance of your web or app. But which tool should you use for this?

In this article, I’m going to review the pros and cons of the most popular open source solutions for load and performance testing.

Chances are that most of you have already seen this page. It’s a great list of 53 of the most commonly used open source performance testing tools.  However, some of these tools are limited to only HTTP protocol, some haven’t been updated for years and most aren’t flexible enough to provide parametrization, correlation, assertions and distributed testing capabilities.

Given the challenges that most of us are facing today, out of this list I would only consider using the following four:

  1. Grinder
  2. Gatling
  3. Tsung
  4. JMeter

So these are the four that I’m going to review here. In this article, I’ll cover the main features of each tool, show a simple load test scenario and an example of the reports. I’ve also put together a comparison matrix at the end of this report – to help you decide which tool is best for your project ‘at a glance’.

The Test Scenario and Infrastructure

For the comparison demo, I’ll be using a simple HTTP GET request with 20 threads and 100,000 iterations. Each tool will be sending requests as fast as it can.
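To make the scenario easier to picture, here is a rough, tool-agnostic Python sketch of what each tool is being asked to do: 20 worker threads issuing GET requests in a loop and recording response times. This is only an illustration (not one of the tools under review), and the target URL and iteration count are placeholders:

  # Sketch of the benchmark scenario: 20 threads, each sending HTTP GETs in a loop.
  import threading
  import time
  import urllib.request

  TARGET_URL = "http://localhost/"   # placeholder for the IIS server under test
  THREADS = 20
  ITERATIONS = 1000                  # reduced from the article's 100,000 for illustration

  def worker(timings):
      for _ in range(ITERATIONS):
          start = time.time()
          with urllib.request.urlopen(TARGET_URL) as response:
              response.read()
          timings.append(time.time() - start)   # list.append is thread-safe in CPython

  timings = []
  threads = [threading.Thread(target=worker, args=(timings,)) for _ in range(THREADS)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()
  print("requests: %d, average response: %.1f ms"
        % (len(timings), 1000 * sum(timings) / len(timings)))

The real tools add the pieces this sketch leaves out: distribution across load engines, ramp-up control, assertions and reporting.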

The server (application under test) side:

CPU: 4x Xeon L5520 @ 2.27 GHz
RAM: 8 GB
OS: Windows Server 2008 R2 x64
Application Server: IIS 7.5.7600.16385

The client (load generator) side:

CPU: 4x Xeon L5520 @ 2.27 GHz
RAM: 4 GB
OS: Ubuntu Server 12.04 64-bit
Load Test Tools:
Grinder 3.11
Gatling 2.0.0.M3a
Tsung 1.51
JMeter 2.11

The Grinder

The Grinder is a free Java-based load testing framework available under a BSD-style open source license. It was developed by Paco Gomez and is maintained by Philip Aston. Over the years, the community has also contributed many improvements, fixes and translations.

The Grinder consists of two main parts:

  1. The Grinder Console – This is a GUI application that controls the various Grinder agents and monitors results in real time. The console can be used as a basic IDE for editing or developing test suites.
  2. Grinder Agents – These are headless load generators; each can have a number of workers to create the load

Key Features of the Grinder:

  1. TCP proxy – records network activity into the Grinder test script
  2. Distributed testing – can scale with the increasing number of agent instances
  3. Power of Python or Clojure combined with any Java API for test script creation or modification (a minimal script sketch follows this list)
  4. Flexible parameterization which includes creating test data on-the-fly and the capability to use external data sources like files, databases, etc.
  5. Post processing and assertion – full access to test results for correlation and content verification
  6. Support of multiple protocols
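To give a feel for the scripting model, here is a minimal Grinder test script sketch written in Jython (Python), along the lines of the HTTP examples that ship with The Grinder; the URL is a placeholder for the system under test:

  # Minimal Grinder 3 test script (Jython). Each worker thread runs TestRunner.__call__
  # on every iteration; Test(1, ...) tags the request so its timings appear in the console.
  from net.grinder.script import Test
  from net.grinder.script.Grinder import grinder
  from net.grinder.plugin.http import HTTPRequest

  test1 = Test(1, "Home page request")
  request1 = HTTPRequest()
  test1.record(request1)   # instrument the request so The Grinder collects statistics for it

  class TestRunner:
      def __call__(self):
          result = request1.GET("http://localhost/")   # placeholder URL
          grinder.logger.info("Status: %s" % result.getStatusCode())

The same script is then distributed by the console to however many agents and worker processes you configure in grinder.properties.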

The Grinder Console Running a Sample Test


Grinder Test Results:


Gatling

The Gatling Project is another free and open source performance testing tool, primarily developed and maintained by Stephane Landelle. Like The Grinder, Gatling also has a basic GUI – limited to a test recorder only. However, the tests can be developed in an easily readable/writable domain-specific language (DSL).

Key Features of Gatling:

  1. HTTP Recorder
  2. An expressive self-explanatory DSL for test development
  3. Scala-based
  4. Produces higher load by using an asynchronous non-blocking approach
  5. Full support of HTTP(S) protocols & can also be used for JDBC and JMS load testing
  6. Multiple input sources for data-driven tests
  7. Powerful and flexible validation and assertions system
  8. Comprehensive informative load reports

The Gatling Recorder Window:


An Example of a Gatling Report for a Load Scenario


Tsung

Tsung (previously known as IDX-Tsunami) is the only non-Java based open source performance testing tool in today’s review. Tsung relies on Erlang so you’ll need to have it installed (for Debian/Ubuntu, it’s as simple as “apt-get install erlang”). The development of Tsung was started in 2001 by Nicolas Niclausse – who originally implemented a distributed load testing solution for Jabber (XMPP). Several months later, support for more protocols was added and in 2003 Tsung was able to perform HTTP Protocol load testing.

It is currently a fully functional performance testing solution with the support of modern protocols like websocket, authentication systems, databases, etc.

Key Features of Tsung:

  1. Distributed by design
  2. High performance. The underlying multithreaded Erlang architecture enables the simulation of thousands of virtual users on mid-range developer machines
  3. Support of multiple protocols
  4. A test recorder which supports HTTP and Postgres
  5. OS monitoring. Operating system metrics for both the load generator and the application under test can be collected via several protocols
  6. Dynamic scenarios and mixed behaviours. The flexible load scenarios definition mechanism allows for any number of load patterns to be combined in a single test
  7. Post processing and correlation
  8. External data sources for data driven testing
  9. Embedded easy-readable load reports which can be collected and visualized during load

Tsung doesn’t provide a GUI for test development or execution, so you’ll have to live with the shell scripts, which are:

  1. tsung-recorder – a bash script wrapping a recording utility capable of capturing HTTP and Postgres requests and creating a Tsung config file from them
  2. tsung – the main bash control script to start/stop/debug and view the status of your test
  3. tsung_stats.pl – a Perl script to generate HTML statistical and graphical reports. It requires gnuplot and the Perl Template library to work. For Debian/Ubuntu, the commands are
    –   apt-get install gnuplot
    –   apt-get install libtemplate-perl

The main tsung script invocation produces the following output:


Running the test:


Querying the current test status:


Generating the statistics report with graphs can be done via the tsung_stats.pl script:


Open report.html with your favorite browser to get the load report. A sample report for a demo scenario is provided below:

A Tsung Statistical Report


A Tsung Graphical Report


Apache JMeter

Apache JMeter is the only desktop application from today’s list. It has a user-friendly GUI, making test development and debugging processes much easier.

The earliest version of JMeter available for download is dated the 9th of March, 2001. Since that date, JMeter has been widely adopted and is now a popular open-source alternative to proprietary solutions like Silk Performer and LoadRunner. JMeter has a modular structure, in which the core is extended by plugins. This basically means that all the implemented protocols and features are plugins that have been developed by the Apache Software Foundation or online contributors.

Key Features of JMeter:

  1. Cross-platform. JMeter can be run on any operating system with Java
  2. Scalable. When you need to create a higher load than a single machine can produce, JMeter can be executed in a distributed mode – meaning one master JMeter machine will control a number of remote hosts (see the sample invocations after this list).
  3. Multi-protocol support. The following protocols are all supported ‘out-of-the-box’: HTTP, SMTP, POP3, LDAP, JDBC, FTP, JMS, SOAP, TCP
  4. Multiple implementations of pre- and post-processors around samplers. This provides advanced setup, teardown, parametrization and correlation capabilities
  5. Various assertions to define criteria
  6. Multiple built-in and external listeners to visualize and analyze performance test results
  7. Integration with major build and continuous integration systems – making JMeter performance tests part of the full software development life cycle
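As a rough illustration of the distributed mode mentioned in point 2, a test plan saved to a file (the name plan.jmx below is just an example) can be run headless on a single machine, or farmed out to remote JMeter servers, with invocations along these lines:

  jmeter -n -t plan.jmx -l results.jtl                  (non-GUI run on the local machine)
  jmeter -n -t plan.jmx -R host1,host2 -l results.jtl   (non-GUI run driving the remote hosts host1 and host2)

Here -n selects non-GUI mode, -t points at the test plan, -l writes the results log, and -R lists the remote hosts the master should control.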

The JMeter Application With an Aggregated Report on the Load Scenario:


The Grinder, Gatling, Tsung & JMeter Put to the Test

Let’s compare the load test results of these tools with the following metrics:

  1. Average Response Time (ms)
  2. Average Throughput (requests/second)
  3. Total Test Execution Time (minutes)

First, let’s look at the average response and total test execution times:


Now, let’s see the average throughput:


As you can see, JMeter has the fastest response times with the highest average throughput, followed by Tsung and Gatling. The Grinder has the slowest times with the lowest average throughput.

Features Comparison Table

And finally, here’s a comparison table of the key features offered to you by each testing tool:

Feature (The Grinder / Gatling / Tsung / JMeter):

  • OS: Any / Any / Linux/Unix / Any
  • GUI: Console only / Recorder only / No / Full
  • Test Recorder: TCP (including HTTP) / HTTP / HTTP, Postgres / HTTP
  • Test Language: Python, Clojure / Scala / XML / XML
  • Extension Language: Python, Clojure / Scala / Erlang / Java, Beanshell, Javascript, Jexl
  • Load Reports: Console / HTML / HTML / CSV, XML, embedded tables, graphs, plugins
  • Host Monitoring: No / No / Yes / Yes, with the PerfMon plugin

Protocols:

  • The Grinder: HTTP, SOAP, JDBC, POP3, SMTP, LDAP, JMS
  • Gatling: HTTP, JDBC, JMS
  • Tsung: HTTP, WebDAV, Postgres, MySQL, XMPP, WebSocket, AMQP, MQTT, LDAP
  • JMeter: HTTP, FTP, JDBC, SOAP, LDAP, TCP, JMS, SMTP, POP3, IMAP

Limitations:

  • The Grinder: Python knowledge is required for test development and editing; reports are very plain and brief
  • Gatling: limited support of protocols; knowledge of the Scala-based DSL is required; does not scale
  • Tsung: tested and supported only on Linux systems; bundled reporting isn’t easy to interpret

More About Each Testing Tool

Want to find out more about these tools? Log on to the websites below – or post a comment here and I’ll do my best to answer!

The Grinder – http://grinder.sourceforge.net/
Gatling – http://gatling.io/
Tsung – http://tsung.erlang-projects.org/
JMeter
  –  Home Page: http://jmeter.apache.org/
  –  JMeter Plugins: http://jmeter-plugins.org/
  –  BlazeMeter’s Plugin for JMeter: http://blazemeter.com/blazemeters-plug-jmeter

On a Final Note…

I truly hope that you’ve found this comparison review useful and that it’s helped you decide which open source performance testing tool to opt for. Out of all these tools, my personal recommendation has to be JMeter. This is what I use myself, along with BlazeMeter’s Load Testing Cloud, because of its support for different JMeter versions, plugins and extensions.

Categories: Companies

Testing the Limits With Testing ‘Rock Star’ Michael Larsen — Part I

Mon, 10/20/2014 - 15:00

Michael Larsen is a software tester based out of San Francisco. Including a decade in testing at Cisco, he has had an extremely varied rock star career (quite literally…more on that later) touching upon several industries and technologies, including virtual machine software and video game development.

Michael is a member of the Board of Directors for the Association for Software Testing and a founding member of the “Americas” Chapter of “Weekend Testing.” He also blogs at TESTHEAD and can be reached on Twitter at @mkltesthead.

In Part I of our two-part Testing the Limits interview, we talk with Michael about the most rewarding parts of his career, and how most testers are unaware of a major “movement” around them.

uTest: This is your first time on Testing the Limits. Could you tell our testers a little bit about your path into testing?

Michael Larsen: My path to testing was pure serendipity. I initially had plans to become a rock star in my younger years. I sang with several San Francisco Bay Area bands during the mid-to-late 80s and early 90s. Not the most financially stable life, to say the least. While I was trying to keep my head above water, I went to a temp agency and asked if they could help me get a more stable “day job.” They sent me to Cisco Systems in 1991, right at the time that they were gearing up to launch for the stratosphere.

I was assigned to the Release Engineering group to help them with whatever I could, and in the process, I learned how to burn EEPROMs, run network cables, wire up and configure machines, and I became a lab administrator for the group. Since I had developed a good rapport with the team, I was hired full-time and worked as their lab administrator. I came to realize that Release Engineering was the software test team for Cisco, and over the next couple of years, they encouraged me to join their testing team. The rest, as they say, is history.

uTest: You also come from a varied tech career, working in areas including video game development and virtual machine software. Outside of testing, what has been the most rewarding “other” part of your career?

ML: I think having had the opportunity to work in a variety of industries and work on software teams that were wildly varied. I’ve had both positive and negative experiences that taught me a great deal about how to work with different segments of the software world. I’ve worn several hats over the years, including on-again, off-again stints doing technical support, training, systems and network administration, and even some programming projects I was responsible for delivering.

All of them were memorable, but if I had to pick the one unusual standout that will always bring a smile to my face, it was being asked to record the guide vocal for the Doobie Brothers song “China Grove,” which appeared on Karaoke Revolution, Volume 3 in 2004.

uTest: You are also a prolific blogger and podcast contributor. Why did you get into blogging and why is it an effective medium for getting across to testers?

ML: I started blogging before blogging was really a thing, depending on who you talk to. Back in the late 90s, as part of my personal website, I did a number of snowboard competition chronicles for several years called “The Geezer X Chronicles.” Each entry was a recap of the event, my take on my performance (or lack thereof) and interactions with a variety of the characters from the South Lake Tahoe area. Though I didn’t realize it at the time, I was actively blogging during those years.

In 2010, I decided that I had reached a point where I felt like I was on autopilot. I didn’t feel like I was learning or progressing, and it was having an effect on my day-to-day work. I had many areas of my life that I was passionate about (being a musician, being a competitive snowboarder, being a Boy Scout leader), but being a software tester was just “the day job that I did so I could do all the other things I loved.”

I decided I wanted to have that same sense of passion about my testing career, and I figured if my writing about snowboarding had connected me with such an amazing community, maybe writing about software testing would help me do the same. It has indeed done that — much more than I ever imagined it would. It also rekindled a passion and a joy for software testing that I had not felt in several years.

uTest: And your own blog is called ‘TESTHEAD.’ That sounds like a very scary John Carpenter movie.

ML: I’m happy it’s memorable! The term “test head” was something we used when I was at Cisco. The main hardware device in the middle that we’d do all the damage to was called the test head. I’ve always liked the idea of trying to be adaptable and letting myself be open to as many experiences and methods of testing as possible, even if the process wasn’t always comfortable. Because of that, I decided that TESTHEAD would be the best name for the blog.

uTest: As you know, James Bach offers free “coaching” to testers over Skype. You’re a founding member of the Americas chapter of “Weekend Testing,” learning sessions for testers in the Western Hemisphere. Does Weekend Testing run off of a similar concept?

ML: Weekend Testing is a real-time chat session with a number of software testers, so it’s more of a group interaction. James’ Skype coaching is one-on-one. It has some similarities. We approach a testing challenge, set up a mission and charters, and then we review our testing efforts and things we learn along the way, but we emphasize a learning objective up front so that multiple people can participate. We also time-box the sessions to two hours, whereas James will go as long as he and the person he is working with have energy to continue.

uTest: In the video interview you gave with us, you mentioned a key problem in testing is the de-emphasis of critical thinking as a whole in the industry. Are endeavors such as Weekend Testing more of a hard sell than they should be because of testers’ unwillingness to “grow?”

ML: I think we have been fortunate in that those that want to find us (Weekend Testing) do find us and enjoy the interactions they have. Having said that, I do think that there are a lot of software testers currently working in the industry that don’t even realize that there is a movement that is looking to develop and encourage practitioners to become “sapient testers” (to borrow a phrase from James Bach).

When I talk with testers that do understand the value of critical thinking, and that are actively engaged in trying to become better at their respective craft, I reluctantly realize that the community that actively strives to learn and improve is a very small percentage of the total number of software testing practitioners. I would love to see those numbers increase, of course.

Continue now with Part II of Michael Larsen’s Testing the Limits interview. Amongst other discussion topics, Michael shares why he believes “silence” is powerful on testing teams.

Categories: Companies

Mad Scientists Welcome at the STARWEST 2014 Test Lab

Fri, 10/17/2014 - 19:30

Testing is dull, boring, and repetitive.

Ever heard anyone say that? Well at STARWEST 2014, the theme is Breaking Software (in the spirit of Breaking Bad), and this crowd is anything but dull! Creativity abounds at this conference, from the whimsical (yet impactful) session topics to the geek-chic booth themes (I do so love a good Star Wars parody!) to the on-site Test Lab run by what at first glance appears to be a crew of mad scientists. Boring or repetitive? I don’t think so!

Because the Test Lab was such a fun space, I interviewed one of the mad scientist/test lab rats, Paul Carvalho, to get the lowdown on what STARWEST 2014 attendees have been up to. Check out the video below for a tour of the STARWEST Test Lab, complete with singing computers, noisy chickens, talking clocks, and more!

You can learn more about Paul Carvalho – an IT industry veteran of more than 25 years – at STAQS.com (Software Testing and Quality) where he is the principal consultant. You can also find him on LinkedIn here.

So what do you think about the STARWEST Test Lab? What would you try to break first? Let us know in the Comments below, and check out all of our coverage from STARWEST 2014.

Categories: Companies

STARWEST 2014 Interview: Mind Over Pixels — Quality Starts With the Right Attitude

Fri, 10/17/2014 - 17:10

How important is a tester’s mindset and attitude when it comes to testing?

I sat down with Stephen Vance, one of the STARWEST 2014 speakers, to chat about just that. As an Agile/Lean coach, Stephen is passionate about helping testers understand how to communicate with developers to better integrate into the software development process, and it all starts with the attitude you bring to the table.

Stephen teaches that investing in a “distinctly investigative, exploratory, hypothesis-driven mindset” is key to achieving process improvement at all levels of the software organization. He sees the value in the iterative approach that so well suits the skills testers bring to a collaboration, and encourages testers to be integral in more aspects of a project than just the black-and-white testing phases.

Stephen’s STARWEST 2014 session was called “Building Quality In: Adopting the Tester’s Mindset.” If you weren’t able to attend, check out my interview with him below to hear what else he had to say!

You can also read more about Stephen Vance on his website and connect with him on LinkedIn here.

What are some ways you think testers can use a hypothesis-driven, investigative approach to inject greater value into the software development life cycle? Feel free to sound off in the Comments below.

Categories: Companies

Top Tweets from STARWEST 2014

Thu, 10/16/2014 - 23:34

If you haven’t stopped by and seen us at the ol’ uTest booth, now’s the time! CM’s own Sue Brown is at the show along with the Applause crew.

But if you’re not there, have no fear, as Sue will be reporting back with some video interviews with testers and her own thoughts on the show here on the uTest Blog. In the meantime, we have selected some of our favorite tweets from STARWEST as the tail-end of the show is in full swing:

OH at #StarWest: "How do we do automation?" People: automation—a tool—isn't something you DO; it's something you USE. #testing #agile

— Michael Bolton (@michaelbolton) October 16, 2014

If you're not free to think, or learn, and adapt while you test, it's not exploration. - @jbtestpilot #starwest

— Ben Simo (@QualityFrog) October 14, 2014

The chickens are getting restless in @TheTestLab as #starwest winds down in the final hours.. pic.twitter.com/8N37iQMzsk

— Paul Carvalho (@can_test) October 16, 2014

If your security testing is focused on the things that you secured, you’re going to miss all the things you didn’t think about. #starwest

— Kwality Rules (@KwalityRules) October 16, 2014

#STARWEST keynote @cheekytester asked the audience "who struggles with Test environments?" Almost every hand up. We need to fix this.

— Paul Carvalho (@can_test) October 15, 2014

Room set for #starwest Going to be huge audience. 3 screens needed ! pic.twitter.com/8HM0rAvWGY

— Alison Wade (@awadesqe) October 15, 2014

"Test the software with the minimum number of tests"… hmmm let's ignore that request and start testing #starwest

— Andy Glover (@cartoontester) October 14, 2014

Fedoras, beers, #APIs, #SoftwareTesting, you name it. #STARWEST always impresses, and this year is no different. pic.twitter.com/t5Y5Vuw5OP

— Ready! API (@ready_api) October 16, 2014

Computer System Innovation crew likes glasses and mustaches #STARWEST #stachecam pic.twitter.com/fGIGy1eLbT

— Yvonne Johns (@yvjohns) October 16, 2014

#starwest Ben Simo's presentation on http://t.co/6APgwb9wvC: part tester, part hacker, all awesome

— Daniel Hill (@RenjyaaDan) October 16, 2014

If your ex can answer the security question- it's a bad question @QualityFrog #healthcare #userexperience #starwest

— StickyMinds (@StickyMinds) October 16, 2014

Agile is about the commitment of the full team, not individual teams by @bobgalen #StarWest #HPsoftwarealm

— silvia siqueira (@silvia_ITM) October 16, 2014

#agile «@jefferyepayne Attack of the killer flip charts. Help! I only asked for 1 but they are swarming #starwest pic.twitter.com/aTLGVBFfd7»

— erik petersen (@erik_petersen) October 14, 2014

http://t.co/a6leVIE4U4 empower All developers to Test for success, test early test continuously #StarWest #HPsoftwarealm #blizzard

— silvia siqueira (@silvia_ITM) October 16, 2014

Never substitute tools for communication. @Jeanne_Schmidt #starwest

— Kwality Rules (@KwalityRules) October 15, 2014

If you are not having fun testing something is wrong @cheekytester #starwest

— Gitte Ottosen (@Godtesen) October 15, 2014

 

To see what other events are upcoming in the software testing world, make sure to check out our brand-spankin’ new Events Calendar.

Categories: Companies

Dynamic Testing According to ISO 29119 Is the Subject of Software Testing Book Excerpt

Wed, 10/15/2014 - 19:00

As testers, you know that software testing is a critical aspect of the software development process. A new book aims to offer a practical understanding of all the most critical software testing topics and their relationships and interdependencies.

The Guide to Advanced Software Testing (second edition) by Anne Mette Hass, published by Artech House, offers a clear overview of software testing, from the definition of testing and the value and purpose of testing, through the complete testing process with all its activities, techniques and documentation, to the softer aspects of people and teams working with testing.

Practitioners will find numerous examples and exercises presented in each chapter to help ensure a complete understanding of the material. The book supports the ISTQB certification and provides a bridge from this to the ISO 29119 software testing standard in terms of extensive mappings between the two.

The full version of the book is available for £75 (USD $119) from Artech House, but testers will be able to receive an exclusive 20% discount off that list price, plus free shipping, by using promo code EUROSTAR14 at checkout, valid through December 31, 2014.

In the meantime, you can check out our exclusive chapter excerpt right here. This specific sample provided by Artech House clocks in at a generous 30 pages, and its subject matter should be quite familiar to many testers, covering the recent, controversial ISO 29119 testing standard and its associated dynamic testing process.

Categories: Companies

The Ins and Outs of Writing an Effective Mobile Bug Report (Part II)

Wed, 10/15/2014 - 15:30

Be sure to check out Part I of Daniel Knott’s article on effective mobile bug reports for further context before continuing on.

Here’s the rest of the information you should plan on including in every bug report.

Network Condition and Environment

When filing a mobile bug, it’s important to provide some information about the network condition and the environment in which the bug occurred. This will help to identify the problem more easily and will possibly show some side effects no one has thought of.

  • Bad: “No information” or “Happened on my way to work”
  • Good: “I was connected to a 3G network while I was walking through the city center.”

Language

If your app supports several languages, provide this information in your bug report.

  • Bad: “No information”
  • Good: “I was using the German language version of the app.”

Test Data

This information can already be provided in the steps taken to reproduce, but test data you need to reproduce the bug may be more complex, so it makes sense to provide this information in a separate section. Provide SQL dumps, scripts or the exact data you entered in the input fields.

  • Bad: “No information”
  • Good: “Find the attached SQL script to put the database in the defined state” or “Enter ‘Mobile Testing’ into the search input field.”

Severity

Every bug you find needs a severity level. Either your defect management tool will offer you some categories or you have to define them with your team. It is important to give a bug a severity level as it will allow the team to prioritize their bug fixing time so that critical and high priority bugs will be fixed first. If this information is not provided, it takes much more time to find the right bugs that need to be fixed before the release. The default severities are: Critical, High, Medium and Low.

  • Bad: “No information”
  • Good: “Critical” or “Medium”

Bug Category

Besides the severity level, the bug category is also a very useful piece of information. The product owner or the developer can filter by category to get an overview of the current status of bugs per category. For example, if there are lots of UX bugs, this may be an indicator of poor UI and UX or a missing design expert in the team, meaning that the app needs design improvements.

  • Bad: “No information”
  • Good: “Functionality” or “UX” or “Performance”

Screenshot or Video

Whenever you find a bug, try to create screenshots or a video to provide the developer with more information. When providing a screenshot, use an image editing tool to mark the bug in the screenshot (Jing, for instance). A video is also a great way to describe a bug you’ve come across. It is also very useful to give the screenshot or the video a good name or description.

  • Bad: “No screenshots or videos attached” or “Screenshot1.png”
  • Good: “01_InsertSearchTerm.png, 02_SearchResultPageWithError.png”

Log Files

If your app crashes or freezes, connect the device to your computer and read out the log files. In most cases, a stack trace will be shown with a description of the error. This kind of information is extremely useful for developers as they know right away in which class the bug or the error occurred (one way to capture such a log is sketched after this list).

  • Bad: “No information provided when the app crashed.”
  • Good: “Provide the full stack trace in the bug report” or “Attached the log file to the report.”
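As one illustration for Android, the sketch below shells out to adb to dump the current device log into a file you can attach to the report. It assumes adb is installed and on the PATH, and that a single device is connected with USB debugging enabled; the output file name is just an example.

import subprocess
from datetime import datetime

def dump_android_log(output_file=None):
    """Save the current device log (via `adb logcat -d`) for attachment to a bug report."""
    output_file = output_file or f"logcat_{datetime.now():%Y%m%d_%H%M%S}.txt"
    # -d dumps the existing log buffer and exits instead of streaming forever.
    result = subprocess.run(["adb", "logcat", "-d"],
                            capture_output=True, text=True, check=True)
    with open(output_file, "w") as f:
        f.write(result.stdout)
    return output_file

if __name__ == "__main__":
    print("Device log saved to", dump_android_log())

On iOS you would instead pull the equivalent information from Xcode’s device console or a crash report and attach that.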

Tester Who Found the Bug

Write down your name or the name of the tester who found the bug. Developers or product owners may have some questions about the reported bug and they would of course like to directly get in touch with the tester who found the issue. In most cases, this is automatically done by the defect management system where each user has his or her own account. If not, make sure you add your e-mail address and/or phone number.

  • Bad: “No information”
  • Good: “Daniel Knott, daniel@adventuresinqa.com”

Other Things to Remember When Writing Bug Reports

As you have seen, there is a lot of information that should be included in a bug report. There are three other points you should keep in mind when writing them.

Don’t get personal. When filing a bug report, describe the software misbehavior rather than the developer’s mindset or the quality of his or her work. Don’t use offensive or emotionally charged words as those kinds of bugs will be ignored by the developer…and you’ll end up with bad blood within the team.

It’s not you. It’s not your fault that the bug occurred. It is the software that’s broken and you and your colleagues need to fix it.

Keep it simple. Try to write your bug report in such a way that someone with no idea about the project or the app is able to understand the problem. If the bug report is that clear, every developer within the team will be able to fix it, and non-technical colleagues will understand the problem and value your work.

If you want to read more about mobile testing, my book Hands-On Mobile App Testing covers this in depth.

Daniel Knott has been in software development and testing since 2008, working for companies including IBM, Accenture, XING and AOE. He is currently a Software Test Manager at AOE GmbH where he is responsible for test management and automation in mobile and Web projects. He is also a frequent speaker at various Agile conferences, and has just released his book, Hands-On Mobile App Testing. You can find him over at his blog or on Twitter @dnlkntt.

Categories: Companies

The Ins and Outs of Writing an Effective Mobile Bug Report (Part I)

Tue, 10/14/2014 - 19:05

If you find a bug within a mobile app, you need to report it in order to get it fixed. Filing mobile bug reports requires some additional information that the developers need in order to reproduce and fix the bug.

But what is important when filing a mobile bug? What should a bug report look like? Before I answer those two questions, I want to raise another one: “Why even send a bug report?”

Bug reports are very important for the product owner, product manager and the developers. Firstly, a bug report tells the developers and the product owner about issues they were not aware of. Reports also help identify possible new features no one has thought of, and, last but not least, they provide useful information about how a customer may use the software. All of this information can be used to improve the software.

Whenever you find something strange or if something behaves differently or looks weird, don’t hesitate to file a bug report.

Now onto the question of what a bug report should look like and what’s important when filing it. It should contain as much information as possible in order to identify, reproduce and fix the bug. Having said that, your report should only include information that’s relevant to handling the bug, so avoid adding anything useless. Additionally, describe only one error per report: it’s likely that not all of the bugs will be fixed at the same time, so refrain from combining or grouping them.

Here’s the information you should plan on including in every bug report.

Bug ID

A bug must have a unique identifier like a number or a combination of characters and numbers. If you’re using a defect management tool, the tool will handle the bug IDs for you. If not, think about a unique ID system for your project.

  • Bad: 123 is a unique ID, but you might have several projects where the ID is the same.
  • Good: AppXYZ-123 is good because you’re combining an ID with a project abbreviation and a number.

Description

Create a short but meaningful description in order to provide the developer with a quick overview of what went wrong without going into detail. You should, for example, include error codes or the part of the application where the bug occurred.

  • Bad: “The app crashed,” “White page,” “Saw an error,” “Bug”
  • Good: “Error Code 542 on detail message view,” “Timeout, when sending a search request.”

Steps to Reproduce

This is one of the most important points. Provide the exact steps together with the input data on how to reproduce the bug. If you are able to provide this kind of information, the bug will be very easy to fix in most cases.

  • Bad: “I tried to execute a search.”
  • Good: “Start the app and enter ‘Mobile Testing’ into the search input field. Press the search button and you’ll see the error code 783 on the search result page header.”

Expected Result

In this section, you should describe what you expected to happen when the bug occurred.

  • Bad: “It should work,” “I didn’t expect it to crash.”
  • Good: “I expected to see a search results page with a scrollable list of 20 entries.”

Actual Result

What happened when the bug occurred? Write down the actual result — what went wrong or the error that was returned.

  • Bad: “It just won’t work.”
  • Good: “The search results page was empty” or “I got the error code 567 on the search result page.”

Workaround

If you’ve found a way to continue using the app by avoiding the bug, explain your steps. Those steps are important to know since the workaround could cause other problems or indicate a way in which the app should not be used. On the other hand, a workaround can be very useful for the customer support team in order to help customers solve the current problem until the bug gets fixed.

  • Bad: “I found a workaround.”
  • Good: “If you put the device into landscape mode, the search button is enabled and the user can search again.”

Reproducible

If you found a reproducible bug, that’s fine, but does it occur every time? If it happens every time, that’s great, as this should be an easy fix for the developer. But if the bug only occurs 20 percent of the time for instance, it is much harder to find a solution for that. Make sure you provide this information, however, as it is very useful for the developer and will prevent the bug from being closed with the comment “can’t be reproduced.”

  • Bad: “Sometimes”
  • Good: “The bug occurs 2 out of 10 times.”

Operating System, Mobile Platform and Mobile Device

Precision matters here, too: write down the exact operating system, mobile platform and device on which the bug occurred.

  • Bad: “On Android” or “On iOS”
  • Good: “Android, Version 4.1.2, Google Nexus 4” or “iOS, Version 6.1, iPhone 4S”

Mobile Device-Specific Information

Mobile devices have lots of interfaces and sensors that could have an impact on your app. The battery could also affect the app you’re testing. Write down all of this information in your bug report.

  • Bad: “No information”
  • Good: “GPS sensor activated, changed the orientation from landscape to portrait mode” or “Used the device in a sunny place” or “Battery state was 15%” or “Battery state was 100%.”

Browser Version

If your app is a mobile web app and you found an issue, it’s very important to note down the browser version where you found the bug, as it may only occur in certain versions.

  • Bad: “Google Chrome” or “Mozilla Firefox”
  • Good: “Google Chrome Version 45.35626” or “Mozilla Firefox 27.6”

Software Build Version

Another really useful piece of information is the current build version of the app where the bug occurred. Maybe you found the issue in version 1.2, but there is already a newer version available where the bug has been fixed. This will prevent the developer from wasting time by trying to reproduce a bug that’s already been fixed.

  • Bad: “No information”
  • Good: “App build version 1.2.3”
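Pulling the fields from this article together, a report can be thought of as structured data before it goes into whatever defect tracker you use. The sketch below is purely illustrative: the field names, the AppXYZ prefix and every value are examples modeled on the bad/good samples above, not a prescribed format.

import json

# Every value below is illustrative, echoing the examples in the article.
bug_report = {
    "id": "AppXYZ-123",
    "description": "Error code 783 on the search result page header",
    "steps_to_reproduce": [
        "Start the app",
        "Enter 'Mobile Testing' into the search input field",
        "Press the search button",
    ],
    "expected_result": "Search results page with a scrollable list of 20 entries",
    "actual_result": "Error code 783 shown in the search result page header",
    "workaround": "In landscape mode the search button is enabled and the user can search again",
    "reproducible": "2 out of 10 times",
    "platform": {"os": "Android, Version 4.1.2", "device": "Google Nexus 4"},
    "device_specific_info": "GPS sensor activated, battery state was 15%",
    "app_build_version": "1.2.3",
}

print(json.dumps(bug_report, indent=2))

A defect management tool will normally capture these fields through its own form, but keeping the same structure in mind helps make reports consistent across testers.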

Check out Part II of this article right here.

Daniel Knott has been in software development and testing since 2008, working for companies including IBM, Accenture, XING and AOE. He is currently a Software Test Manager at AOE GmbH where he is responsible for test management and automation in mobile and Web projects. He is also a frequent speaker at various Agile conferences, and has just released his book, Hands-On Mobile App Testing. You can find him over at his blog or on Twitter @dnlkntt.

Categories: Companies

My Weekend with the Goat Simulator App

Mon, 10/13/2014 - 21:18

We often talk about the newest and hottest mobile apps at the uTest Community Management desk. Recently, I was curious if I was missing out on any top apps that I didn’t already have on my Samsung Galaxy S4. I am surrounded by a sea of iPhone users so I am used to not getting in on the latest apps until (much, much) later. Of course, I have the requisite social media, weather, and news apps installed but what is really hot for the Android app market these days? I checked out the top paid apps in the Google Play store and, to my surprise, the one odd app that stuck out is the Goat Simulator at #9 on the Top 10 list.

Per the app’s description: “Gameplay-wise, Goat Simulator is all about causing as much destruction as you possibly can as a goat. It has been compared to an old-school skating game, except instead of being a skater, you’re a goat, and instead of doing tricks, you wreck stuff. When it comes to goats, not even the sky is the limit, as you can probably just bug through it and crash the game. Disclaimer: Goat Simulator is a completely stupid game and, to be honest, you should probably spend your money on something else, such as a hula hoop, a pile of bricks, or maybe pool your money together with your friends and buy a real goat.”

I appreciate the developer’s humor, especially since they list the top key feature as “you can be a goat.” I can’t say I’ve always dreamed of being a goat, but here was my shot. Ryan, our beloved blog and forums guru, practically ordered me to buy the app (cost: $4.99), play it over the weekend, and report back on Monday. A check of the app on Applause Analytics showed a satisfaction score of 77 and noted that it is the app’s strongest attribute. Okay, game on.

The Goat Simulator app played as expected. You are a first-person goat whose job is to run people down, kick objects across long distances, and generally be a menace to society. (If we’ve ever met, then you know I am capable of such things – no app needed.) I was waiting to encounter the supposed millions of bugs that the developer mentions but, sadly, I did not. I stopped playing the game on Saturday when I realized I had given one too many hours of my life to being a virtual goat and that it was time to take a shower and rejoin civilization.

However, I was still wondering: What odd, strange, or unique apps do you have installed on your phone? And what’s the oddest app that you’ve paid for? Let’s chat about it in the forums.

Happy goating!

Categories: Companies

Software Testing Budgets on the Rise, Focused on the ‘New IT’

Mon, 10/13/2014 - 15:30

Software testing and QA budgets keep going up, and shiny new toys are the focus of that new spending.

According to a ZDNet report based on a new survey of 1,543 CIOs, conducted and published by Capgemini and HP, “for the first time, most IT testing and QA dollars are now being spent on new stuff, such as social, mobile, analytics, cloud and the Internet of Things, and less of it on simply modernizing and maintaining legacy systems and applications.”

In fact, this “new IT” is making up 52 percent of the testing budgets, up from 41 percent in 2012. And it’s just part of a trend of rising testing budgets in general, hopefully good news for testers — testing now represents 26 percent of total IT budgets on average, up from 18 percent in 2012, and projected to rise to 29 percent by 2017.

What testing teams are doing with this extra budget is a whole other story, however, so it remains to be seen whether more budget for testing teams is a good thing, and will provide a much-needed boost to teams strapped by a lack of time and misdirected efforts.

Do these trends look familiar within your own organization? We’d love to hear from you in the Comments below.

Categories: Companies