
Feed aggregator

SharePoint Lessons Learned from SPTechCon Austin

Yee-haw! Leading up to their first show in Texas, this was the message that adorned SPTechCon’s website in large letters made of rope. At first, this reference to a caricature of Texas culture made me cringe, being a native Texan and all. However, to SPTechCon’s credit, there is some significance to the phrase being used […]

The post SharePoint Lessons Learned from SPTechCon Austin appeared first on Dynatrace APM Blog.

Categories: Companies

Q&A: ‘Let’s Test’ Leader Talks Global Reach of Context-Driven Testing, Previews Conference

uTest - Wed, 02/25/2015 - 15:30

Johan Jonasson is one of the organizers of the Let’s Test conferences, which celebrate the context-driven school of thought. In addition to co-founding testing consulting firm House of Test, Johan is a contributing delegate at the SWET peer conferences and has spoken at several national and international software testing conferences. He is also an active member of the Association for Software Testing (AST). Follow him on Twitter @johanjonasson.

Let’s Test 2015 is slated for May 25-27, 2015, in Stockholm, Sweden, and uTest has secured an exclusive 10% discount off new registrations. Email for this special discount code, available only to registered uTest members.

In this interview, we talk with Johan on the global, inclusive context-driven testing community, and get a sense of what testers can expect at the 2015 edition of Let’s Test.

uTest: You have a lot of crossover with the CAST conference in the US — both are context-driven testing conferences featuring content by testers, for testers. There are also a lot of sessions driven by folks who were at CAST. What does it mean to you to have a fervent following that travels the world for these shows?

Johan Jonasson: The fact that many speakers and attendees are willing to travel lengthy distances to both events is, I think, a great testament to the fact that context-driven testing is something that excites a lot of people, and that there is a truly global and inclusive community eager to meet and share experiences. Last year, we had attendees from literally all parts of the world come to Let’s Test.

At the same time, there is a fairly large number of local testing experts on this year’s program too, which I think shows how much context-driven testing has grown in Europe since the first conference in 2012. The context-driven approach to testing is definitely gaining ground, even in some European markets that are traditionally considered ‘factory testing’ school strongholds, which is great.

uTest: Has there been a common theme in terms of tester pain points coming out of the conferences from the last couple of years?

JJ: There seem to be two main challenges that keep coming up. One is how to package and communicate the qualitative information that our testing reveals in a way that can be readily understood by our stakeholders, who might be used to making release decisions based on traditional and flawed quantitative metrics like bug counts and pass/fail ratios. In other words, how can you perform good test reporting that stands up to scrutiny?

The other one is how to convince managers and companies buying testing services to move away from wasteful and dehumanizing testing practices sold by the lowest bidder, and adopt approaches that focus on the value of information, and the demonstrable skill of the professionals delivering that value.

uTest: Speaking of pain points, ISO 29119, and its attempt to standardize testing, has been a pain for many in the context-driven community. It’s also the subject of one of the sessions this year at Let’s Test. What are your own views on 29119?

JJ: I actively oppose the work being done on ISO 29119. I think it is flawed thinking in the first place to even try to standardize adaptive, intellectual and creative work like testing.

But assume, for the sake of argument, that it were a good idea. In order to standardize testing, we would have to agree on at least the fundamental aspects of testing, and for the longest time, the global testing community has never been in agreement about those. Don’t get me wrong, I think that’s a good thing. Consensus is highly overrated. Argument and disagreement are crucial if we are to move forward, which is another reason not to try to create a one-size-fits-all standard in a field that is still highly innovative and developing.

Those are just a couple of issues I have with ISO 29119, and that’s before we even start talking about the archaic and long-since-discredited models of testing that this standard has presented so far, or the motivations behind the standard.

uTest: Which were some of the most impactful or memorable sessions for you personally from the 2014 edition of Let’s Test?

JJ: I very much appreciate all the experience-based talks as well as the inspirational or innovation-focused ones, and it wouldn’t be Let’s Test without them, but my favorite sessions are the experiential workshops and the learning that comes from doing and experiencing situations firsthand in those simulated environments.

Because of that, my favorite session last year was Steve Smith’s experiential keynote, in which the entire conference participated. It was a 150+ person simulation that ended with fascinating presentations from the participants, and observations from Steve Smith. I don’t think either Steve or we organizers really knew if a simulation that big would work before we tried it, but we’re never content with just doing what we did the year before. We constantly try to raise the bar for both the content and format of Let’s Test.

uTest: You have had Let’s Test in Europe for several years now, and piloted Let’s Test Oz in Australia last year. Are there other areas where you want to bring context-driven testing or see it emerge more?

JJ: Absolutely! We’ve done smaller Let’s Test events (called Tasting Let’s Test) in both the Netherlands and South Africa during 2014, and we’re planning another Let’s Test Oz, and trying to find a good date that would fit well with other things going on in the near future.

The next big thing though is an upcoming three-day Let’s Test event in South Africa in November 2015 that we’ll be releasing more information on in the coming weeks at our site and on Twitter @Letstest_conf.

uTest: Could you give us a preview of what may be different at Let’s Test this year?

JJ: We’ve really tried to turn up the number of workshops for Let’s Test 2015. Like I said previously, there’s always a need for great talks and experiences at Let’s Test. However, given the residential, almost retreat-like format of the conference, we felt that what really gets people talking is hands-on sessions where we spend more than an hour on a certain topic.

So for Let’s Test 2015, we have an unprecedented 26 workshops and tutorials of different sizes lined up during the three days of the conference. Several of these are three-plus hours in length to make sure there’s enough to listen to, experience, debrief and discuss for everyone participating.

Not a uTester yet? Sign up today to comment on all of our blogs, and gain access to free training, the latest software testing news, testing events, opportunities to work on paid testing projects, and networking with over 150,000 testing pros. Join now.

The post Q&A: ‘Let’s Test’ Leader Talks Global Reach of Context-Driven Testing, Previews Conference appeared first on Software Testing Blog.

Categories: Companies

100K Celebration Podcast Recording

In preparation for Jenkins 100K celebration, I'm going to record a one-time podcast with Dean Yu, Andrew Bayer, and R. Tyler Croy.

My current plan is to go over the history of the project, how big the community was back then, how we grew, where we are now, and maybe a bit about the future.

But if you have any other suggestions/questions that you'd like us to discuss, you have 3 or 4 more hours to send in that suggestion! Your feedback would help us make a better recording, so please don't hesitate to tell us.

Categories: Open Source

Very Short Blog Posts (26): You Don’t Need Acceptance Criteria to Test

DevelopSense Blog - Tue, 02/24/2015 - 22:08
You do not need acceptance criteria to test. Reporters do not need acceptance criteria to investigate and report stories; scientists do not need acceptance criteria to study and learn about things; and you do not need acceptance criteria to explore something, to experiment with it, to learn about it, or to provide a description of […]
Categories: Blogs

Sauce Labs Appoints Technology Veteran Charles Ramsey as Company’s First Chief Revenue Officer

Sauce Labs - Tue, 02/24/2015 - 22:00

SAN FRANCISCO, CA – Sauce Labs, Inc., the leading cloud-based web and mobile application testing platform, today announced that it has appointed Charles Ramsey as the company’s first Chief Revenue Officer (CRO). Ramsey brings more than 25 years of industry experience and insight to his role at Sauce Labs. He will be responsible for all customer-facing areas, including sales, business development, and marketing, as well as continuing to build on Sauce Labs’ record 2014 results as the company extends its leadership position in the booming automated testing market.

“Charles is an innovative strategist, committed to building strong relationships with customers and partners,” said Jim Cerna, CEO of Sauce Labs. “His demonstrated ability to strategically grow companies will help us address the exploding demand we are seeing in the market for our technology. With Charles’ experience, leadership, and track record of repeated successes, we are poised to continue our rapid growth trajectory through 2015 and beyond.”

“Sauce Labs is well-positioned to take advantage of the dramatic proliferation of web and mobile apps across a variety of devices and operating systems. I look forward to working with the entire Sauce team to further the company’s market leadership and growth by bringing new levels of innovation, customer experience, and value to the marketplace,” said Ramsey.

Sauce Labs provides an instantly scalable testing cloud that is optimized for continuous integration (CI) and continuous delivery (CD). When tests are automated and run in parallel on virtual machines across multiple browsers and platforms, testing time is reduced and developer time is freed up from managing infrastructure. When paired with a CI system, developers can easily test web, hybrid and native applications early on in their development cycles, continuously and affordably. Sauce Labs currently supports more than 480 browser, operating system and device platform combinations.
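The parallel model described above can be sketched in miniature: treat each browser/platform combination as an independent job and fan the same suite out concurrently, so total wall-clock time approaches that of the slowest single run. The matrix and the `run_suite` stub below are illustrative assumptions, not Sauce Labs’ actual API:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative browser/platform matrix -- a tiny slice of the 480+
# combinations the announcement mentions.
PLATFORMS = [
    {"browser": "chrome", "platform": "Windows 8.1"},
    {"browser": "firefox", "platform": "Linux"},
    {"browser": "safari", "platform": "OS X 10.10"},
]

def run_suite(caps):
    # Stub: in a real setup this would open a remote WebDriver session
    # against the testing cloud and execute the suite there.
    return {"caps": caps, "passed": True}

def run_all(platforms):
    # Run one job per combination in parallel and collect the results.
    with ThreadPoolExecutor(max_workers=len(platforms)) as pool:
        return list(pool.map(run_suite, platforms))

results = run_all(PLATFORMS)
```

Paired with a CI system, a step like this runs on every commit, which is what makes the "test early and continuously" claim practical.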

Prior to joining Sauce Labs, Ramsey was an early member of the Quest Software management team, where he served as vice president of World Wide Marketing and Sales. He is a former Venture Partner at JMI Equity and has also served on the Board of Directors at notable companies such as Configuresoft, Inc. and ServiceNow, Inc. Early in his career, Ramsey rose to vice president, North America Sales for Computer Intelligence, a division of Ziff Davis, after beginning his career with the IBM National Accounts Division in a variety of sales assignments. He has a Bachelor of Arts from the University of California, San Diego and a Master of Information Management from the American Graduate School of International Management.

Helpful Links

About Sauce Labs
Sauce Labs is the leading cloud-based web and mobile application automated testing platform. Its secure and reliable testing infrastructure enables users to run JavaScript unit and functional tests written with Selenium and Appium, eliminating the time and expense of maintaining a test grid. With Sauce Labs, organizations can achieve success with continuous integration and delivery, increase developer productivity and reduce infrastructure costs for software teams of all sizes.

Sauce Labs is a privately-held company funded by Toba Capital, Salesforce Ventures, Triage Ventures and the Contrarian Group. For more information, please visit

Categories: Companies

Evaluating OSS logistics solutions? Consider these 9 tips.

Sonatype Blog - Tue, 02/24/2015 - 21:08
With well over 17 billion open source components downloaded from public repositories in 2014, it is clear that more software development organizations are assembling software from component building blocks. In fact, Gartner reports that by 2016 the vast majority of mainstream IT organizations will...

To read more, visit our blog at
Categories: Companies

How to Performance Monitor All Your Applications on a Single Dashboard

It’s become easy to monitor applications that are deployed on hundreds of servers, thanks to advances in application performance management tools. But the more data you collect, the harder it is to visualize the health state in a way that a single dashboard tells you both the overall status and the […]

The post How to Performance Monitor All Your Applications on a Single Dashboard appeared first on Dynatrace APM Blog.

Categories: Companies

Master the Essentials of UI Test Automation: Chapter Two

Telerik TestStudio - Tue, 02/24/2015 - 18:00
You’re reading the second post in a series that’s intended to get you and your teams started on the path to success with your UI test automation projects. After its completion, this series will be gathered up, updated/polished and published as an eBook. We’ll also have a follow-on webinar to continue the discussion.
Categories: Companies

VectorCAST 6.3 Provides Internet of Things (IoT) Testing

Software Testing Magazine - Tue, 02/24/2015 - 17:22
Vector Software, a provider of software solutions for robust embedded software quality, today announced the release of VectorCAST™ 6.3, the most Internet of Things (IoT)- and machine-to-machine (M2M)-ready embedded test suite. Building on the embedded domain expertise Vector Software has developed over the last 20 years, version 6.3 provides a new micro harness architecture designed for the special needs of IoT / M2M applications. The new architecture is critical for IoT / M2M applications because of the smaller microprocessors and limited resources available to these applications. Analysts are projecting ...
Categories: Communities

LDRA Secures Patent for Software Traceability

Software Testing Magazine - Tue, 02/24/2015 - 16:09
LDRA, a provider of standards compliance, automated software verification, source code analysis and test tools, has secured a patent for TBmanager, its premiere software life cycle traceability and verification system. TBmanager enables developers to bidirectionally link industry-standard objectives, functional requirements, design, code, and test artifacts to the people responsible for those activities. By helping define, enforce, and demonstrate a comprehensive verification workflow, TBmanager provides companies with the audit trail needed to achieve regulatory compliance of safety-critical standards. TBmanager enables development and verification organizations in industries such as aerospace, automotive, industrial controls, ...
Categories: Communities

How To Manage Time with people who make stuff

The Social Tester - Tue, 02/24/2015 - 13:30

Paul Graham has a great article on Manager schedules and Maker schedules on his blog. It’s a post I revisit every week as I strive to work out how to balance my management tendencies with the NVM Dev team’s desire...
Read more

The post How To Manage Time with people who make stuff appeared first on The Social Tester.

Categories: Blogs

Four things that blew my mind with “Typemock Isolator”

The Typemock Insider Blog - Tue, 02/24/2015 - 12:49

I just started working at Typemock a few months ago, and the company decided that we need to take an hour a week to write about our experience, or about anything, but it has to be about the company. Believe it or not, I actually like writing, and not only code. So yeah… I decided […]

The post Four things that blew my mind with “Typemock Isolator” appeared first on The Unit Testing Blog - Typemock.

Categories: Open Source

Organizing Scripts as Modular Test Suites

The Seapine View - Tue, 02/24/2015 - 12:00

A good way to organize scripts in QA Wizard Pro is to think of each script as a test case. As you work with each script, determine if common functionality exists between them. For example, if several scripts include a sequence that logs in to your application, you could isolate the login action statements in a separate utility script that each script could call. You could create other utility scripts to perform additional actions, such as logging out of your application, configuring user options, or interacting with custom controls.

You can also write scripts to set up and tear down a test configuration to ensure the test environment is the same each time the suite of test cases is run. A setup script might populate a database with usernames and credentials before running the suite. A tear down script might remove those same usernames after the suite has run.

Once all your test case scripts are in working order, create a main test suite script to run the entire suite. The following screenshot shows an example test suite script that first calls the setup script, then each of the test case scripts, and finally the tear down script.

[Screenshot: Modular Workspace]

Tip: QA Wizard Pro’s Run Main Script functionality allows you to set the main script for the workspace, so at the click of a button the entire suite of test cases will run. To set the test suite script as the main script, right-click it in the Workspace pane and choose Set as Main Script. To run the main script, click Run Main Script on the default toolbar.


Categories: Companies

Page Objects Done Right

Software Testing Magazine - Mon, 02/23/2015 - 18:27
This talk walks the audience step by step through building tests using the Page Object design pattern, making several attempts until reaching the current recommendation. We’ll see the dos, don’ts and common pitfalls. The presentation also covers the Page Factory design pattern, best practices for dealing with asynchronous behavior, and how to remove the deadly “random sleeps”. Video producer:
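As a rough illustration of the pattern, and of replacing random sleeps with condition-based waits, here is a self-contained sketch; the `FakeDriver`, the selector, and the timings are invented stand-ins, not the talk’s actual code:

```python
import time

class FakeDriver:
    """Stand-in for a WebDriver; 'renders' an element after a short delay."""
    def __init__(self):
        self._ready_at = time.monotonic() + 0.05

    def find_element(self, selector):
        if time.monotonic() < self._ready_at:
            return None  # element not rendered yet
        return {"selector": selector, "text": "Welcome"}

def wait_for(condition, timeout=2.0, poll=0.01):
    # Poll a condition until it returns a truthy value -- the disciplined
    # replacement for an arbitrary time.sleep().
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

class LoginPage:
    """Page object: tests call intent-level methods, never raw selectors."""
    def __init__(self, driver):
        self.driver = driver

    def greeting(self):
        element = wait_for(lambda: self.driver.find_element("#greeting"))
        return element["text"]

page = LoginPage(FakeDriver())
message = page.greeting()
```

If the greeting’s markup changes, only `LoginPage` needs updating; the tests that call `greeting()` are untouched, which is the pattern’s main selling point.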
Categories: Communities

The Good, the Bad, and the Ugly of A/B Testing

Testing TV - Mon, 02/23/2015 - 17:49
A/B testing is a great technique to experiment with changes to your product. At Etsy we make extensive use of A/B tests to try out ideas; we’ve got 30+ running right now. Although the concept is simple, the execution is a bit trickier than you’d think. This covers the common, and a few of the not […]
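The assignment side of an A/B test can be shown concretely. One common scheme (presented here as a generic sketch, not Etsy’s actual implementation) hashes the user ID together with the experiment name, so each user lands in a stable bucket that is independent across experiments:

```python
import hashlib

def assign_variant(user_id, experiment, variants=("control", "treatment")):
    # Hash user + experiment so assignment is deterministic per user
    # for a given experiment, but uncorrelated between experiments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always sees the same variant for a given experiment.
v1 = assign_variant("user-42", "new-checkout")
v2 = assign_variant("user-42", "new-checkout")
```

Stable assignment matters because a user who flips between variants mid-experiment contaminates both groups’ metrics.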
Categories: Blogs

ISO 29119: Why is the Debate One-Sided?

uTest - Mon, 02/23/2015 - 17:23

In August, the Stop 29119 campaign and petition kicked off at CAST 2014 in New York. In September, I wrote on the uTest Blog about why the new ISO/IEC/IEEE 29119 software testing standards are dangerous to the software testing community and good testing.

I was amazed at the commotion ‘Stop 29119’ caused. It was the biggest talking point in testing in 2014. Over six months have passed, and it’s time to look back. What has actually happened?

The remarkable answer is – very little. The Stop 29119 campaigners haven’t given up. There has been a steady stream of blogs and articles. However, there has been no real debate; the discussion has been almost entirely one-sided.

There has been only one response from ISO. In September, Dr. Stuart Reid, the convener of the working group that produced the standard, issued a statement attempting to rebut the arguments of Stop 29119. That was it. ISO then retreated into its bunker and ignored invitations to debate.

Dr. Reid’s response was interesting, both in its content and the way it engaged with the arguments of Stop 29119. The Stop 29119 petition was initiated by the board of the International Society for Software Testing. ISST’s website had a link to the petition, and a long list of blogs and articles from highly credible testing experts criticizing ISO 29119. It is a basic rule of debate that one always tackles an opponent’s strongest points. However, Dr. Reid ignored these authoritative arguments and responded to a series of points that he quoted from the comments on the petition site.

To be more accurate, Dr. Reid paraphrased a selection of the comments and criticisms from elsewhere, framing them in a way that made it easier to refute them. Some of these points were no more than strawmen.

For example, Cem Kaner argued that IEEE adopts a “software engineering standards process that I see as a closed vehicle that serves the interests of a relatively small portion of the software engineering community… The imposition of a standard that imposes practices and views on a community that would not otherwise agree to them, is a political power play.”

Dr. Reid presented such arguments as “no one outside the Working Group is allowed to participate” and “the standards ‘movement’ is politicized and driven by big business to the exclusion of others.”

These arguments were then dismissed by stating that anyone can join the Working Group, which consists of people from all parts of the industry. Dr. Reid also emphasized that “consensus” applies only to those within the ISO process, failing to address the criticism that this excludes those who believe, with compelling evidence, that ISO-style standardization is inappropriate for testing.

These criticisms had been made forcefully for many years, in articles and at conferences, yet Dr. Reid blithely presented the strawman that “no one knew about the standards and the Working Group worked in isolation.” He then effortlessly demolished the argument that nobody was making.

What about the content? There were concerns about how ISO 29119 deals with Agile and Exploratory Testing. For example, Rikard Edgren offered a critique arguing that the standards tried but failed to deal with Agile. Similarly, Huib Schoots argued that a close reading of the standards revealed that the writers didn’t understand exploratory testing at all.

These are serious arguments that defenders of the standard must deal with if they are to appear credible. What was the ISO response?

Dr. Reid reduced such concerns to bland and inaccurate statements that “the standards represent an old-fashioned view and do not address testing on agile projects” and “the testing standards do not allow exploratory testing to be used.” Again, these were strawmen that he could dismiss easily.

I could go on to highlight in detail other flaws in the ISO response — the failure to address the criticism that the standards weren’t based on research or experience that demonstrates the validity of that approach, the failure to answer the concern that the standards will lead to compulsion by the back door, the failure to address the charge from the founders of Context-Driven Testing that the standards are the antithesis of CDT, and the evasion of the documented links between certification and standards.

In the case of research, Dr. Reid told us of the distinctly underwhelming claims from a Finnish PhD thesis that the standards represent “a feasible process model for a practical organization with some limitations.” These limitations are pretty serious — “too detailed” and “the standard model is top-heavy.” It’s interesting to note that the PhD study was produced before ISO 29119 part 3 was issued; the study does not mention part 3 in the references. The study can therefore offer no support for the heavyweight documentation approach that ISO 29119 embodies.

So instead of standards based on credible research, we see a search for any research offering even lukewarm support for standards that have already been developed. That is not the way to advance knowledge and practice.

These are all huge concerns, and the software testing community has received no satisfactory answers. As I said, we should always confront our opponents’ strongest arguments in a debate. In this case, I’ve run through the only arguments that ISO have presented. Is it any wonder that the ‘Stop 29119′ campaigners don’t believe we have been given any credible answers at all?

What will ISO do? Does it wish to avoid public discussion in the hope that the ISO brand and the magic word “standards” will help them embed the standards in the profession? That might have worked in the past. Now, in the era of social media and blogging, there is no hiding place. Anyone searching for information about ISO 29119 will have no difficulty finding persuasive arguments against it. They will not find equally strong arguments in favor of the standards. That seems to be ISO’s choice. I wonder why.

James Christie has 30 years’ experience in IT, covering testing, development, IT auditing, information security management and project management. He is now a self-employed testing consultant based in Scotland. You can learn more about James and his work at his blog, and follow him on Twitter @james_christie.


The post ISO 29119: Why is the Debate One-Sided? appeared first on Software Testing Blog.

Categories: Companies

IQNITE Europe, Düsseldorf, Germany, April 28-30 2015

Software Testing Magazine - Mon, 02/23/2015 - 17:10
IQNITE Europe is a three-day conference taking place in Düsseldorf (Germany) and focusing on all the aspects of software testing and software quality. All the presentations are in German. In the agenda of IQNITE Europe you can find topics like “Test Center of Excellence, bring it to the next level with ITIL”, “Next Generation QA – How Can We Forecast the Quality Assurance of Tomorrow”, “Are You Still Testing or Are You Already Playing?”, “360 Test Coverage of Mobile Applications”, “Economics of the Test Factory”, “A field report on full automated ...
Categories: Communities

How To Amaze Your Users On The First Date

Testlio - Community of testers - Mon, 02/23/2015 - 16:36

Going on the first date with someone is a lot like downloading a new app.

Someone is giving you their time to test whether you deserve to be a continued part of their life. The first date can be the preface to a lasting relationship.

This is the exact same process as creating a successful app.

First impressions are everything.

Whether it’s a web app, mobile app, or a dashing woman at the bar – when you introduce yourself, you need to be delicate with your approach.

On the first date, your goal is to make sure you linger in their mind when you two go your separate ways.

Creating an app with 5 star reviews on the app store is no different.

Remember, you are the one who sets the tone of every user’s journey – make sure it’s unforgettable.

Much like a pickup artist’s playbook, there are plenty of ways to onboard your users to make a lasting first impression.

These are the most common user onboarding methods I’ve seen:


1. Intro Slide Guide

In the dating world, this will give you the same amount of success as buying someone a drink. Sometimes it works, most times it doesn’t.

However if it is well executed, you can leave a lasting impression.

With this method, companies take screenshots of their app and show them to new users, with a description explaining the app in further detail.

I’m not a huge fan of this style. It’s overdone, and people likely do it because they fail to think of a creative way to guide users through the app.

The biggest problem I have with this style is that people fail to remember most of the information presented on the slides. The point of your introduction slides is to help your users navigate through your app as if they’re return visitors.

Aside from one or two slides, most of the information is lost, because you’re not using the feature being described at that moment. We read it, say “ah, cool!” and then forget about it.

This style can carry a benefit. It’s a large one too. If done well, introduction slides can inform your user of the unique value proposition of your app. Imagine getting every one of your users to your app’s “A-HA! Moment” before they even create an account.

[Screenshot: Mailbox app onboarding]

Mailbox does an incredible job with their introduction screens. Instead of swiping through to the next tutorial, they force you to interact with the screenshot as if it’s your own e-mail.

I’ve seen this wow people before they connect their own e-mail accounts. That’s what you want. Interactive and educational introduction screens like this turn friction into dust. Get people excited at this stage and they’ll be begging you to let them be users.

With intro slides, you can explain to your users why you’re so special in more depth than with any of the other methods I list below.


2. UI Overlays

If introduction slides are the simple, thoughtless introduction, this next one is the dating equivalent of peacocking.

Like a male peacock uses his feathers to attract a mate, peacocking involves using a man’s clothing and adapting his behavior in an over the top and flashy manner, for the purpose of attracting women — but not necessarily a mate.

I’m a big fan of this onboarding method, but only if it’s very clear what your app does. If you have a complicated app with multiple use cases, I would steer clear of only relying on this.

What’s great about these is that they’re not distracting at all, and only come up when they’re relevant. In Slack’s case, they’re little circles you can choose to ignore until you decide to interact with the element in question.

[Screenshot: Slack UI overlay]

A flaw with the previous method was giving information before people had expressed any interest in the feature. This method solves that. If a person is about to make a search in Slack, they’re shown additional information they’ll be extra attentive to, because it’s directly relevant to helping them achieve their goal.

Once they use it, it’ll be much harder to forget because they’ve practiced in the application – therefore the value of that feature is immediately presented.

[Screenshot: Slack UI overlay, clicked]

There’s a caveat with using this method, though. I would not recommend it unless your app can be summed up in a few words (e.g., Slack: chat service). Your goal at this point is not to stress the value of your app, but rather the features that set you apart from your competitors.


3. Open Instructions

I would compare this onboarding method to impressing someone on the dance floor or with an instrument.

It can work really well, but only if you have the ability.

These are incredibly cool. If the goal of your onboarding process is to teach how to be a basic user of your app, this is the way to go.

When a user opens your app for the first time, they are shown the home page; however, there are instructions that require them to interact with the app. This method is like a love child between methods #1 and #2. Think of it as learning on the job.

[Screenshot: Facebook Paper onboarding]

Facebook did a great job here. Given that Paper has an entirely gesture-driven design, they knew it was very important to teach their users how to navigate.

This method teaches users how to use your app better than any other way.

The flaws of this method are the same as UI overlays’: you won’t be able to go too deep in communicating the unique value of your app.


4. Nothing

Seriously. Nothing. This is a method.

This is like going to a bar or club, sitting back, and expecting attractive members of the opposite sex to come to you.

In your app, users immediately understanding its value is about as likely as that scenario.

It does happen, however, and if you go this route, you had better be damn sure of one of two things (though preferably both):

  1. People know without a doubt what you do before they download your app.
  2. Your app is dead simple.

If you can claim #1, you’re already showing signs of being a unicorn.

When people come to your app already knowing who you are and the value you will bring them, you’re in a good position. You could require their bank account and social security information and they still wouldn’t be deterred from signing up.

If you can claim #2, you had better be sure it’s as simple as you think.

A dead simple app typically has one dominant element on its opening screen and a single clear action to take.

Examples of this are Instagram and Snapchat.

When you log into Instagram, you’re presented with a feed of photos that takes up the vast majority of the screen real estate. They want you to do one of two things: swipe up or take a photo yourself.

With Snapchat, there’s a camera. Assuming you have already imported your contacts, there are two actions they want you to take – snap a photo or look at your friends’ stories – both of which add value to the user’s experience.

The clear problem with this method arises when the previously mentioned conditions aren’t met. If they aren’t, you’re throwing your users into a black hole.

If you expect your users to figure your app out on their own then you shouldn’t expect a second date with them.



Treat every new user like the first date with the mate of your dreams. Let them know who you are and why you deserve a piece of their thoughts. On the first date, the first impression is everything.

I’ve encountered four main ways to make that initial first impression go smoothly:

  1. Introduction Screenshots
  2. UI Overlays
  3. Open Instructions
  4. Nothing

Each method has its own pros and cons, so be sure to fully evaluate your app before committing to one. If you need help figuring that out, feel free to send a tweet to me @willietran_ and I’ll be more than happy to take a look at your app with you.

Thanks a ton to Samuel Hulick for the awesome cover photo. His site has always been an awesome source of information on how to think with my users in mind.

The post How To Amaze Your Users On The First Date appeared first on Testlio.

Categories: Companies

Letter to a starting tester

PractiTest - Mon, 02/23/2015 - 11:05

I’ve been working on the State of Testing survey and report for the last couple of months, and as part of this project I’ve talked to a large number of testers and testing teams.

In some of these talks I explained how 17 years ago I started working as an accidental tester, how I tried to escape from testing during the first 3 or 4 years of my career, and how somewhere along the road, without really noticing, I found my testing vocation.

After one of these chats I realized that when I was beginning my work as a rookie tester, I really needed a mentor to help me get started on this journey.  There were many times when I would have appreciated professional guidance and advice, especially during some of my moments of doubt.

And so, I decided to write here the email I would have sent to myself back when I started testing to help me cope with some of the main challenges ahead.

Maybe this email is only cheap therapy for myself, but there is also a chance that it may help some of the testers who are only now starting their professional endeavours.

An email to Joel, a starting tester, back in 1998

Dear Joel,

I wanted to send you this mail to help you during some of the difficult times and the challenges that you will encounter as you start your professional career as a tester.  What I will tell you may sound lame and trivial at times, but these are the things you will need to hear (and do) to cope with a number of the situations that await you in your coming career.

No one really knows what you need to do better than you do; in the end, you will need to figure it out on your own.

Many times you will feel that you don’t understand what’s expected from you as a tester.

People want you to find the bugs and test the product, but they don’t have time to explain what the product really does and how, they will not be open to criticism, they will also hate it when you bring them bad news, and on top of everything else they also expect you to complete all your work within a couple of minutes…

Even though you did not go through any special training, you are suddenly the expert in testing, and sometimes this new responsibility will weigh heavily on your shoulders.

As strange as it sounds, no one in your team knows how to do your work better than you do (this will be especially true in the start-up companies where you will work at the beginning of your career).  It will be up to you to learn and figure out your job, and to define your tasks in the best way you can with the resources at your disposal.

Don’t count on the knowledge of others in your team to rescue you from your responsibility…

There are other testers out there that you can talk to; look for them and share your knowledge and questions.

You are not alone!

Even if you are the only tester in your company, there are still other companies nearby where you will find other testers.

One of the biggest and most important things you will do is to lose your fear of “public embarrassment” and reach out to other testers, to talk with them about your questions, challenges and dilemmas!

You will be amazed about how similar your issues are, and how much you can learn by simply talking to them and coming up with shared ideas on how to solve your professional predicaments.

This will help you learn that asking questions is not a sign of being weak or dumb, but a sign of being professionally confident and smart…

Testing is as much about learning and asking questions, as it is about pressing buttons and reporting bugs.

Hand-in-hand with the last point, you should also learn to ask the people on your team questions about your product; this will help you become a better tester.

Many times a developer will come to you to explain a new feature or a change, and after he/she is done explaining (or at least thinks they are done explaining), you will have more questions than you started with.  When this happens, don’t be ashamed to ask more questions and to request that this person explain the point from a different angle or using different examples.

A number of (nice) developers may forget that you are not aware of all the technical details, or they will start their explanations based on other assumptions that are not known to you, or they are simply bad communicators and so explain stuff in the worst possible way…

This is just the way it is, and you don’t need to be afraid to keep asking until things are clear enough for you to do your work.

It is as much their job to make sure you understand how to test their work as it is to write the code correctly.

Whenever you get a feeling that something is not right, don’t keep quiet!  Understand what bothers you and communicate this to the team.

At times you will get a feeling that something is not right with your product or your process, and it will be your instinct to think that it is you who made a mistake during your tests.

This will surely be the case many times, but once you have re-checked your assumptions and your procedures and still have that feeling that something is not right, make sure to communicate it to others.

Stand your ground when you think you are right.

Sometimes it may be a matter of interpretation: you think the feature should behave “this way” while the developer thinks it should behave “that way”.

When this happens, go to someone else who can help you make the correct choice.  Try someone who knows the customer and will be able to provide feedback based on their knowledge of how the users work.

The same goes for the times when you see your project is going to be delayed, but people are behaving as if everything were normal and OK.

If you see that features are slipping, and that the quality of the deliverables is below where you expected it to be at this point, make sure to raise a flag and wave it for the whole team to see.

After all it is your job to ensure the quality of the process and not only of the product under test!

You are not the gatekeeper of your product by brute force!

Having said all this, it is not your job to stop the bugs (or the versions containing them) from walking out the door.

Your job is to provide visibility into the status of your product and your project, and give everyone on the team the information they need to make their decisions.

After you have given everyone the correct information, they will need to make the choice of whether or not to release the product into the field.  You may be part of the team making that decision, but your voice will never be the only one that counts!

Remember that you may not have all the information related to marketing, sales, competitors, or a range of other factors that are usually involved in the process of deciding whether to release a version to the field or not.

Have fun…

Testing should not be 100% serious all the time!
It is OK to have fun and to joke around with the people in your company; just remember that there are times for joking and times for being serious.

And one last thing! 

When you are working in the Silicon Valley around 1999, look for a company called Google, and ask them if they need a good tester.  Even if they pay you only in stock take the job…  It will be worth it ;-)

Categories: Companies

Reducing Teamicide with Lightning Bolt shaped Teams

Teamicide is the act of purposefully disbanding a team once it is done with a task or project.  While this may not sound particularly negative at first glance, an organization loses the benefits of team productivity and team cohesion each time it disbands a team.  When teams form, they take time to gel. That gelling is an organizational investment whose return often isn't realized.
To gain some perspective, let’s take a moment to review Tuckman's model of the gelling process.  Established by Bruce Tuckman in 1965, this model has four sequential phases (Forming, Storming, Norming, and Performing) that teams go through before they can function effectively as a unit: knowing each other's strengths, self-organizing around the work, achieving optimal flow, and reducing impediments.  In relation to teamicide, if a team hasn't yet reached the performing stage, the organization will have invested the time and team-building effort without actually gaining the benefits of a performing team.   The irony is that while companies focus a lot on return on investment (ROI) in relation to the product, they inadvertently achieve no ROI on team building, because they disband teams before allowing them to reach the performing stage.  
The next question is: why does management disband teams?  Do they not understand the harm they do to their organization when they disband teams?  Do they not respect the benefits of a performing team?  Or perhaps they apply a “move the team to the work” method when they should really be applying a “move work to the team” method.   The “move the team to the work” method may occur either because there is a “form a team around a project” mindset or because of a belief that existing teams don’t have all of the skills or disciplines needed to handle the new types of work.   
So how do we solve this problem and gain the most from performing teams?  The first change that must be made is to move to (or experiment with) the “move work to the team” method.   This assumes that we have teams with the skills and disciplines to handle a variety of work.  Therefore, the second change is to invest in building lightning-bolt-shaped teams: teams where each team member has a primary skill, a secondary skill, and even a tertiary skill.
The shape of a lightning bolt has one spike going deep (the primary skill) and at least two additional spikes of lesser depth (the secondary and tertiary skills).   The purpose of having various depths of skill is for the team to be able to handle a broad range of work, and for team members to be able to step up and fill gaps where other team members lack a skill or need help.  Note: some have used the term “T-shaped” teams, but I find that the lightning-bolt shape is more apropos to the several spikes of skills, at various depths, that are needed.  
Creating a lightning-bolt-shaped team takes an investment in education: a commitment to educate each team member in both a secondary and a tertiary skill.  As an example, say a developer’s primary skill is programming.  As a secondary skill, they can learn how to build database schemas, and as a tertiary skill, they can write unit tests and run test cases.  The long-term benefit is that if team members develop additional skills, the team can take on a much wider range of work and can be kept together, allowing the organization to gain the benefits of a high-performing team.   This reduces teamicide and increases the organization’s ability to produce more high-quality product.
Have you seen teamicide occurring in your organization?  Have you seen the benefit of allowing a team to remain together long enough to become high performing?  If so, what mix of skills was, or is, prevalent on the team? 
Categories: Blogs
