
Feed aggregator

TestComplete Integrates with LoadComplete

Software Testing Magazine - Tue, 11/03/2015 - 19:56
SmartBear Software has announced a new version of TestComplete that redefines how functional and performance testing is done through script reuse. One of the world’s most recognized automated testing tools, TestComplete is now integrated with LoadComplete, SmartBear’s load testing solution, so you can repurpose TestComplete functional test scripts for performance testing in LoadComplete. Expanding coverage while reducing testing time can be difficult to achieve for ...
Categories: Communities

EuroSTAR 2015: Pictures From The Event

uTest - Tue, 11/03/2015 - 19:26

As we head into the EuroSTAR Conference expo today, let’s take a moment to check out some amazing photos. Our very own uTest member, Constantine Buker, is our onsite tester correspondent. Check in this week as we have more EuroSTAR-related content pieces on the way. Here’s what he has shared with us so far. Also, be sure to […]


Categories: Companies

Tools for Mobile Apps Testing

Software Testing Magazine - Tue, 11/03/2015 - 16:33
Testing mobile applications requires many different tools, for activities that range from testing communication to recording bugs. In this article, Dmitry Radchenko lists some of the tools that he uses in his daily mobile apps testing activities. Author: Dmitry Radchenko, TestMatick. A great number of applications and games for mobile devices are released every year. There are new players on the market that offer you ...
Categories: Communities


How to achieve high success with your job search by staying confident and focused

Panamo QA - Georgia Motoc - Tue, 11/03/2015 - 07:10

Welcome to the 5′ BA Podcast. In the last podcast episode we talked about transferable skills and how you can use those skills for a successful … Continue reading →
Categories: Blogs

Now Available! Team Foundation Server Integration with TestTrack

The Seapine View - Tue, 11/03/2015 - 01:30

We’re happy to announce that Team Foundation Server (TFS) integration with TestTrack is now available. With this integration, developers using TestTrack 2015.1.2 or later can attach TFS files and changesets to issues, test cases, or requirements without leaving their development environment. It also provides end-to-end traceability between TestTrack artifacts and TFS source files.

Matrix report with TFS source files

See TestTrack Third-Party Integrations for supported TFS versions.


Categories: Companies

New Contest Alert: Update Your Profile and Win an Apple TV!

uTest - Mon, 11/02/2015 - 22:37

With so many exciting releases in our midst (the new version of Apple TV was just released on Oct. 30, and the new version of uTest is coming soon), we have decided to launch a contest that will both prepare uTesters for the new uTest site and give them an opportunity to win a cool new gadget. Since […]


Categories: Companies

Instant Load Engines retired

Web Performance Center Reports - Mon, 11/02/2015 - 18:29
Beginning with the 6.6 release, we will no longer be producing our Instant Load Engines ISOs. We know that some of our customers like the convenience of these, and we are glad we were able to provide this feature as long as we have. We were one of the first load-testing companies to support load generation from the cloud, and we have watched as usage of the Instant Load Engines has dwindled while usage of our cloud-based engines has increased. The usage of the Instant Load Engine feature no longer justifies the continued development and maintenance required to keep it updated. But for the next few releases, … Continue reading »
Categories: Companies

TestBash New York, New York, USA, November 5-6 2015

Software Testing Magazine - Mon, 11/02/2015 - 16:58
TestBash New York is a two-day conference dedicated to software testing. It provides an opportunity for software testers in New York to learn and network in the software testing community. The first day is dedicated to workshops and the second to presentations. In the agenda of the TestBash New York conference you can find topics like “How to Be an Outstanding Leader”, “Dynamic Lifecycle Shifting”, “Tales ...
Categories: Communities

Capturing Comments About Actions Performed in Surround SCM

The Seapine View - Mon, 11/02/2015 - 08:00

Surround automatically tracks historical data about actions that users perform, such as adding, checking in, and moving files. But what if a team member makes a change that another person has questions about later? What if the person who made the original change has no recollection of it or isn’t available to help? To make sure this important information is captured, you can require users to enter comments when performing actions.

To require comments, the Surround SCM administrator sets a minimum comment length for actions in the server options. Previously, the minimum length applied to comments for all actions. Starting with Surround SCM 2015.1, you can specify different lengths for different actions, giving you more flexibility. For example, you may want to require a minimum of 20 characters for check-ins, but not require comments for file renames.

To set minimum comment lengths, choose Tools > Administration > Server Options. Select Comments from the Mainline Options category.


Enter the minimum number of characters required in comments for each action. If you do not want to require comments for an action, enter 0.

When a user performs an action that requires comments, they cannot complete the action until they enter the required minimum number of characters.

Keep in mind that these settings apply to all branches and users, but you can override them for specific mainline branches.

See the Surround SCM help for more information.



Categories: Companies

Adopt a plugin!

With more than a thousand public plugins in the Jenkins community now, it should come as no surprise that some of them are no longer actively maintained. Plugin authors move on when they change jobs, or lose interest in the plugin, and that's fine. Plugins are hosted on the Jenkins project infrastructure after all, and when a maintainer moves on, others can continue their work.

The major problem of course is that it's often difficult to tell whether a plugin is still maintained (and there's just not a lot going on), or whether its maintainer has lost interest. Most plugins don't get released every few weeks, or even every few months, and still do their job just fine.

To connect plugins that aren't actively maintained with potential maintainers, we recently implemented the "Adopt-a-plugin" initiative: We built a list of plugins that are up for "adoption", and display a prominent message on the plugins' wiki pages. Anyone interested in taking over as a plugin maintainer can then contact us, and we'll set you up.

Are you interested in becoming a plugin maintainer? Maybe one of your favorite plugins isn't actively maintained right now. Check out the Adopt a Plugin wiki page for more details on this program, and a list of plugins that would benefit from your help.

Categories: Open Source

The wrong question: What percentage of tests have you automated?

Dorothy Graham Blog - Sun, 11/01/2015 - 19:24
At a couple of recent conferences, I became aware that people are asking the wrong question with regard to automation. There was an ISTQB survey that asked “How many (what percentage of) test cases do you automate?”. A delegate I talked to after my automation talk at another conference said that her manager wanted to know what percentage of tests were automated; she wasn’t sure how to answer, and she is not alone. It is quite common for managers to ask this question; the reason it is difficult to answer is that it is the wrong question.
Why do people ask this? Probably to get some information about the progress of an automation effort, usually when automation is getting started. This is not unreasonable, but this question is not the right one to ask, because it is based on a number of erroneous assumptions:

Wrong assumption 1)  All manual tests should be automated. “What percentage of tests” implies that all existing tests are candidates for automation, and the percentage will measure progress towards the “ideal” goal of 100%.
It assumes that there is a single set of tests, and that some of them are manual and some are automated. Usually this question actually means “What percentage of our existing manual tests are automated?”
But your existing manual tests are not all good candidates for automation – certainly some manual tests can and should be automated, but not all of them!
Examples: if you could automate a “captcha”, then the captcha isn’t working, as it’s supposed to tell the difference between a human and a computer. Judgement calls such as “Do these colours look nice?” or “Is this exactly what a real user would do?” also need a human. And some tests take too long to automate, such as tests that are not run very often or that are complex to automate.
Wrong assumption 2) Manual tests are the only candidates for automation. “What percentage of tests” also implies that the only tests worth automating are existing manual tests, but this is also incorrect. There are many things that can be done using tools that are impossible or infeasible to do when testing manually.
Examples: additional verification or validation of screen objects – are they in the correct state? When testing manually, you can see what is on the screen, but you may not know its state or whether the state is displaying correctly.
Tests using random inputs and heuristic oracles, which can be generated in large volume and checked automatically.
Wrong assumption 3) A manual test is the same as an automated test. “What percentage of tests” also assumes that a manual test and an automated test are the same - but they are not. A manual test consists of a set of directions for a human being to follow; it may be rather detailed (use customer R Jones), or it could be quite vague (use an existing customer). A manual test is optimised for a human tester. When tests are executed manually, they may vary slightly each time, and this can be both an advantage (may find new bugs) and a disadvantage (inconsistent tests, not exactly repeated each time).
An automated test should be optimized for a computer to run. It should be structured according to good programming principles, with modular scripts that call other scripts. It shouldn’t be one script per test, but each test should use many scripts (most of them shared) and most scripts should be used in many tests. An automated test is executed in exactly the same way each time, and this can be an advantage (repeatability, consistency) and a disadvantage  (won’t find new bugs).
One manual test may be converted into 3, 5, 10 or more automated scripts. Take for example a manual test that starts at the main menu, navigates to a particular screen and does some tests there, then returns to the main menu. And suppose you have a number of similar tests for the same screen, say 10. If you have one script per test, each will do 3 things: navigate to the target area, do tests, navigate back. If the location of the screen changes, all of those tests will need to be changed – a maintenance nightmare (especially if there are a lot more than 10 tests)! Rather, each test should consist of at least 3 scripts: one to navigate to the relevant screen, one (or perhaps many) scripts to perform specific tests, and one script to navigate back to the main menu. Note that the same “go to screen” and “return to main menu” script is used by all of these tests. Then if the screen is re-located, only 2 scripts need to be changed and all the automated tests will still work.
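To make the structure concrete, here is a minimal sketch in Python-style pseudocode; the app driver object, its methods, and the screen names are hypothetical stand-ins for whatever UI automation tool you use. The point is that the navigation steps live in two shared scripts, so every test calls the same “go to screen” and “return to main menu” scripts.

```python
# Hypothetical sketch: shared navigation scripts reused by many automated tests.
# "app" stands in for whatever driver/fixture your automation tool provides.

def go_to_customer_details(app):
    """Navigate from the main menu to the Customer Details screen."""
    app.open_menu("Customers")
    app.select("Customer Details")

def return_to_main_menu(app):
    """Navigate from any screen back to the main menu."""
    app.press("Home")

def test_edit_existing_customer(app):
    go_to_customer_details(app)          # shared navigation script
    app.fill("customer_id", "R Jones")   # test-specific steps
    app.click("Edit")
    assert app.status() == "saved"
    return_to_main_menu(app)             # shared navigation script

def test_add_new_customer(app):
    go_to_customer_details(app)          # same shared script
    app.click("New")
    app.fill("name", "A Smith")
    app.click("Save")
    assert app.message() == "Customer created"
    return_to_main_menu(app)             # same shared script
```

If the Customer Details screen is relocated, only the two navigation scripts need updating; the tests built on top of them keep working unchanged.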
But now the question is: how many tests have you automated? Is it the 10 manual tests you started with? Or should you count automated scripts? Then we have at least 12 but maybe 20. Suppose you now find that you can very easily add another 5 tests to your original set, sharing the navigation scripts and 4 of the other scripts. Now you have 15 tests using 13 scripts – how many have you automated? Your new tests never were manual tests, so have you automated 10 tests (of the original set) or 15?
Wrong assumption 4) Progress in automation is linear (like testing). A “what percent completed” measure is fine for an activity that is stable and “monotonic”, for example running sets of tests manually. But when you automate a test, especially at first, you need to put a lot of effort in initially to get the structure right, and the early automated tests can’t reuse anything because nothing has been built yet. Later automated tests can be written or constructed much more quickly than the earlier ones, because there will (should) be a lot of reusable scripts that can simply be incorporated into a new automated test. So if your goal is to have, say, 20 tests automated in 2 weeks, after one week you may have automated only 5 of those tests, but the other 15 can easily be automated in week 2. So after week 1 you have automated 25% of the tests, but you have done 50% of the work.
Eventually it should be easier and quicker to add a new automated test than to run that test manually, but it does take a lot of effort to get to that point.
Good progress measures. So if these are all reasons NOT to measure the percentage of manual tests automated, what would be a good automation progress measure instead? Here are three suggestions:
1) Percentage of automatable tests that have been automated. Decide first which tests are suitable for automation, and/or that you want to have as automated tests, and measure the percentage automated compared to that number, having taken out tests that should remain manual and tests that we don’t want to automate now. This can be done for a sprint, or for a longer time frame (or both). As Alan Page says, "Automate 100% of the tests that should be automated."
2) EMTE: Equivalent Manual Test Effort. Keep track of how much time a set of automated tests would have taken if they had been run manually. Each time those tests are run (automatically), you “clock up” the equivalent of that manual effort. This shows that automation is now running tests that are no longer run manually, and this number should increase over time as more tests are automated (see the sketch after this list).
3) Coverage. With automation, you can run more tests, and therefore test areas of the application that there was never time for when the testing was done manually. This is a partial measure of one aspect of the thoroughness of testing (and has its own pitfalls), but it is a useful way to show that automation is now helping to test more of the system.
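As a rough illustration of measures 1 and 2, here is a minimal Python sketch; the test names, manual timings, and run counts are made-up numbers used only to show the arithmetic.

```python
# Hypothetical sketch of two automation progress measures.

# Measure 1: percentage of automatable tests that have been automated.
automatable_tests = 40          # tests we decided are worth automating
automated_so_far = 30
print(f"Automated {automated_so_far / automatable_tests:.0%} of automatable tests")
# -> Automated 75% of automatable tests

# Measure 2: EMTE (Equivalent Manual Test Effort).
manual_minutes = {              # assumed time each test would take manually
    "login_smoke": 5,
    "customer_edit": 12,
    "report_export": 20,
}
runs_this_week = {              # how many times each automated test ran
    "login_smoke": 40,          # e.g. run on every commit
    "customer_edit": 10,
    "report_export": 10,
}
emte_minutes = sum(manual_minutes[t] * runs_this_week[t] for t in runs_this_week)
print(f"EMTE this week: {emte_minutes / 60:.1f} hours of equivalent manual effort")
# -> EMTE this week: 8.7 hours of equivalent manual effort
```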
Conclusion. So if your manager asks you “What percent of the tests have you automated?”, you need to ask something like: percent of what? Out of existing tests that could be automated, or that we decide to automate? What about additional tests that would be good to automate but that we aren’t running now? Do you want to know about progress in time towards our automation goal, or literally only the number of tests? The two will differ, because automated tests are structured differently to manual tests.
It might be a good idea to find out why he or she has asked that question – what are they trying to see? They need some visibility into automation progress, and it is up to you to agree on a measure that is useful and helpful, honest, and reasonably easy to track. Good luck! And let me know how you measure your progress in automation!

If you want more advice on automation, see the wiki that I am doing with Seretta Gamba at

Categories: Blogs

Euro Stars Oft Ware Testing

Hiccupps - James Thomas - Sat, 10/31/2015 - 07:15
If you give yourself some space you can usually find another perspective.

I'm British, and so European, but wouldn't call myself a star and I rarely look at warez these dayz.  I am speaking at EuroSTAR next week, though:
Categories: Blogs

Do You Know How to Wow Mobile Users? StarWest Interview with Speaker Ron Anderson

uTest - Fri, 10/30/2015 - 21:39

This year at StarWest I was lucky enough to sit down with Ron Anderson and hear his opinions on mobile testing and how it can improve the testing space. He believes there should be more emphasis on the user experience, achieved by testing with the user in mind. uTest is a great resource for […]


Categories: Companies

Jenkins 2.0 Proposal: Improved "Out of the box" user experience

This week we have featured a number of proposals for what we would like to see in "Jenkins 2.0", the vision of which is to make Jenkins users more efficient, productive and happy. We started with some more internally facing changes and have slowly progressed from the "inside-out" to today's topic: improving the out-of-the-box user experience. That is to say, the experience that a brand-new Jenkins user has when getting started with the server.

Just to recap, so far we've reviewed:

The subject of today's proposal is captured in JENKINS-31157, which, like yesterday's proposal, contains a few issues linked from it with more details.

At a high level, the problem to be solved is:

When a new user installs Jenkins, they are greeted with the main, empty dashboard, which suggests that they "create jobs." This makes no mention of plugins or of the configuration options that are relevant to helping the user make Jenkins match their needs.

In past and current versions of Jenkins, if you know what you're looking for it's relatively easy to move around the interface. If you've never used Jenkins before, it can be very challenging to find your way around or even know what it is possible to do with Jenkins.

The proposed changes aim to address this initial confusion:

Instead of changing the post-install defaults, which may not properly represent the user's needs, the first-time user experience should help guide the user through configuration and plugin installation quickly so they can use Jenkins for their needs. Effectively it should be as easy as possible for a user to arrive at a good configuration for their usage.

Jenkins contributor Tom Fennelly, who has led this discussion on the mailing lists in the past, has posted a good prototype screencast of what some of this might entail:

Providing Feedback

We're asking you to read the issues linked from JENKINS-31157 and comment and vote on those issues accordingly.

If you have ever logged in to the issue tracker or the wiki, you have a "Jenkins user account" which means you'll be able to log into the issue tracker and vote for, or comment on the issue linked above.

(note: if you have forgotten your password, use the account app to reset it.)

We're going to review feedback, make any necessary adjustments and either approve or reject the proposal two weeks from today.

This concludes this week's blog series highlighting some of the Jenkins 2.0 proposals we felt were important to discuss with the broader Jenkins user audience. Many of these, and other minor proposals, can be found on the Jenkins 2.0 wiki page.

Categories: Open Source

EuroSTAR Conference, Maastricht, Netherlands, November 2-5 2015

Software Testing Magazine - Fri, 10/30/2015 - 17:00
The EuroSTAR Conference is a four-day conference focused on software testing. Global and European software testing experts propose a program full of tutorials and presentations. In the agenda of the EuroSTAR Conference you can find topics like “Testing in the World of Startups”, “Testing the New Web – Tackling HTML5 with Selenium”, “Adapting Automation to the Available Workforce”, “Beacons of the Test Organisation”, “How We Transformed ...
Categories: Communities

Microsoft Releases Autocomplete for Animation

uTest - Fri, 10/30/2015 - 16:07

If you’re at all interested in drawing, designing, or graphics, then Microsoft just changed your virtual design world. Four days ago, Microsoft introduced Autocomplete for Animation. Yes, that’s right: you can now fill in all of those pesky lines or dots instead of manually running over them. As artists know, taking the time to […]


Categories: Companies

Regression Testing is Not An Option (It’s a Necessity)

Testlio - Community of testers - Fri, 10/30/2015 - 03:45

What is regression testing? Nothing is ever perfect on the first try. Apps are no exception. Most developers have a backlog of bugs waiting to be fixed. But what about bugs that just keep popping up, over and over again? Don’t worry – you’re not getting deja vu. A bug you see more than once […]


Categories: Companies

(10/31) Weekend Trivia Series!

uTest - Thu, 10/29/2015 - 23:06

Thanks to all those who participated last week. Make sure you check out our Twitter, Facebook, Google+ and LinkedIn pages at 6pm (EST) tonight, where we will announce last weekend’s winner. The winner will also be posted in this blog tomorrow. For those of you who didn’t take part last week, here is a quick background of […]


Categories: Companies

Jenkins 2.0 Proposal: UX Improvements (Part One)

We have been featuring a few proposals this week for what "Jenkins 2.0" is going to include. Today we'll be diving into the most noticeable changes being proposed for Jenkins 2.0: the User Experience (UX) improvements.

Thus far in this blog series we have reviewed proposals covering:

The UX improvements being proposed aren't necessarily as uniform as the proposals from earlier in the week but represent a large amount of prototype and exploratory work done by folks like Tom Fennelly, Gus Reiber and a few others. Those following the dev list may have already seen some of these proposals in some of the "mega threads" that we have had discussing potential UI/UX improvements previously.

The improvements proposed for 2.0 can be found under JENKINS-31156. The most promising proposal under this issue is to update the plugin manager experience.

Another very important proposal for 2.0 worth mentioning is the proposal to update the UI to work well on mobile devices.

Providing Feedback

We're asking you to read the issues linked from JENKINS-31156 and comment and vote on those issues accordingly.

If you have ever logged in to the issue tracker or the wiki, you have a "Jenkins user account" which means you'll be able to log into the issue tracker and vote for, or comment on the issue linked above.

(note: if you have forgotten your password, use the account app to reset it.)

We're going to review feedback, make any necessary adjustments and either approve or reject the proposal two weeks from today.

Stay tuned for tomorrow's post covering the remainder of the proposed user experience changes!

Categories: Open Source

INVISION Finds Their Agile Match in TestTrack

The Seapine View - Thu, 10/29/2015 - 19:30
Leading provider of multi-platform advertising sales software is committed to development agility and product innovation—TestTrack helps them keep that commitment.

When INVISION, a provider of advertising sales management software for media companies, made the switch from Waterfall to Scrum in 2010, they were serious about embracing the spirit of Agile. Serving the fast-paced media industry, the company takes product innovation and the trust of their clients seriously. So they were disappointed when the first application lifecycle management (ALM) tool they chose to support their Agile process just couldn’t keep pace with the need for continuous improvement that drives INVISION’s development teams.

Just a few years after they had made the switch to Scrum, INVISION hit a wall with the limitations of their Agile ALM tool. What’s more, the multitude of development tools they were using simultaneously—SharePoint, Excel, Access databases, customer service tools, and so on—was causing redundancy, frustration, and confusion among teams. INVISION CIO John Parks decided it was time to choose a new Agile ALM tool, ideally one that could streamline their development processes and tools while clearing the road for future adaptability.

That would be a tall order to fill. Or so it seemed.

INVISION’s search for a new ALM solution included a re-assessment of their current tool, other third-party software, and TestTrack. INVISION had been using TestTrack for years, but simply as a replacement for defect tracking spreadsheets. The in-depth re-assessment of TestTrack surprised Parks—it became clear to him that the answer to INVISION’s Agile ALM roadblocks and tool redundancies was already on site.

Read the customer story to learn more.


Categories: Companies
