
Feed aggregator

Ranorex Announces Neotys as Technology Partner

Software Testing Magazine - Fri, 08/14/2015 - 17:37
Ranorex has announced a new partnership with Neotys, a provider of stress and load testing tools for rich Internet applications (RIAs). Since 2005, Neotys has helped over 1,600 customers in more than 60 countries enhance the reliability, performance and quality of their applications. The NeoLoad load testing solution for RIAs offers best-in-class capabilities, flexibility and efficiency with infinite scalability via the cloud while providing practical analyses and full support for all new technologies including HTML5, WebSocket, LCDS and RTMP. All this is backed by a dedicated team of Neotys professionals, providing ...
Categories: Communities

At-a-Glance and Early-bird Prices Now Available for PNSQC

Software Testing Magazine - Fri, 08/14/2015 - 17:21
The Pacific Northwest Software Quality Conference (PNSQC) has announced that the At-a-Glance program and the early-bird prices are now available. The At-a-Glance has complete details on how we plan to get you brewing. With tracks ranging from Process and Performance to Security and Continuous Integration, there will be plenty of options to follow your pathway to quality at the Pacific Northwest Software Quality Conference. Technical Program highlights include: * Brewing a User-Centric Identity Solution * Brewing Analytics Quality for the Cloud Performance * Brewing Java Quality in Android Robots * Plus Code Clubs, Skunky Testing and Jenkins ...
Categories: Communities

Inflectra Announces Spira Build Server Plugin for MS TFS

Software Testing Magazine - Fri, 08/14/2015 - 17:12
Inflectra has announced the release of the Spira Build Server Plugin for the Microsoft Team Foundation Server (TFS) build server. This plugin allows you to integrate the build management features of TFS with Spira. This is in addition to our existing source code plugin for TFS and Visual Studio IDE integration. How does SpiraTeam work with Build Servers? SpiraTeam includes the ability to integrate with a variety of continuous integration / automated build servers so that the results of automated builds can be displayed in SpiraTeam linked to the associated release or ...
Categories: Communities

Reducing Regression Execution Times

Sauce Labs - Thu, 08/13/2015 - 17:00

We all know the saying “time is money.” QA managers are constantly under pressure not only to deliver high-quality software products, but also to do so within time constraints.

Regression testing is a vital component of any software development life cycle, ensuring that no new errors are introduced as a result of new features or the correction of existing bugs. Every time we modify existing source code, new and existing test cases need to be executed across different configurations, such as operating systems and platforms. Testing all of these permutations manually is simply not cost- or time-effective, and is also inconsistent. Automated regression addresses these challenges. But as feature coverage and permutation testing grow, execution times rise to a level that is no longer acceptable for delivering high-quality products on a tight schedule. Here are a few ways to improve execution time:
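As a concrete illustration of how configuration permutations multiply, here is a minimal Python sketch. The platform and browser names are illustrative, not taken from the article:

```python
# Enumerate the OS/browser permutations a regression suite must cover.
# Even a tiny matrix grows quickly: 3 OSes x 3 browsers = 9 runs per script.
from itertools import product

operating_systems = ["Windows 10", "OS X 10.11", "Ubuntu 14.04"]
browsers = ["Chrome", "Firefox", "Safari"]

def build_matrix(oses, browsers):
    """Return every OS/browser pair a regression run must exercise."""
    return [(os_name, browser) for os_name, browser in product(oses, browsers)]

matrix = build_matrix(operating_systems, browsers)
print(len(matrix))  # 9 combinations, before you even add browser versions
```

Multiply that by browser versions, screen sizes, and locales, and manual execution quickly becomes infeasible.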

1. Introduce Continuous Integration (CI) Tools:

When going from a few scripts to a few thousand scripts, you begin to notice some growing pains. I frequently notice engineers manually executing script batches one at a time in a serialized manner and monitoring them as they execute. This consumes both time and resources. This becomes even more challenging when regression runs overnight or during weekends, with no one available to troubleshoot. As your automation grows, you need to have an infrastructure in place that allows you to scale and be able to run regressions unattended.

I recommend using a Continuous Integration (CI) tool such as CircleCI or Jenkins to manage automated regression executions. These tools can help you bring up virtual machines, start regressions, handle a more dynamic queuing mechanism, monitor runs, and warn you when something goes wrong and requires manual intervention. They can also trigger a recovery mechanism if part of the environment becomes non-operational.
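As a small illustration of driving a CI tool from a script, a regression job can be queued remotely. The sketch below assumes Jenkins's standard remote build-trigger endpoint (`POST /job/<name>/build`); the host name and job name are placeholders:

```python
# Sketch: composing the URL used to trigger a Jenkins job remotely.
# A POST to this URL (with credentials) queues the job; the actual HTTP
# call is omitted here since it depends on your auth setup.
from urllib.parse import quote

def build_trigger_url(base_url, job_name):
    """Compose the remote build-trigger URL for a Jenkins job."""
    return "{}/job/{}/build".format(base_url.rstrip("/"), quote(job_name))

url = build_trigger_url("http://ci.example.com:8080", "nightly-regression")
print(url)  # http://ci.example.com:8080/job/nightly-regression/build
```

A nightly cron entry or an upstream job can hit this URL so regressions start unattended.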

2. Use CI Tools not Only to Run Scripts, but also to Automate All Manual Steps!

When it is time to execute regression, there are a series of steps that need to be executed before a script can be run, such as:

  • Loading the new software to be tested
  • Updating your scripts with the latest version
  • Configuring servers
  • Executing scripts
  • Posting results
  • Communicating failure details

Very frequently we see customers using CI only to run scripts, relying on a manual process for the remaining steps, which is a very time-consuming approach.

I recommend minimizing manual intervention, and trying to achieve an end-to-end automated process using a CI tool that allows you to monitor and orchestrate the different tasks.
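The manual steps listed above can be sketched as a single orchestrated pipeline. Every helper below is hypothetical, standing in for calls to your deployment, version control, and reporting tools (or for stages in a CI job):

```python
# Sketch: an end-to-end regression pipeline covering all the manual steps,
# not just script execution. Each helper is a placeholder.
executed = []

def load_build():        executed.append("load new software")
def update_scripts():    executed.append("pull latest scripts")
def configure_servers(): executed.append("configure servers")
def run_scripts():       executed.append("execute scripts")
def post_results():      executed.append("post results")
def report_failures():   executed.append("communicate failures")

PIPELINE = [load_build, update_scripts, configure_servers,
            run_scripts, post_results, report_failures]

def run_regression():
    for step in PIPELINE:
        step()  # a real pipeline would stop and alert on failure

run_regression()
```

The point is that every step lives in the pipeline definition, so nothing depends on a person remembering to do it at 2 AM.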

3. Introduce Dynamic Timeouts

Timeouts are a necessary evil when writing automation. They allow you to slow down your automation to simulate more human-like interaction, or simply to wait for something to happen before taking the next step. When abused, this practice can increase execution time substantially. (I have seen regressions run 3 times slower before adjusting timeouts – no kidding!)

I recommend using dynamic timeouts. A dynamic timeout waits up to a fixed amount of time, but returns as soon as the expected event occurs, which reduces waiting time. The effectiveness of dynamic timeouts depends on your implementation, but in most cases they beat hard-coded waits.
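A generic dynamic timeout is only a few lines; this is a sketch of the idea in plain Python (Selenium users get the same behavior built in via explicit waits such as `WebDriverWait`):

```python
# Sketch: a dynamic timeout that returns as soon as the condition holds,
# instead of always sleeping for the worst case.
import time

def wait_until(condition, timeout=10.0, poll=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    return False

# Returns almost immediately when the condition is already true,
# rather than burning the full timeout like a hard-coded sleep would:
ready = wait_until(lambda: True, timeout=5.0)
```

Replacing a fixed 5-second sleep with a wait like this saves nearly the full 5 seconds on every step where the event fires quickly.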

4. Unlock the Power of Parallel Execution and Virtualization

Once you have a seamless end-to-end automated process using a CI tool, your next productivity leap could come from increasing capacity by introducing parallel execution.

To perform parallel execution, you increase the number of physical or virtual machines and use a CI tool to manage dynamic queuing and load balancing. This can cut execution time roughly in proportion to the number of VMs or servers you integrate.
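A minimal sketch of parallel execution, using threads to stand in for the VMs or grid nodes a CI tool would manage (the batch names and result format are illustrative):

```python
# Sketch: fanning a suite of test batches out across workers.
# On a real CI grid the "workers" would be VMs or remote browser sessions.
from concurrent.futures import ThreadPoolExecutor

def run_batch(batch):
    """Stand-in for executing one batch of test scripts."""
    return (batch, "passed")

batches = ["smoke", "ui", "api", "perf"]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_batch, batches))

# With N equally sized batches and N workers, wall-clock time approaches
# the duration of the slowest batch rather than the sum of all of them.
```

The dictionary of per-batch results is also a convenient unit to post back to a central results tracker.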

5. Build a Fully Integrated Automation Framework

For an automation framework to be efficient, all of its components must be fully integrated and talking to each other. Missing or partially integrated components add extra time to the whole regression process, because you have to execute those steps manually.

I recommend integrating a centralized tool to keep track of results. It can be as simple and inexpensive as automatically posting results to a Google spreadsheet, as flexible as a cloud-based test case management tool consumed as a service and paid per engineer, or as complete as a licensed tool you install yourself. You also need to analyze each component in your automated regression framework and make sure it is fully integrated, to achieve higher levels of efficiency and effectiveness.

6. Combat High Script Failure Rates

If script failure rates are high, time spent in failure analysis can invalidate automated regression, leading to unacceptable time loss. I have personally seen cases where automated regression becomes irrelevant due to this problem. A high failure rate (anywhere from 25-50%) will make it extremely difficult to detect new issues due to changes in code.

Before you continue to add more scripts to an automated regression environment that has a high failure rate, perform a root cause analysis of the failures and focus on fixing them, ideally until the failure rate falls below 5%.
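That gate can be made explicit in code. In this sketch the 5% threshold comes from the text; the result format and function names are illustrative:

```python
# Sketch: gating new script additions on the suite's current failure rate.
def failure_rate(results):
    """results: list of (script_name, passed) tuples."""
    failures = sum(1 for _, passed in results if not passed)
    return failures / len(results) if results else 0.0

def healthy_enough_to_grow(results, threshold=0.05):
    """Only add new scripts once the failure rate is under the threshold."""
    return failure_rate(results) < threshold

run = [("login", True), ("checkout", True), ("search", False), ("profile", True)]
print(failure_rate(run))  # 0.25 -- fix root causes before adding scripts
```

A check like this can run at the end of every regression and fail the build (or open a ticket) when the suite itself needs maintenance.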

Conclusion

Many factors can inflate the time spent on automated regressions, but done right, automated regression delivers high levels of effectiveness.

Israel Felix has more than 20 years of experience working in the technology industry serving multiple leadership roles within development and test groups. The last 15 years have been focused on leading key functional, regression, system, API and sanity test in both automation and manual environments – in technologies such as Switching, Routing, Network Management, Wireless, Voice, and Cloud-Based Software. He has also managed large global test groups across US, India, Thailand and Mexico.

Categories: Companies

Citrix Session Reliability: Does it Cloud Your Network Insight?

In this post, I won’t discuss the merits – good or bad – of Citrix’s Session Reliability feature; that topic is best left to Citrix engineers. Instead, I’ll focus on the importance of understanding and managing the performance of the underlying network to ensure the best possible end-user experience, with an emphasis on potential connectivity […]

The post Citrix Session Reliability: Does it Cloud Your Network Insight? appeared first on Dynatrace APM Blog.

Categories: Companies

Brand New Personal Certification Page

Ranorex - Thu, 08/13/2015 - 12:00
We have great news for all Certified Ranorex Professionals: there is a brand new personal certification page on the Ranorex website for verifying certification status, and you can add a link to your email signature using the official "Ranorex Certification Email Banner".

Have a look at our certification page for more information.

If you haven’t already done it, now is the best time to get yourself certified.

Categories: Companies

Update: Wiki and issue tracker outage

I recently wrote about the two day outage of our wiki and issue tracker:

While this was a rather lengthy outage, it could have been much worse. We lost none of the data, after all.

OSUOSL have since published their post mortem. I was really wrong about not losing any data:

A further complication was that our backups were pointed at mysql2, which was out-of-date with mysql1, due to the initial synchronization failures. Fortunately, we had the binary logs from the 17th through the 30th. This means that though most data could be restored, some data from between the 15th and the 17th was lost.

For our issue tracker, that means that issues JENKINS-29432 to JENKINS-29468 were lost, as well as comments posted from about July 15 12:20 PM to July 17 2 AM (UTC). We know this thanks to the jenkinsci-issues mailing list where the lost issues and comments can be looked up for reposting.

We unfortunately don't have such a record from our wiki.

Categories: Open Source

New! Support for OS X 10.11 and iOS 8.3, 8.4 & 9.0

Sauce Labs - Thu, 08/13/2015 - 01:46

In our continuing effort to make Sauce Labs the best place to test, we’ve just added some additional OSes and browsers. In addition to Windows 10 and the Edge browser we announced last week, today we are extending the platforms we support to include released and upcoming iOS and OS X operating systems:

OS X 10.11 El Capitan (beta)
iOS 8.3 and 8.4
iOS 9 (beta)
Safari 8.0.7

To use any of these platforms visit our platforms configurator.

When we ask our users what it is they love about Sauce, one of the most common responses is the wide variety of platforms we provide. In an ecosystem of increasing complexity, we know our customers have a lot of devices and platforms they need to test against. Our humble aspiration is to make your life just a little bit easier by having the platforms you need when you need them.

Sign in to start testing

Categories: Companies

Are you fixing your development problems, or just slapping on another Band-Aid?

The Seapine View - Wed, 08/12/2015 - 22:25

Many companies still develop products piecemeal, managing development items independently of one another. They know it’s not efficient. It doesn’t foster innovation. It prevents them from connecting the dots and making good use of their development data.

But they do it anyway.

Why? Sometimes, it’s just easier to slap a Band-Aid on a problem. They tell themselves they’ll look at the bigger picture when they have more time—as if time is just going to magically appear one day.

They tell themselves that Microsoft Office products and open source tools don’t cost anything—as if the loss of productivity and the overall risk to product development projects isn’t costly. Even if all of your technology investments have open APIs, who is going to code and maintain them—your IT department?

I’m talking about companies who develop and market products that have some form of complexity, risk, or innovation. On some level, these products will have to integrate an ever-growing array of new technology: software, browsers, platforms, operating systems, languages, connected devices, and materials.

The old assumption that integration or an integrated solution is too difficult to scope or validate is false. Arguably, your technology framework is already outdated and doesn’t communicate with other technology investments—forcing you to manage several tools instead of a single integrated solution.

I thought companies made money from selling their own products, not managing products made by others.

Isolation Doesn’t Foster Innovation

How often have you heard these excuses?

  • I didn’t know…
    • you were done.
    • you weren’t done.
    • you needed my input.
    • you needed my signature.
    • the requirement changed.
    • my change impacted you.

Without connected points of communication—transfers, notifications, escalations, etc.—work rarely gets done on time. You’ll never have time to innovate if a lack of communication always has you struggling to meet deadlines.

These connected points of communication may change based on the nature of the project and available resources. Regardless, companies need a business tool that adapts to the way they want to work and communicate, both internally and externally.

If you’re not streamlining communication and work, you will always be stuck in a reactive mode and have no time to innovate or test new ideas.

Disconnected Data Limits Analysis and Effective Reuse

Anyone can manage and file data, but making sense of it is a whole other story. If your project data, reports, and artifacts aren’t connected, how can you analyze the data to see the impact, risk, and gaps?

How can you reproduce success, if you don’t know how or what got you there?

Streamlining and automating processes is good, but it can be ineffective if the data isn’t connected. Because most siloed tools and systems don’t facilitate information transfers, they are not good at providing effective reuse or evolution of data and work. Integrated tools, on the other hand, allow companies to automate and evolve these tedious manual tasks.

Ideally, companies are looking at ways to make better business decisions and provide continuous improvement. So, how can they?

Traceability Is Not a “Nice to Have”

Traceability should not be a “nice to have” item hidden in an RFP or a simple desired feature from regulatory. It should be one of the most heavily weighted requirements because traceability ultimately provides the greatest value to your product development projects.

  • It ensures that people are on the same page.
  • It facilitates and fosters innovation.
  • It shows you the overall impact of change.
  • It identifies where the gaps are.
  • It allows for effective reuse and best practice development.
  • It tells you how close or how far you are from completing a project.
  • It provides the reporting that most regulatory agencies require.

If you start with the end in mind, you just might find that traceability is required for successful projects. Many companies can manage assets, but can they make sense of their data and relationships? Can traceability be overdone? Absolutely. But companies can avoid that trap by developing an effective traceability strategy.

Conclusion

While it may be easier to slap a Band-Aid on a development problem, the trouble is that you’re still bleeding. Maybe the damage is hidden by your temporary fix, but it’s still there, and it’s still costing you big money in lost productivity and an inability to innovate.

It’s time to think about your development process holistically. Rip off the Band-Aid and spend the time really examining the problem. Then fix it right.

You’ll be pleasantly surprised how much better a healthy development platform fosters innovation.

The post Are you fixing your development problems, or just slapping on another Band-Aid? appeared first on Blog.

Categories: Companies

How to configure Selenium Grid?

Testing tools Blog - Mayank Srivastava - Wed, 08/12/2015 - 19:54
Here I noted down some quick steps to configure Selenium Grid on Windows. I am assuming that you are aware of Selenium and its features. So here we go: Download the Selenium Standalone JAR file from here. Dump it somewhere on the host and node machines. Open CMD on the host machine and go to the location where you […]
Categories: Blogs

uTest Platform Updates for August 12th, 2015

uTest - Wed, 08/12/2015 - 17:03

Each week, the Community Management team will provide updates on platform changes and bug fixes that are either upcoming or ready to be deployed. Here are the platform updates for the week of August 10th, 2015. Enhancing “+1s” (Issue Confirmation) The “+1” feature allows you to indicate that you also encountered a bug if it […]

The post uTest Platform Updates for August 12th, 2015 appeared first on Software Testing Blog.

Categories: Companies

Advice Posts from Veterans

uTest - Wed, 08/12/2015 - 15:33

At uTest we want to provide all of the support we can for our new testers. We are very lucky to have an engaged community that is also committed to supporting beginners. A forum’s post started by a tester asks what the top worries and concerns are for new testers. A few veterans contributed responses in […]

The post Advice Posts from Veterans appeared first on Software Testing Blog.

Categories: Companies

Look, I am Your Father

Hiccupps - James Thomas - Wed, 08/12/2015 - 07:22
My youngest daughter has recently started using Powerpoint at school and set herself a project of making a newspaper in it. She asked members of the family for contributions and my mum and dad went crazy (yeah, they're retired) and wrote half a dozen articles. They haven't got Powerpoint so they used Presentation and emailed their pieces over to me as an OpenDocument file.

I wasn't surprised that Powerpoint warned there might be problems opening it as I've seen similar things with Word. (My dad sends me all of his tech support questions as documents created in Writer ...) So when it showed up as a single slide with masses of text, it was no big deal.

I told Dad and got an email back the following day saying he'd been trying desperately to get it onto multiple slides but to no avail. Perhaps I hadn't explained well enough that I thought it was Powerpoint that had corrupted his content. But if it hadn't started as one slide, what was he doing?

I called him up. It turns out that his mental model of how Presentation worked was that it was essentially a word processor and when he got to the bottom of a "page" a new "page" would be started for him.

Yes, of course! Why not?

I find it delightful, revealing and humbling to get a glimpse inside the head of a user with a perspective so different from my own. Although on the positive side, I guess one day it'll be my head too.
Image: Jimmy and Granddad
Categories: Blogs

Top Paid Software Testing Projects at uTest: Week of August 10

uTest - Tue, 08/11/2015 - 23:01

As we enter the dog days of summer, tester participation tends to slow down a little. Despite this, we continue to receive a high number of projects and requests from our customers, all of which we could use your help on! This week’s in-demand projects cut across a wide array of categories and testing types, from […]

The post Top Paid Software Testing Projects at uTest: Week of August 10 appeared first on Software Testing Blog.

Categories: Companies

Platform generalists versus framework specialists

Jimmy Bogard - Tue, 08/11/2015 - 21:50

A trend I’ve noticed, especially in Javascript/front-end jobs, is emphasizing, requiring, even titling jobs after specific Javascript frameworks. “Ember.js engineer needed”. “Angular programmer”. This raises a few thoughts for me:

  • Is the framework so complicated you need someone with extensive experience building non-trivial apps?
  • Is your timeline so short you don’t have time to teach anyone a framework?

Earlier in my career, when WPF was announced, I had a choice about investing time in learning it. It was the obvious successor to WinForms, and *the* way to build thick-client apps in .NET, if not Windows, going forward. I hesitated, and decided against it after learning more about the technology and seeing how proprietary it was.

Instead, I doubled-down on the web, learning server-side frameworks to help build web apps more than ever. ASP.NET MVC embraced the web, unlike WebForms, which had its own invented abstractions. All the WebForms knowledge I had is more or less wasted for me, and I vowed to never again become so engrossed with a technology that I become specialist in a framework at the expense of understanding the platform it’s built upon.

I don’t want to be an “Ember” guy, an “Angular” dev, an “Aurelia” expert. I want to be a web expert, a messaging expert, a REST expert, a distributed systems expert. I saw this most clearly recently when helping out on a Spring MVC project. Because I understood web platforms, I could pick up the framework easily. The parts of Spring that were hard were hard because of the complexity of the framework, and those were the parts I was far less inclined to become an expert in.

One of the biggest reasons I’ve shied away from new, all-inclusive, heavyweight frameworks is it’s a tradeoff for me of potential productivity and administrivia knowledge. I’ve only got so much room in my head, and I’d much rather it be occupied with more important things, like vintage Seinfeld/Simpsons quotes.


Categories: Blogs

Software Projects Suffer From Poor Oversight Methodology

uTest - Tue, 08/11/2015 - 19:11

A large software project funded by the British government recently experienced massive setbacks partially due to an ill-suited development methodology. Perhaps more so than any other industry, the IT sector has repeatedly found itself subject to business leaders looking to implement the latest hyped trends without properly considering how these implementations will fit into the […]

The post Software Projects Suffer From Poor Oversight Methodology appeared first on Software Testing Blog.

Categories: Companies

New CoAP Testing Plugin for Ready! API Provided by SmartBear

Software Testing Magazine - Tue, 08/11/2015 - 17:22
SmartBear Software has released a new plugin for Ready! API that supports CoAP (Constrained Application Protocol) for Internet of Things (IoT) testing. The plugin adds new test steps to support CoAP testing, which furthers SmartBear’s commitment to the IoT industry. SmartBear’s Ready! API is the first fully integrated, extensible and affordable platform to help development, testing and operations teams build reliable, scalable and secure APIs. There are millions of devices in use today, many of which use different protocols for their communications. Hence, one of the biggest challenges with the IoT ...
Categories: Communities

Why Manual Testing Helps Your Release

Sauce Labs - Tue, 08/11/2015 - 17:00

Will we ever truly be at 100% automation?  I hope not. Of course automation is critical in implementing Continuous Integration and Delivery, but there are just some things that you can’t leave to a machine. Human evaluation is important.

In a world where we are looking to release faster and faster, why would we want manual testing?  Let’s take a look at some of the things you may want to do that automation can’t, and how manual evaluation helps us deliver the right product.

The human aspect

Several years ago, our UX team kept asking, “Is it delightful?”  I’ve worked on many features that I truly felt would make for a better experience in education.  There are several that, frankly, I just couldn’t stand. We just weren’t building the right product sometimes; even if all tests passed, and there were no bugs — if I didn’t like using it, I found myself asking, “How would users feel?” I have to say, I’m fascinated by the human factor and evoking feelings (for better or worse) when testing software.

As a consumer of software, sometimes I find myself thinking, Wow, did ANYONE look at this? (For example, I’m on the Board of Directors for my Homeowners’ Association, and the software we use to track documents, get assessments, and so forth just makes me want to cry.)

I’ll be honest – sometimes it is difficult to see how usable something is until there is something to use it for. Hopefully, though, we can spot this early as we define acceptance criteria and are evaluating the workflows, specs, wireframes, or prototypes.

Be like Lewis and Clark

The obvious manual testing activity that should be at the top of everyone’s list is exploratory testing. In an ideal world, the features themselves are completely automated, and development is done when all tests pass. This is fantastic, but what if that were it? Lewis and Clark had specific goals, and reported what they found along the way. Exploratory testing is similar to me: I start with a charter (or a goal) from a user perspective, and report what I find along the way. Perhaps a feature works fine in unit and integration tests, but once I start looking at it myself in an end-to-end workflow, I think of other scenarios that we didn’t consider when writing scripts.

One of the biggest successes I’ve seen lately is with a bug bash.  This activity is open to people even outside of our dev teams. As features complete development and pass initial rounds of testing, we open them up to a bug bash. People from QA, Engineering, UX, Product Management, Support, and beyond have been involved, getting a wide range of perspectives and scenarios that perhaps were not thought of before.

Use exploratory testing to your advantage — there is no one way to do it. Do what works best for you! Get different perspectives, think like the user, and see how the product makes you feel.

Testing for everyone

Another area of testing that I’m very passionate about is Accessibility. Not everyone uses the Web in the same way.  (To see some of the many ways in which someone may be hindered, read more in this fascinating post: https://the-pastry-box-project.net/anne-gibson/2014-july-31).

Yes, there are tools and scans that can be used to check for standards. But if I don’t check for accessibility manually, how can I experience whether keyboard navigation occurs in a logical order when tabbing through, or know (by seeing it in person) that what the screen reader calls out makes sense?

To celebrate Global Accessibility Awareness Day, we recently held a bug bash where we asked users not to use a mouse. While most features held up well to the challenge, it gave us more perspective on trying to build the right product, and building one that is delightful to use. (To learn more, read: http://blog.blackboard.com/mouse-free-an-accessibility-challenge/).

Mobile everywhere

Consider that we live in a world where most people are accessing your software on their mobile devices.  Now consider what you look for in an app. Personally, there have been moments where I’ve tried out five or six running apps — only to quickly delete most of them after about ONE MINUTE if they aren’t user friendly or don’t meet my needs. ONE MINUTE. If you don’t take time to run through your app and explore how it is to use, why would your clients? If I can’t figure out how to do something quickly, I’m pretty much done with that app. DELETE. (Don’t let that be you.)

Putting it all together for continuous delivery

I hope it’s pretty clear that manual testing can definitely help you deliver the right product to users, and a fun one at that. Of course you don’t want to save manual testing for too late (costs keep rising the later issues are reported), so try to find a happy balance from your feature/dev branches to the test environment, and you will start seeing the usability much earlier. Although Continuous Delivery provides a constant feedback cycle and the opportunity to learn and fix issues quickly based on that feedback, use manual testing to your advantage and make clients happy from that first (or thousandth) release!

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices.  Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen.  In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.

T: @aahunsberger
L: https://www.linkedin.com/in/ashleyhunsberger

Categories: Companies

QASymphony Announces Visual Mapping Tool for Agile Testers

Software Testing Magazine - Tue, 08/11/2015 - 16:46
QASymphony has announced the general availability of qMap, the industry’s first visual mapping solution for Agile testers of cloud, mobile, big data and IoT applications. In beta since April, qMap leverages data from QASymphony’s exploratory testing tool, qTest eXplorer, providing visual insight into features tested, by whom, bugs, testing time and defects related to critical functions of the application. qMap addresses a significant issue for software developers: the need to quickly identify high-risk areas in an application and resolve them before the release. qMap does this by tracking and analyzing testing ...
Categories: Communities

The Evolution of APM

The Internet made global local, Social changed the way we communicate and Mobile made it possible to do everything anywhere. The collection of these efforts has forced more innovation in a shorter period of time than anywhere else in human history. It’s also forced an evolutionary change in customer expectations and a cascading demand on […]

The post The Evolution of APM appeared first on Dynatrace APM Blog.

Categories: Companies
